Now that everyone's voted in the ChatGPT poll, I thought I could bring this news to your attention without influencing the verdict.
Some research from Purdue University shows that 52 percent of ChatGPT’s answers to technical questions are incorrect. You’d get the same level of accuracy by flipping a coin.
However, what’s really interesting is that when asked to judge the answers, people preferred ChatGPT’s incorrect answers to correct ones written by humans!
ChatGPT’s quasi-authoritative style is to blame.
“It is apparent,” said the researchers, “that polite language, articulated and text-book style answers, comprehensiveness, and affiliation in answers make completely wrong answers seem correct.”
Wow.