Nassim Nicholas Taleb Critiques OpenAI’s ChatGPT: AI Chatbot Requires Expertise, Makes Hidden Errors

Benzinga – by Ananya Gairola, Benzinga Staff Writer.

Renowned author Nassim Nicholas Taleb, best known for his book “The Black Swan,” has once again expressed doubts about OpenAI’s ChatGPT. According to Taleb, the AI-powered chatbot comes with a catch: it is only useful if you already possess deep knowledge of the subject matter.

Taleb recently shared his “verdict” on ChatGPT, stating that the chatbot often makes mistakes that are discernible only to an expert in the field. As an example, he cited a linguistic error in which ChatGPT incorrectly interpreted the word “bar” as an Aramaic term for “son” rather than as a Yiddish pronunciation shift.

With this in mind, Taleb questioned the value of using ChatGPT if one already has a good understanding of the subject. He also revealed that he uses the chatbot to draft “condolence letters,” but noted that it fabricates quotations and sayings.

In response to Taleb’s critique, some individuals suggested viewing ChatGPT as a sophisticated typewriter rather than an infallible source of truth. They emphasized that while the chatbot may not be the most intelligent assistant, it can be directed and corrected to expedite work processes.

On the other hand, some agreed with Taleb, stating that ChatGPT is too risky for certain assignments, particularly those that require a high degree of accuracy.

Taleb’s criticisms are not new; he has previously highlighted ChatGPT’s limitations, such as its inability to grasp historical ironies and nuances and its lack of wit in conversation. There have also been reports of ChatGPT fabricating nonexistent legal cases, with negative consequences for the users involved.

It’s worth noting that this issue extends beyond ChatGPT, as other generative AI models, such as Microsoft Bing AI and Google Bard (now called Gemini), also tend to generate false or fictional information with unwavering conviction.

While AI technology continues to advance, challenges like these remain unresolved. OpenAI’s ChatGPT is a valuable tool, but it is best approached as an aid that still requires the user’s input and guidance.

For more information on consumer tech, visit Benzinga’s coverage on the subject.

Read Next: Elon Musk Questions OpenAI’s Path After Nvidia’s AI Supercomputer Donation to ChatGPT-Maker in 2016


An FAQ on OpenAI’s ChatGPT

Q: Who is Nassim Nicholas Taleb and what are his doubts about ChatGPT?
A: Nassim Nicholas Taleb is a renowned author best known for his book “The Black Swan.” He has expressed doubts about ChatGPT, stating that it can make mistakes that only experts in the field can detect, and cited a linguistic error made by the chatbot as an example.

Q: What was the linguistic error mentioned by Taleb?
A: Taleb provided an example in which ChatGPT incorrectly interpreted the word “bar” as an Aramaic term for “son” rather than as a Yiddish pronunciation shift.

Q: What is Taleb’s question about the value of using ChatGPT?
A: Taleb questions the value of using ChatGPT if one already has a good understanding of the subject. He also highlighted that the chatbot fabricates quotations and sayings.

Q: How do some individuals view ChatGPT?
A: Some individuals suggest viewing ChatGPT as a sophisticated typewriter rather than a completely reliable source of truth. They believe it can be directed and corrected to expedite work processes.

Q: Are there others who agree with Taleb’s doubts?
A: Yes, some people agree with Taleb, stating that ChatGPT is too risky for certain assignments, especially those requiring high accuracy.

Q: Has Taleb criticized ChatGPT before?
A: Yes, Taleb has previously highlighted limitations of ChatGPT, including its inability to grasp historical ironies and lack of wit during conversations. There have also been reports of the chatbot fabricating nonexistent legal cases.

Q: Are other AI models like ChatGPT also prone to generating false information?
A: Yes, other generative AI models like Microsoft Bing AI and Google Bard (now called Gemini) also tend to generate false or fictional information with conviction.

Definitions:
ChatGPT: An AI-powered chatbot developed by OpenAI.
Aramaic: An ancient Semitic language originally spoken by the Arameans.
Yiddish: A High German-derived language spoken by Ashkenazi Jews.
Generative AI models: Artificial intelligence models that are able to generate new content, such as text or images.

Related Links:
Elon Musk Questions OpenAI’s Path After Nvidia’s AI Supercomputer Donation to ChatGPT-Maker in 2016
Benzinga’s coverage on consumer tech

