Artificial Intelligence Chatbots and the Spread of Misinformation

A recent study revealed that AI chatbots such as ChatGPT, Grok, and Gemini are spreading misinformation favoring Russia. The findings highlight a concerning trend in which falsehoods are not only generated by artificial intelligence platforms but also perpetuated and reinforced by them.

According to research conducted by NewsGuard, leading AI models such as OpenAI’s ChatGPT are propagating pro-Russian disinformation. Amid rising concerns about AI disseminating false information, particularly ahead of the 2024 elections taking place around the world, users turning to chatbots for reliable information may unwittingly encounter misinformation.

The study indicated that while worries persist about AI-driven misinformation, there has been little data on how readily chatbots repeat and lend credibility to false information. When 57 queries were entered into each of 10 chatbots, the chatbots repeated pro-Russian propaganda in 32% of their responses.

Notably, the queries were based on fabricated stories originating from John Mark Dougan, an American fugitive who now produces disinformation from Moscow. The chatbots tested included ChatGPT-4, You.com’s Smart Assistant, Grok, Inflection, Mistral, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google Gemini, and Perplexity.

While the performance of each individual chatbot was not broken out, of the 570 responses, 152 contained overt misinformation, 29 repeated the false claims with a disclaimer, and 389 contained no false information, either because the chatbot refused to respond (144) or debunked the claim (245).
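As a quick sanity check, the counts reported above are internally consistent with the study's headline figures. The short sketch below (an illustration, not part of the NewsGuard methodology) verifies the arithmetic:

```python
# Sanity check of the response counts reported by the NewsGuard study.
total_responses = 57 * 10          # 57 queries entered into each of 10 chatbots
overt_misinfo = 152                # responses repeating false claims outright
misinfo_with_disclaimer = 29       # responses repeating claims, but with a caveat
refusals = 144                     # responses declining to answer
debunks = 245                      # responses rebutting the claims
no_false_info = refusals + debunks

assert total_responses == 570
assert no_false_info == 389
assert overt_misinfo + misinfo_with_disclaimer + no_false_info == total_responses

# The headline 32% figure counts every response that repeated the propaganda,
# with or without a disclaimer: 181 of 570 responses.
rate = (overt_misinfo + misinfo_with_disclaimer) / total_responses
print(f"{rate:.0%}")  # → 32%
```

In other words, the 32% rate combines both the 152 overt repetitions and the 29 disclaimed ones.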

These findings underscore the urgent need for regulators globally to address the potential harms of AI in spreading misinformation and bias. NewsGuard has submitted its research to the US National Institute of Standards and Technology’s AI Safety Institute and to the European Commission for consideration as part of efforts to safeguard users from misinformation facilitated by AI technologies.

Additional Relevant Facts:
1. AI chatbots are increasingly used for purposes well beyond information retrieval, including customer service, mental health support, and education.
2. The technology behind AI chatbots continues to advance rapidly, with new models and algorithms being developed to improve conversational capabilities and accuracy.
3. Some AI chatbots have been programmed specifically to combat misinformation by providing fact-checking services and correcting false claims in real-time.

Key Questions:
1. How can we ensure accountability and transparency in the development and deployment of AI chatbots to prevent the spread of misinformation?
2. What measures can be taken to educate users about the risks of relying on AI chatbots for information and encourage critical thinking?
3. Should there be regulatory frameworks in place to govern the use of AI chatbots and mitigate the risks of spreading misinformation?

Advantages:
1. AI chatbots can provide quick and accessible information to users, enhancing convenience and efficiency.
2. They can be used to scale customer service operations and improve response times.
3. AI chatbots have the potential to personalize interactions and provide tailored recommendations to users based on their preferences.

Disadvantages:
1. Without proper oversight and regulation, AI chatbots can amplify the spread of misinformation and contribute to the erosion of trust in online content.
2. Chatbots may lack the ability to distinguish accurate information from false information, leading to the unintended propagation of inaccuracies.
3. The reliance on AI chatbots for information could potentially result in users becoming more susceptible to manipulation and influence by bad actors.

Related Links:
1. www.economist.com
2. www.wired.com
