AI chatbots are wrong about news 45% of the time, study finds
A new international study coordinated by the European Broadcasting Union (EBU) and led by the BBC shows that AI assistants distort news content nearly half of the time. The research involved 22 public service broadcasters across 18 countries, working in 14 languages.
In the study, professional journalists analyzed more than 3,000 AI responses from ChatGPT, Copilot, Gemini and Perplexity. The results showed that 45% of all responses contained at least one serious error, 31% had inadequate or misleading source citations, and 20% contained major factual errors such as fabricated details or outdated information.
Google’s Gemini was the worst performer with problems in 76% of its responses — mainly due to a lack of source attribution.
“This research shows conclusively that these shortcomings are not isolated incidents,” EBU Media Director and Deputy Director-General Jean Philip De Tender said in a statement. “They are systematic, transnational and multilingual, and we believe this jeopardizes public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can discourage democratic participation.”
As part of the project, the EBU and the BBC have launched a “News Integrity in AI Assistants Toolkit” to help AI developers and users improve the quality of responses and increase media literacy.
The organizations also called on the EU and national authorities to apply existing rules on information integrity, digital services and media pluralism, and to introduce an ongoing independent review of AI assistants.
OpenAI, Microsoft, Google and Perplexity AI have not yet commented on the study.