Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

Fri, 24 Oct 2025 23:01:48 +1100

Andrew Pam <xanni [at] glasswings.com.au>

<https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content>

"New research coordinated by the European Broadcasting Union (EBU) and led by
the BBC has found that AI assistants – already a daily information gateway
for millions of people – routinely misrepresent news content no matter which
language, territory, or AI platform is tested.

The intensive international study of unprecedented scope and scale was launched
at the EBU News Assembly, in Naples. Involving 22 public service media (PSM)
organizations in 18 countries working in 14 languages, it identified multiple
systemic issues across four leading AI tools.

Professional journalists from participating PSM evaluated more than 3,000
responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria,
including accuracy, sourcing, distinguishing opinion from fact, and providing
context.

Key findings:

* 45% of all AI answers had at least one significant issue.
* 31% of responses showed serious sourcing problems – missing, misleading, or
incorrect attributions.
* 20% contained major accuracy issues, including hallucinated details and
outdated information.
* Gemini performed worst with significant issues in 76% of responses, more than
double the other assistants, largely due to its poor sourcing performance.
* Comparison between the BBC’s results earlier this year and this study shows
some improvements but still high levels of errors."

Via Susan ****

Cheers,
       *** Xanni ***
--
mailto:xanni@xanadu.net               Andrew Pam
http://xanadu.com.au/                 Chief Scientist, Xanadu
https://glasswings.com.au/            Partner, Glass Wings
https://sericyb.com.au/               Manager, Serious Cybernetics

