The BBC examined four major artificial intelligence (AI) chatbots, ChatGPT, Copilot, Gemini, and Perplexity, and found that 51% of AI-generated answers about the news contained significant issues, while 91% had at least some problems.
The study, which asked the chatbots 100 questions about the news, found that 19% of AI responses citing BBC content introduced factual errors in dates, numbers, and statements, while 13% of quotes attributed to the BBC were either altered or completely fabricated.
Google's Gemini showed the highest rate of significant accuracy issues at 46%, followed by Microsoft's Copilot, while Perplexity and ChatGPT performed somewhat better but still exhibited notable problems.
AI journalism poses a serious threat to journalistic ethics and public knowledge by prioritizing cost-cutting over quality. Media companies have published lazy, inaccurate articles for years and continue to do so, demonstrating their willingness to put ad revenue ahead of good reporting. Fortunately, many media companies have so far decided against using such AI tools; one can only hope they won't turn their backs on the public when money-making opportunities arise.
While AI certainly comes with risks, it can also bring tremendous benefits if media companies proceed cautiously and strategically. News outlets have already automated routine tasks, freeing journalists to focus on investigative work. AI also excels at data analysis and is becoming increasingly competent at fact-checking, both of which improve accuracy and efficiency. The key to successful AI integration is strict editorial oversight and transparency.