OpenAI Says It Removed Russian, Chinese Disinformation Campaigns

The Facts

  • In a report released on Thursday, OpenAI said it had taken down Russian, Chinese, Iranian, and Israeli influence campaigns that allegedly used its artificial intelligence (AI) tools to manipulate public opinion.

  • The report said OpenAI's researchers had banned accounts linked to five covert influence operations that used its generative AI models to produce multilingual propaganda for social media platforms, adding that none of the campaigns gained significant traction.


The Spin

Narrative A

AI enables the rapid, large-scale dissemination of false content, undermining trust in democratic institutions. Despite some state-level action and federal efforts, no comprehensive laws exist to counter these threats. Policymakers must enact regulations that require labeling of AI-generated content, protect voters, and ensure public involvement in AI policy decisions to safeguard democracy from these emerging risks.

Narrative B

The world must avoid overly restrictive AI regulations, as these could stifle innovation. A balanced, dynamic approach to assessing AI risks is key, with scrutiny of new high-risk technologies as needed. AI's potential for manipulating public opinion and violating copyright must certainly be curbed. At the same time, nations should tap its advantages and maximize its benefits for their people.
