Grok AI Faces Probes Over Nonconsensual Deepfakes

Is Musk enabling exploitation through Grok or are outdated regulations failing to address AI risks?
Above: An iPhone displaying the Grok logo within the Grok AI app on Jan. 11. Image credit: Anna Barclay/Getty Images

The Spin

Narrative A

Musk's refusal to stop Grok from generating nonconsensual sexualized images, including child sexual abuse material, is reprehensible and puts countless victims at extreme risk. The tool has produced millions of exploitative images despite clear warnings, violating both platform policies and federal law while Musk deflects blame onto users. Global investigations from the EU, Ireland, Spain and beyond confirm these platforms are attacking the dignity and rights of women and children with impunity.

Narrative B

Grok is designed to prevent the creation of illegal content, and any misuse likely arises from users manipulating its prompts. The company has committed to promptly addressing any incidents of adversarial prompt hacking, emphasizing its dedication to user safety and the responsible use of technology.

Narrative C

The Grok controversy has exposed critical gaps in existing digital regulations that were never designed to address AI chatbot risks, prompting necessary discussions about updating enforcement frameworks. While Musk implemented restrictions after concerns emerged, policymakers are now working to close legal loopholes and establish clear rules around consent and AI-generated content. The slow pace of EU investigations demonstrates that current platform rules cannot offer quick solutions to rapidly evolving AI technologies.


© 2026 Improve the News Foundation. All rights reserved. Version 6.20.4