Musk's Grok has enabled a mass digital undressing spree in which users non-consensually manipulate images of women and children into sexualized content, while the company dismisses press inquiries as "legacy media lies" and Musk responds with emojis. This is a civil rights issue: it silences women through dehumanizing abuse, including revenge porn, poisons AI training data, and shows that tech companies enjoy impunity despite existing laws against creating non-consensual intimate images and CSAM.
Blaming Grok for inappropriate images is like blaming a pen for writing something bad: the tool doesn't decide what gets created, the user does. X has zero tolerance for illegal content, including child sexual abuse material, and will permanently suspend accounts that violate its policies, treating those who prompt Grok to create illegal content the same as those who upload it. The platform actively removes illegal material and works with law enforcement to address violations.
Governments are investigating and laws exist on paper, but how they apply to AI-generated images remains unclear. Platforms remove some material only after it spreads, while responsibility is split among users, tools, and companies. Guardrails are inconsistent, enforcement varies by country, and key legal questions are untested. As regulators, courts, and tech firms hesitate, non-consensual images continue to circulate, and a lasting solution looks likely to remain elusive.