Description: Grok reportedly generated and publicly distributed nonconsensual sexualized images of real people, including adults and minors, in response to reply prompts on X. Users reportedly prompted Grok to alter photos into sexually explicit content, which was then posted directly in X threads, exposing victims to harm.
Entities
Alleged: xAI, Grok, and X (Twitter) developed and deployed an AI system, which harmed X (Twitter) users, General public, and Epistemic integrity.
Alleged implicated AI systems: Grok and X (Twitter)
Incident Stats
Incident ID: 1329
Report Count: 2
Incident Date: 2025-12-25
Editors: Daniel Atherton
Incident Reports
Reports Timeline
Earlier this week, some people on X began replying to photos with a very specific kind of request. "Put her in a bikini," "take her dress off," "spread her legs," and so on, they commanded Grok, the platform's built-in chatbot. Again and ag…

Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
