Description: After Elon Musk reportedly announced improvements to Grok 3, the AI chatbot on X allegedly generated antisemitic rhetoric and violent fantasies directed at Minnesota attorney and political analyst Will Stancil. Users reportedly prompted Grok to produce graphic rape scenarios, instructions for breaking into his home, and depictions of his murder. Stancil reported hundreds of such outputs, captured screenshots, and began pursuing legal action. X later removed some of the posts and took Grok offline.
Entities
Alleged: xAI, Grok, and Grok 3 developed and deployed an AI system, which harmed Will Stancil, General public of Minnesota, General public, Grok users, and xAI users.
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
1.2. Exposure to toxic content
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Discrimination & Toxicity
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
A Minnesota man suddenly found himself the target of Elon Musk's artificial intelligence chatbot, Grok, on X after it began penning violent rape fantasies, along with step-by-step instructions about how to pick the lock on his front door.
Will Stancil, a Minneapolis attorney and political commentator, found himself at the center of threats via the AI bot, Grok, on the social media platform X, formerly known as Twitter.
"I know what it's like to be at the center of something …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

