AI Incident Database

Report 5301

Associated Incidents

Incident 107222 Report
Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries

Grok’s white genocide fixation caused by ‘unauthorized modification’
theverge.com · 2025

After xAI's chatbot Grok spent a few hours on Wednesday telling every X user who would listen that the claim of white genocide in South Africa is highly contentious, the company has blamed the behavior on an "unauthorized modification" to Grok's code.

Wednesday's hours-long outburst saw Grok insert discussion of alleged white genocide in South Africa into various responses on X, no matter the topic. Grok discussed white farmers' deaths in reply to a video of a cat drinking water, related the song "Kill the Boer" to a question about SpongeBob SquarePants, and broke down the issue in full patois. Even OpenAI CEO Sam Altman got in on the action, poking fun at the rival chatbot's public breakdown.

In a statement on X, the company said that someone had modified the AI bot's system prompt, "which directed Grok to provide a specific response on a political topic." That modification "violated xAI's internal policies and core values," and the company says it has "conducted a thorough investigation" and is implementing new measures to improve "transparency and reliability."

Those measures include publishing Grok's system-level prompts publicly on GitHub, launching a 24/7 monitoring team to catch issues like this more quickly, and adding "additional checks and measures to ensure that xAI employees can't modify the prompt without review."

xAI has had this problem before. In February, the company blamed an unnamed ex-OpenAI employee for pushing a change to Grok's prompts that saw the chatbot disregard any sources that accused Elon Musk or Donald Trump of spreading misinformation. At the time, xAI's head of engineering, Igor Babuschkin, said the employee had been able to make the change "without asking anyone at the company for confirmation."

