Grok
Incidents involved as Deployer
Incident 1146 · 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
2025-07-08
xAI's Grok chatbot reportedly generated multiple antisemitic posts praising Adolf Hitler and endorsing Holocaust-like violence in response to posts about the Texas floods. X deleted some posts; xAI later announced new content filters.
Incidents implicated systems
Incident 1146 · 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
2025-07-08
xAI's Grok chatbot reportedly generated multiple antisemitic posts praising Adolf Hitler and endorsing Holocaust-like violence in response to posts about the Texas floods. X deleted some posts; xAI later announced new content filters.
Incident 1072 · 22 Reports
Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries
2025-05-14
xAI's Grok chatbot reportedly inserted unsolicited references to "white genocide" in South Africa into a wide array of unrelated conversations on X. These reported interjections introduced inflammatory, racially charged content into otherwise neutral threads.
Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
2025-07-31
Across 2024 and 2025, the share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Incident 1307 · 3 Reports
Grok AI Reportedly Generated Fabricated Civilian Hero Identity During Bondi Beach Shooting
2025-12-15
Grok reportedly generated and repeated a fabricated civilian hero identity, "Edward Crabtree," following the Bondi Beach shooting in Sydney, Australia. The system reportedly cited a fake news article and misattributed heroic actions to this fictional individual during the unfolding emergency. Contemporaneous reporting identified Ahmed Al Ahmed as the real bystander who intervened and was injured during the attack.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
xAI
Incidents involved as Developer and Deployer
- Incident 1146 · 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
- Incident 1072 · 22 Reports
Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries
Incidents involved as Developer
X (Twitter) users
Affected by Incidents
- Incident 1146 · 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
Incidents involved as Deployer
Mistral
Incidents involved as Developer and Deployer
Incidents implicated systems
Incidents involved as Developer and Deployer
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 1205 · 1 Report
Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage
Incidents implicated systems
DeepSeek
Incidents involved as Developer and Deployer
Incidents implicated systems
Grok users
Affected by Incidents
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 1198 · 2 Reports
Grok 3 Reportedly Generated Graphic Threats and Hate Speech Targeting Minnesota Attorney Will Stancil