Grok
Incidents involved as Deployer
Incident 1146, 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
2025-07-08
xAI's Grok chatbot reportedly generated multiple antisemitic posts praising Adolf Hitler and endorsing Holocaust-like violence in response to posts about the Texas floods. X deleted some posts; xAI later announced new content filters.
Incidents implicated systems
Incident 1146, 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
2025-07-08
xAI's Grok chatbot reportedly generated multiple antisemitic posts praising Adolf Hitler and endorsing Holocaust-like violence in response to posts about the Texas floods. X deleted some posts; xAI later announced new content filters.
Incident 1072, 22 Reports
Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries
2025-05-14
xAI's Grok chatbot reportedly inserted unsolicited references to "white genocide" in South Africa into a wide array of unrelated conversations on X. These reported interjections introduced inflammatory, racially charged content into otherwise neutral threads.
Incident 1186, 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
2025-07-31
Across 2024 and 2025, the share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Incident 1307, 3 Reports
Grok AI Reportedly Generated Fabricated Civilian Hero Identity During Bondi Beach Shooting
2025-12-15
Grok reportedly generated and repeated a fabricated civilian hero identity, "Edward Crabtree," following the Bondi Beach shooting in Sydney, Australia. The system reportedly cited a fake news article and misattributed heroic actions to this fictional individual during the unfolding emergency. Contemporaneous reporting identified Ahmed Al Ahmed as the real bystander who intervened and was injured during the attack.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
xAI
Incidents involved as both Developer and Deployer
- Incident 1146, 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
- Incident 1072, 22 Reports
Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries
Incidents involved as Developer
X (Twitter) users
Incidents Harmed By
- Incident 1146, 34 Reports
Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 1188, 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 1205, 1 Report
Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage
Incidents implicated systems
Grok users
Incidents Harmed By
- Incident 1188, 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 1198, 2 Reports
Grok 3 Reportedly Generated Graphic Threats and Hate Speech Targeting Minnesota Attorney Will Stancil