Gemini
Incidents involved as Deployer
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot faced numerous reported bias issues upon release, producing a range of problematic outputs including racial inaccuracies and political biases regarding Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt to the feature and an apology from Google.
Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
2024-11-13
Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output violated Google’s safety guidelines, which are designed to prevent harmful language.
Incident 743 · 1 Report
Gemini AI Allegedly Reads Google Drive Files Without Explicit User Consent
2024-07-16
Kevin Bankston, a privacy activist, claims that Google's Gemini AI scans private Google Drive PDFs without explicit user consent. Bankston reports that after he used Gemini on one document, the AI continued to access similar files automatically. Google disputes these claims, stating that Gemini requires proactive user activation and operates within privacy-preserving settings.
Incidents implicated systems
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 1123 · 2 Reports
Reported Student Misuse of ChatGPT, StudyX, and Gemini to Obtain Answers During Vietnam's 2025 National High School Graduation Exam
2025-06-26
Multiple students in Vietnam reportedly used generative AI tools, including ChatGPT, StudyX, and Gemini, to cheat during the national high school graduation exams on June 26–27, 2025. Reported incidents included smuggling phones and cameras into exam rooms, transmitting questions outside in real time, and receiving purportedly AI-generated answers. Students were caught at multiple centers in Lam Dong and Hanoi. Authorities are investigating the breaches and related information leaks.
Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
2025-06-25
On June 25, 2025, the American Security Project published a report detailing how several major U.S. LLMs, including ChatGPT, Microsoft Copilot, Google Gemini, and Grok, sometimes generated responses aligned with Chinese Communist Party propaganda or censorship when prompted in English and Simplified Chinese on sensitive topics. The study also found similar patterns in the Chinese-developed DeepSeek-R1 model.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Incidents involved as both Developer and Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
xAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
Incidents involved as Developer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
Incidents involved as Developer
Microsoft
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda