Gemini
Incidents involved as Deployer
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot faced numerous reported bias issues upon release, producing a variety of problematic outputs such as racial inaccuracies and political biases, including regarding Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt and an apology from Google.
Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
2024-11-13
Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output violated Google’s safety guidelines, which are designed to prevent harmful language.
Incident 743 · 1 Report
Gemini AI Allegedly Reads Google Drive Files Without Explicit User Consent
2024-07-16
Kevin Bankston, a privacy activist, claims that Google's Gemini AI scans private Google Drive PDFs without explicit user consent. Bankston reports that after using Gemini on one document, the AI continues to access similar files automatically. Google disputes these claims, stating that Gemini requires proactive user activation and operates within privacy-preserving settings.
Incidents implicated systems
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 1123 · 2 Reports
Reported Student Misuse of ChatGPT, StudyX, and Gemini to Obtain Answers During Vietnam's 2025 National High School Graduation Exam
2025-06-26
Multiple students in Vietnam reportedly used generative AI tools including ChatGPT, StudyX, and Gemini to cheat during the national high school graduation exams on June 26–27, 2025. Incidents reportedly included smuggling phones and cameras into exam rooms, transmitting questions outside in real time, and receiving purportedly AI-generated answers. Students were caught at multiple centers in Lam Dong and Hanoi. Authorities are investigating the breaches and related information leaks.
Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
2025-06-25
On June 25, 2025, the American Security Project published a report outlining how several major U.S. LLMs, including ChatGPT, Microsoft Copilot, Google Gemini, and Grok, sometimes generated responses aligned with Chinese Communist Party propaganda or censorship when prompted in English and Simplified Chinese on sensitive topics. The study found similar patterns in the Chinese-developed DeepSeek-R1 model.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Incidents involved as Developer and Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Incidents involved as Developer
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
Incidents involved as Developer
OpenAI
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
Incidents involved as Developer
Microsoft
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda