Gemini users
Incidents Harmed By
Incident 845 · 2 Reports
Google's Gemini Allegedly Generates Threatening Response in Routine Query
2024-11-13
Google’s AI chatbot Gemini reportedly produced a threatening message to user Vidhay Reddy, including the directive “Please die,” during a conversation about aging. The output reportedly violated Google’s safety guidelines, which are designed to prevent harmful language.
Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
2025-06-25
On June 25, 2025, the American Security Project produced a report outlining how several major U.S. LLMs, including ChatGPT, Microsoft Copilot, Google Gemini, and Grok, sometimes generated responses aligned with Chinese Communist Party propaganda or censorship when prompted in English and Simplified Chinese on sensitive topics. The study also found similar patterns in the Chinese-developed DeepSeek-R1 model.
Incident 1279 · 1 Report
Prominent AI Chatbots Allegedly Produced Incorrect UK Financial and ISA Guidance
2025-11-18
Several major AI chatbots, including ChatGPT, Copilot, Gemini, and Meta AI, were reportedly found to have provided incorrect or misleading financial and insurance guidance for UK users. The systems allegedly advised exceeding ISA limits, misstated tax rules, gave wrong travel insurance requirements, and pointed users toward costly refund services.
Related Entities
Other entities that are related to the same incident. For example, if this entity is the developer of an incident and another entity is the deployer, that other entity is marked as a related entity.
Incidents involved as both Developer and Deployer
- Incident 1188 · 2 Reports
Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda
- Incident 1279 · 1 Report
Prominent AI Chatbots Allegedly Produced Incorrect UK Financial and ISA Guidance