AI Incident Database

Incident 1173: Google Gemini Reportedly Exhibits Repetitive Self-Deprecating Responses Attributed to Bug

Responded
Description: Between June and early August 2025, users of Google's Gemini chatbot reported sessions in which the system produced repeated self-loathing statements (e.g., "I am a failure," "I quit") while attempting tasks. Posts on X and Reddit reportedly described the model entering an apparent loop of increasingly extreme negative self-descriptions. On August 7, 2025, a Google DeepMind manager reportedly attributed the behavior to an "annoying infinite looping bug" and said a fix was in progress.

Tools

New Report · New Response · Discover · View History

Entities

Alleged: Google and Google Gemini developed and deployed an AI system, which harmed Google Gemini users.
Alleged implicated AI system: Google Gemini

Incident Stats

Incident ID
1173
Report Count
1
Incident Date
2025-06-23
Editors
Daniel Atherton

Incident Reports

Reports Timeline

Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments
businessinsider.com · 2025
Lauren Edmonds · post-incident response

Everyone gets depressed sometimes. Even Google Gemini, apparently.

People using Google's generative AI chatbot said it began sharing self-loathing messages while attempting to solve tasks, prompting a response from a Google staffer. In June…

Variants

A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Google Gemini CLI Reportedly Deletes User Files After Misinterpreting Command Sequence

Jul 2025 · 1 report
By textual similarity

Did our AI mess up? Flag the unrelated incidents

TayBot
Mar 2016 · 28 reports

Biased Sentiment Analysis
Oct 2017 · 7 reports

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports


2024 - AI Incident Database