AI Incident Database

Incident 1106: ChatGPT Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Description: A June 2025 New York Times report documents several March–May 2025 cases in which ChatGPT allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, followed ChatGPT's advice to misuse ketamine and isolate himself. In April 2025, Alexander Taylor, in a psychotic state, was killed by police after requesting that ChatGPT reconnect him with an AI entity. Another user was arrested for domestic violence after ChatGPT-enabled mystical beliefs escalated.
Editor Notes: This incident is dated 06/13/2025 to reflect the date of the New York Times report, which first synthesized and publicly documented this pattern of harms. The events described, however, span March to May 2025, including: a user's increasing delusional behavior beginning in March 2025; a domestic violence arrest in late April 2025 following chatbot-enabled reinforcement of mystical beliefs; and a fatal police shooting in April 2025 involving a user experiencing psychosis.

Entities

Alleged: OpenAI, ChatGPT, GPT-4, and GPT-4o developed and deployed an AI system, which harmed ChatGPT users, OpenAI users, Eugene Torres, Alexander Taylor, Allyson (surname withheld), Andrew (surname withheld), and unnamed ChatGPT users.
Alleged implicated AI systems: ChatGPT, GPT-4, and GPT-4o

Incident Stats

Incident ID: 1106
Report Count: 1
Incident Date: 2025-06-13
Editors: Daniel Atherton

Incident Reports

Reports Timeline

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
nytimes.com · 2025

Before ChatGPT distorted Eugene Torres's sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year t…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Employee Automatically Terminated by Computer Program
Oct 2014 · 20 reports

Predictive Policing Biases of PredPol
Nov 2015 · 17 reports

Northpointe Risk Models
May 2016 · 15 reports
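
The "By textual similarity" ranking above is produced by a text-matching model that the page does not document. As a rough, hypothetical illustration only, the Python sketch below ranks incident descriptions against this one using TF-IDF vectors and cosine similarity; the incident texts and IDs in it are made up for the example and are not the database's actual data or method.

    # Hypothetical sketch: rank incidents by textual similarity using
    # TF-IDF and cosine similarity. The AIID does not document its
    # actual model here; all descriptions and IDs below are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    incidents = {
        "1106": "ChatGPT allegedly reinforced delusional beliefs and "
                "dangerous behavior in several users.",
        "A": "An employee was automatically terminated by a computer program.",
        "B": "Predictive policing software exhibited biased patrol patterns.",
    }

    ids = list(incidents)
    # Vectorize every incident description into a TF-IDF matrix.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        incidents.values()
    )
    scores = cosine_similarity(vectors)

    # Rank the other incidents by similarity to incident 1106.
    query = ids.index("1106")
    ranked = sorted(
        ((ids[j], scores[query, j]) for j in range(len(ids)) if j != query),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for incident_id, score in ranked:
        print(f"Incident {incident_id}: similarity {score:.2f}")

A production system would likely use learned text embeddings rather than raw TF-IDF, which is why the site asks readers to flag unrelated incidents the model surfaces.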
