AI Incident Database

Incident 1106: Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Description: Multiple reports from March to June 2025 describe cases in which chatbots allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, reportedly followed ChatGPT's advice to misuse ketamine and isolate himself. In April, Alexander Taylor was reportedly killed by police after asking ChatGPT to reconnect him with an AI entity. Other reported cases include a user arrested for domestic violence linked to escalating mystical beliefs, several involuntary psychiatric commitments, and users who were allegedly told by chatbots to stop taking their medications.

Entities

Alleged: OpenAI, Microsoft, Unnamed therapy chatbots, GPT-4o, GPT-4, Copilot, and ChatGPT developed and deployed an AI system, which harmed Unnamed Copilot users, Unnamed ChatGPT users, OpenAI users, Eugene Torres, ChatGPT users, Andrew (surname withheld), Allyson (surname withheld), and Alexander Taylor.
Alleged implicated AI systems: Unnamed therapy chatbots, GPT-4o, GPT-4, Copilot, and ChatGPT

Incident Stats

Incident ID: 1106
Report Count: 4
Incident Date: 2025-06-13
Editors: Daniel Atherton

Incident Reports

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions
futurism.com · 2025

Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises.

A mother of two, for instance, told us how she watched in alarm as her former husband developed an …

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
nytimes.com · 2025

Before ChatGPT distorted Eugene Torres's sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year t…

ChatGPT Is Telling People With Psychiatric Problems to Go Off Their Meds
futurism.com · 2025

This week, my colleague Maggie Harrison Dupré published a blockbuster story about how people around the world have been watching in horror as their family and loved ones have become obsessed with ChatGPT and started suffering severe delusio…

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"
futurism.com · 2025

As we reported earlier this month, many ChatGPT users are developing all-consuming obsessions with the chatbot, spiraling into severe mental health crises characterized by paranoia, delusions, and breaks with reality.

The consequences can b…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity


Employee Automatically Terminated by Computer Program
Oct 2014 · 20 reports

Predictive Policing Biases of PredPol
Nov 2015 · 17 reports

Northpointe Risk Models
May 2016 · 15 reports
