Description: The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to severe harm events. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.
Editor Notes: See also Incident 1108. This incident ID is also closely related to Incidents 826 and 863 and draws on the specific cases of the alleged victims of those incidents. The specifics pertaining to Sewell Setzer III are detailed in Incident 826, although the initial reporting focuses on his interactions with a chatbot modeled after a Game of Thrones character rather than a therapist. Similarly, the teenager known as J.F. is discussed in Incident 863. This incident ID tracks reporting on the specific harm events that may arise from interactions with AI-powered chatbots acting as therapists.
Entities
Alleged: Character.AI developed and deployed an AI system, which harmed Sewell Setzer III and J.F. (Texas teenager).
Alleged implicated AI system: Character.AI
Incident Stats
Incident ID
951
Report Count
2
Incident Date
2025-02-24
Editors
Daniel Atherton
Incident Reports
A chatbot told a 17-year-old that murdering his parents was a "reasonable response" to them limiting his screen time, a lawsuit filed in a Texas court claims.
Two families are suing Character.ai arguing the chatbot "poses a clear and presen…

The nation’s largest association of psychologists this month warned federal regulators that A.I. chatbots “masquerading” as therapists, but programmed to reinforce, rather than to challenge, a user’s thinking, could drive vulnerable people …
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents
Selected by our editors
A Collection of Tesla Autopilot-Involved Crashes · 22 reports
Northpointe Risk Models · 15 reports