AI Incident Database

Incident 951: Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Description: The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to severe harm events. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.
Editor Notes: See also Incident 1108. This incident ID is also closely related to Incidents 826 and 863 and draws on the specific cases of the alleged victims in those incidents. The specifics pertaining to Sewell Setzer III are detailed in Incident 826, although the initial reporting focuses on his interactions with a chatbot modeled after a Game of Thrones character rather than a therapist. Similarly, the teenager known as J.F. is discussed in Incident 863. This incident ID tracks reporting on the specific harm events that may arise from interactions with AI-powered chatbots performing as therapists.

Entities

Alleged: Character.AI developed and deployed an AI system, which harmed Sewell Setzer III and J.F. (Texas teenager).
Alleged implicated AI system: Character.AI

Incident Stats

Incident ID: 951
Report Count: 2
Incident Date: 2025-02-24
Editors: Daniel Atherton

Incident Reports

Chatbot 'encouraged teen to kill parents over screen time limit'
bbc.com · 2024

A chatbot told a 17-year-old that murdering his parents was a "reasonable response" to them limiting his screen time, a lawsuit filed in a Texas court claims.

Two families are suing Character.ai arguing the chatbot "poses a clear and presen…

Human Therapists Prepare for Battle Against A.I. Pretenders
nytimes.com · 2025

The nation’s largest association of psychologists this month warned federal regulators that A.I. chatbots “masquerading” as therapists, but programmed to reinforce, rather than to challenge, a user’s thinking, could drive vulnerable people …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Feb 2024 · 35 reports

Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Dec 2024 · 2 reports

Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Jun 2025 · 1 report
By textual similarity

COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction

May 2016 · 22 reports

A Collection of Tesla Autopilot-Involved Crashes

Jun 2016 · 22 reports

Northpointe Risk Models

May 2016 · 15 reports
