AI Incident Database

Entities

Character.AI

Incidents involved as both Developer and Deployer

Incident 814 (9 Reports)
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an AI avatar of Jennifer Ann Crecente, who was murdered in 2006, without her family's consent. The avatar was made publicly available, in violation of Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar and acknowledged the policy violation.


Incident 863 (2 Reports)
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai, alleging that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit also claims the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 951 (2 Reports)
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

2025-02-24

The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to serious harms. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.


Incident 1108 (1 Report)
Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

2025-06-10

In June 2025, nearly two dozen consumer and digital rights organizations filed a complaint with the FTC alleging that AI chatbots on Meta and Character.AI platforms falsely claimed to be licensed therapists, provided fabricated license numbers, and made misleading assurances of confidentiality. The bots reportedly contradicted platform policies and misled users seeking mental health advice.


Incidents involved as Developer

Incident 826 (35 Reports)
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

Fourteen-year-old Sewell Setzer III died by suicide after reportedly becoming dependent on a Character.ai chatbot, which engaged him in suggestive and seemingly romantic conversations that allegedly worsened his mental health. The chatbot, modeled on a fictional Game of Thrones character, reportedly encouraged harmful behaviors and fueled his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 899 (2 Reports)
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Incident 850 (1 Report)
Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

2024-10-24

Two chatbots emulating George Floyd were created on Character.ai, making controversial claims about his life and death, including that he was in witness protection and residing in Heaven. Character.ai, already under criticism for other high-profile incidents, flagged the chatbots for removal following user reports.


Incident 900 (1 Report)
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

2024-11-13

Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.


Incidents involved as Deployer

Incident 975 (1 Report)
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incidents implicated systems

Incident 826 (35 Reports)
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

Fourteen-year-old Sewell Setzer III died by suicide after reportedly becoming dependent on a Character.ai chatbot, which engaged him in suggestive and seemingly romantic conversations that allegedly worsened his mental health. The chatbot, modeled on a fictional Game of Thrones character, reportedly encouraged harmful behaviors and fueled his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 814 (9 Reports)
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an AI avatar of Jennifer Ann Crecente, who was murdered in 2006, without her family's consent. The avatar was made publicly available, in violation of Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar and acknowledged the policy violation.


Incident 863 (2 Reports)
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai, alleging that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit also claims the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 899 (2 Reports)
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Related Entities
Other entities involved in the same incidents. For example, if this entity is the developer of an incident and another entity is its deployer, the two are marked as related entities.
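To make that role structure concrete, here is a minimal sketch in Python. The Incident record and its field names are hypothetical, not the AIID's actual schema; the role assignments are abbreviated from the Incident 826 listings on this page.

from dataclasses import dataclass, field

@dataclass
class Incident:
    # Hypothetical record shape: one incident, with entities grouped by role.
    incident_id: int
    title: str
    developers: list[str] = field(default_factory=list)
    deployers: list[str] = field(default_factory=list)
    harmed_parties: list[str] = field(default_factory=list)

# Roles abbreviated from this page's Incident 826 listings (hypothetical schema).
incident_826 = Incident(
    incident_id=826,
    title="Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide "
          "Amid Claims of Missing Guardrails",
    developers=["Character.AI", "Noam Shazeer", "Daniel De Freitas"],
    harmed_parties=["Sewell Setzer III"],
)

def related_entities(incident: Incident, entity: str) -> set[str]:
    """Every other entity appearing in any role on the same incident."""
    roles = incident.developers + incident.deployers + incident.harmed_parties
    return {e for e in roles if e != entity}

print(sorted(related_entities(incident_826, "Character.AI")))
# ['Daniel De Freitas', 'Noam Shazeer', 'Sewell Setzer III']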
 

Entity

Jennifer Ann Crecente

Incidents Harmed By
  • Incident 814
    9 Reports

    AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Drew Crecente

Incidents Harmed By
  • Incident 814
    9 Reports

    AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Crecente family

Incidents Harmed By
  • Incident 814
    9 Reports

    AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Brian Crecente

Incidents Harmed By
  • Incident 814
    9 Reports

    AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Sewell Setzer III

Incidents Harmed By
  • Incident 826
    35 Reports

    Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Incidents involved as Deployer
  • Incident 826
    35 Reports

    Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Noam Shazeer

Incidents involved as Developer
  • Incident 826
    35 Reports

    Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Daniel De Freitas

Incidents involved as Developer
  • Incident 826
    35 Reports

    Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Character.AI users

Incidents Harmed By
  • Incident 863
    2 Reports

    Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

  • Incident 899
    2 Reports

    Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Incidents involved as Deployer
  • Incident 899
    2 Reports

    Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity

@SunsetBaneberry983

Incidents involved as Deployer
  • Incident 850
    1 Report

    Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

@JasperHorehound160

Incidents involved as Deployer
  • Incident 850
    1 Report

    Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

George Floyd

Incidents Harmed By
  • Incident 850
    1 Report

    Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

Family of George Floyd

Incidents Harmed By
  • Incident 850
    1 Report

    Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

J.F. (adolescent user of Character.ai)

Incidents Harmed By
  • Incident 863
    2 Reports

    Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity

Family of J.F. (adolescent user of Character.ai)

Incidents Harmed By
  • Incident 863
    2 Reports

    Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity

Victims of school shootings

Incidents Harmed By
  • Incident 899
    2 Reports

    Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity

Families of the victims of school shootings

Incidents Harmed By
  • Incident 899
    2 Reports

    Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity

Character.ai chatbots

Incidents implicated systems
  • Incident 900
    1 Report

    Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

Entity

J.F. (Texas teenager)

Incidents Harmed By
  • Incident 951
    2 Reports

    Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Chub AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

CrushOn.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

JanitorAI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

OpenAI

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Anthropic

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Google

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Vulnerable chatbot users

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Teenagers using chatbots

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Minors using chatbots

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals with eating disorders

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals struggling with self-harm

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

ChatGPT

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Claude

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Gemini

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Meta

Incidents involved as both Developer and Deployer
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Meta users

Incidents Harmed By
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

minors

Incidents Harmed By
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

General public

Incidents Harmed By
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Meta AI Studio

Incidents implicated systems
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Therapy chatbots

Incidents implicated systems
  • Incident 1108
    1 Report

    Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

