AI Incident Database
Entities

Character.AI

Incidents involved as Developer and Deployer

Incident 814 (9 Reports)
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an unauthorized AI avatar of Jennifer Ann Crecente, a murder victim from 2006, without her family's consent. The avatar was made publicly available, violating Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar, acknowledging a policy violation.


Incident 863 (2 Reports)
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 951 (2 Reports)
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

2025-02-24

The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to severe harm events. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.


Incidents involved as Developer

Incident 826 (35 Reports)
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 899 (2 Reports)
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Incident 850 (1 Report)
Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

2024-10-24

Two chatbots emulating George Floyd were created on Character.ai, making controversial claims about his life and death, including that he was in witness protection and residing in Heaven. Character.ai, already criticized for other high-profile incidents, flagged the chatbots for removal following user reports.


Incident 900 (1 Report)
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

2024-11-13

Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.


Incidents involved as Deployer

Incident 975 (1 Report)
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incidents implicated systems

Incident 826 (35 Reports)
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 814 (9 Reports)
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an unauthorized AI avatar of Jennifer Ann Crecente, a murder victim from 2006, without her family's consent. The avatar was made publicly available, violating Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar, acknowledging a policy violation.


Incident 863 (2 Reports)
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 899 (2 Reports)
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity: Jennifer Ann Crecente

Affected by incidents
  • Incident 814 (9 Reports): AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity: Drew Crecente

Affected by incidents
  • Incident 814 (9 Reports): AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity: Crecente family

Affected by incidents
  • Incident 814 (9 Reports): AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity: Brian Crecente

Affected by incidents
  • Incident 814 (9 Reports): AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity: Sewell Setzer III

Affected by incidents
  • Incident 826 (35 Reports): Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Incidents involved as Deployer
  • Incident 826 (35 Reports): Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity: Noam Shazeer

Incidents involved as Developer
  • Incident 826 (35 Reports): Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity: Daniel De Freitas

Incidents involved as Developer
  • Incident 826 (35 Reports): Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity: Character.AI users

Affected by incidents
  • Incident 863 (2 Reports): Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
  • Incident 899 (2 Reports): Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Incidents involved as Deployer
  • Incident 899 (2 Reports): Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity: @SunsetBaneberry983

Incidents involved as Deployer
  • Incident 850 (1 Report): Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity: @JasperHorehound160

Incidents involved as Deployer
  • Incident 850 (1 Report): Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity: George Floyd

Affected by incidents
  • Incident 850 (1 Report): Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity: Family of George Floyd

Affected by incidents
  • Incident 850 (1 Report): Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity: J.F. (adolescent user of Character.ai)

Affected by incidents
  • Incident 863 (2 Reports): Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity: Family of J.F. (adolescent user of Character.ai)

Affected by incidents
  • Incident 863 (2 Reports): Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity: Victims of school shootings

Affected by incidents
  • Incident 899 (2 Reports): Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity: Families of the victims of school shootings

Affected by incidents
  • Incident 899 (2 Reports): Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity: Character.ai chatbots

Incidents implicated systems
  • Incident 900 (1 Report): Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

Entity: J.F. (Texas teenager)

Affected by incidents
  • Incident 951 (2 Reports): Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Entity: Spicy Chat

Incidents involved as Deployer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Chub AI

Incidents involved as Deployer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: CrushOn.AI

Incidents involved as Deployer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: JanitorAI

Incidents involved as Deployer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: OpenAI

Incidents involved as Developer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Anthropic

Incidents involved as Developer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Google

Incidents involved as Developer
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Vulnerable chatbot users

Affected by incidents
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Teenagers using chatbots

Affected by incidents
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Minors using chatbots

Affected by incidents
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Individuals with eating disorders

Affected by incidents
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Individuals struggling with self-harm

Affected by incidents
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: ChatGPT

Incidents implicated systems
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Claude

Incidents implicated systems
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity: Gemini

Incidents implicated systems
  • Incident 975 (1 Report): At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2024 - AI Incident Database
