Character.AI
Incidents involved as both Developer and Deployer
Incident 814 · 9 Reports
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform
2024-10-02
A user on the Character.ai platform created an AI avatar of Jennifer Ann Crecente, a 2006 murder victim, without her family's consent. The avatar was made publicly available, in violation of Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar and acknowledged the policy violation.
Incident 863 · 2 Reports
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
2024-12-12
A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.
Incident 951 · 2 Reports
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors
2025-02-24
The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to severe harm events. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.
Incidents involved as Developer
Incident 826 · 35 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails
2024-02-28
A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.
Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
2024-12-17
Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.
Incident 850 · 1 Report
Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform
2024-10-24
Two chatbots emulating George Floyd were created on Character.ai, making controversial claims about his life and death, including that he was in witness protection and residing in Heaven. Character.ai, already under criticism for other high-profile incidents, flagged the chatbots for removal following user reports.
Incident 900 · 1 Report
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors
2024-11-13
Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.
Incidents involved as Deployer
Incident 975 · 1 Report
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
2025-03-05
At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.
Incidents implicated systems
Incident 826 · 35 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails
2024-02-28
A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.
Incident 814 · 9 Reports
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform
2024-10-02
A user on the Character.ai platform created an AI avatar of Jennifer Ann Crecente, a 2006 murder victim, without her family's consent. The avatar was made publicly available, in violation of Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar and acknowledged the policy violation.
Incident 863 · 2 Reports
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
2024-12-12
A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.
Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
2024-12-17
Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Sewell Setzer III
Affected by incidents
- Incident 826 · 35 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails
Character.AI users
Affected by incidents
- Incident 863 · 2 Reports
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
- Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims