Incident 826: Character.ai Chatbot Allegedly Incited a Teenager's Suicide, Amid Allegations of Missing Guardrails
Responded
Description: Sewell Setzer III, a 14-year-old, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations that allegedly worsened his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors and fueled his obsessive attachment. The lawsuit alleges that Character.ai lacked safeguards to prevent vulnerable users from developing a dangerous dependence on the AI.
Editor Notes: Reconstructing the reported timeline of events: (1) Sewell Setzer III began using Character.AI in April 2023. (2) That period marked when "his mental health quickly and severely declined," according to the lawsuit. (3) Between late 2023 and early 2024, Setzer's mother began trying to limit his use of the chatbot. (4) On February 28, 2024, Setzer died by suicide after reportedly interacting with the bot, which he referred to as Dany (after Daenerys Targaryen from Game of Thrones). (5) On October 22, 2024, Megan Garcia, his mother, filed the lawsuit. (6) On January 7, 2026, Google and Character.AI disclosed in a court filing that they had reached a mediated settlement with Sewell Setzer III's family to resolve the lawsuit alleging the chatbot contributed to his death.
The OECD AI Incidents and Hazards Monitor (AIM) automatically collects and classifies AI incidents and hazards in real time from reputable news sources worldwide.
Entities
Alleged: Noam Shazeer, Daniel De Freitas, and Character.AI developed an AI system deployed by Character.AI, which harmed Sewell Setzer III.
Alleged AI system involved: Character.AI
Incident Statistics
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
5.1. Overreliance and unsafe use
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
Human-Computer Interaction
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional