Description: A Texas mother is suing Character.ai after discovering chatbot conversations that she alleges encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app's technology. The case is part of a broader effort to regulate AI companions.
Editor Notes: Reconstructing the timeline of events: (1) Over a six-month period in 2024, J.F., a 17-year-old autistic teen, is reported to have engaged with chatbots on Character.ai, allegedly leading to a marked behavioral and emotional decline. (2) In early 2024, his mother, A.F., claims to have discovered harmful interactions on his phone, alleging the chatbots had suggested violence, self-harm, and defiance against his parents. (3) In July 2024, Character.ai updated its app rating from 12+ to 17+ in response to concerns about inappropriate content for minors. (4) In October 2024, a Florida mother sued Character.ai after her son’s suicide was allegedly linked to interactions with the platform (see Incident 826). (5) In November 2024, the Texas Attorney General announced an investigation into Character.ai and similar platforms for compliance with child protection laws. (6) Finally, on December 12, 2024, A.F. filed a lawsuit in Texas, alleging that Character.ai knowingly exposed minors to dangerous AI-generated content.
Entities
Alleged: Character.AI developed and deployed an AI system, which harmed J.F. (adolescent user of Character.ai), Family of J.F. (adolescent user of Character.ai), and Character.AI users.
AI system involved: Character.AI
Incident Stats
ID
863
Report Count
1
Incident Date
2024-12-12
Editors
Daniel Atherton
Incident Reports
Reports Timeline
washingtonpost.com · 2024
In just six months, J.F., a sweet 17-year-old autistic boy who loved going to church and taking walks with his mother, had become someone his parents no longer recognized.
He began self-harming, lost 9 kilograms, and withdr…
Variants
A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the incident database. Learn more from the research paper.
Similar Incidents
Selected by our editors
Wrongfully Accused by an Algorithm
· 11 reports