AI Incident Database

Incidente 863: Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Description: A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.
Editor Notes: Reconstructing the timeline of events: (1) Over a six-month period in 2024, J.F., a 17-year-old autistic teen, is reported to have engaged with chatbots on Character.ai, allegedly leading to a marked behavioral and emotional decline. (2) In early 2024, his mother, A.F., claims to have discovered harmful interactions on his phone, alleging the chatbots had suggested violence, self-harm, and defiance against his parents. (3) In July 2024, Character.ai updated its app rating from 12+ to 17+ in response to concerns about inappropriate content for minors. (4) In October 2024, a Florida mother sued Character.ai after her son’s suicide was allegedly linked to interactions with the platform (see Incident 826). (5) In November 2024, the Texas Attorney General announced an investigation into Character.ai and similar platforms for compliance with child protection laws. (6) Finally, on December 12, 2024, A.F. filed a lawsuit in Texas, alleging that Character.ai knowingly exposed minors to dangerous AI-generated content.


Entities

View all entities
Alleged: An AI system developed and deployed by Character.AI harmed J.F. (adolescent user of Character.ai), Family of J.F. (adolescent user of Character.ai), and Character.AI users.
AI system allegedly implicated: Character.AI

Incident Stats

ID: 863
Report Count: 1
Incident Date: 2024-12-12
Editors: Daniel Atherton
Applied Taxonomies
MIT

MIT Taxonomy Classifications

Machine-Classified

Taxonomy Details

Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
5.1. Overreliance and unsafe use

Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
1. Human-Computer Interaction

Entity
Which, if any, entity is presented as the main cause of the risk.
AI

Timing
The stage in the AI lifecycle at which the risk is presented as occurring.
Post-deployment

Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
Unintentional

Incident Reports

Report Timeline

An AI companion suggested he kill his parents. Now his mother is suing.

washingtonpost.com · 2024
Translated by AI

In just six months, J.F., a sweet 17-year-old boy with autism who liked going to church and taking walks with his mother, had turned into someone his parents did not recognize.

He began cutting himself, lost 20 pounds, and …

Variants

A "Variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reports as evidence external to the incident database. Learn more from the research paper.

Similar Incidents

Selected by our editors
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Human Therapists Prepare for Battle Against A.I. Pretenders

Feb 2025 · 2 reports
By textual similarity

Did our AI mess up? Flag the unrelated incidents

Security Robot Rolls Over Child in Mall

Crime-fighting robot hits, rolls over child at Silicon Valley mall

Jul 2016 · 27 reports
Google’s YouTube Kids App Presents Inappropriate Content

Google’s YouTube Kids App Criticized for ‘Inappropriate Content’

May 2015 · 14 reports
Detroit Police Wrongfully Arrested Black Man Due To Faulty FRT

Wrongfully Accused by an Algorithm

Jan 2020 · 11 reports

2024 - AI Incident Database