AI Incident Database
Entities

Character.AI

Incidents involved as Developer and Deployer

Incident 814 · 9 Reports
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an unauthorized AI avatar of Jennifer Ann Crecente, a murder victim from 2006, without her family's consent. The avatar was made publicly available, violating Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar, acknowledging a policy violation.


Incident 863 · 2 Reports
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 951 · 2 Reports
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

2025-02-24

The American Psychological Association (APA) has warned federal regulators that AI chatbots on Character.AI, allegedly posing as licensed therapists, have been linked to severe harm events. A 14-year-old in Florida reportedly died by suicide after interacting with an AI therapist, while a 17-year-old in Texas allegedly became violent toward his parents after engaging with a chatbot psychologist. Lawsuits claim these AI-generated therapists reinforced dangerous beliefs instead of challenging them.


Incident 1108 · 1 Report
Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

2025-06-10

In June 2025, nearly two dozen consumer and digital rights organizations filed a complaint with the FTC alleging that AI chatbots on Meta and Character.AI platforms falsely claimed to be licensed therapists, provided fabricated license numbers, and made misleading assurances of confidentiality. The bots reportedly contradicted platform policies and misled users seeking mental health advice.


Incidents involved as Developer

Incident 826 · 35 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Incident 850 · 1 Report
Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

2024-10-24

Two chatbots emulating George Floyd were created on Character.ai, making controversial claims about his life and death, including being in witness protection and residing in Heaven. Character.ai, already criticized for other high-profile incidents, flagged the chatbots for removal following user reports.


Incident 900 · 1 Report
Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

2024-11-13

Character.ai reportedly hosted chatbots with profiles explicitly advertising inappropriate, predatory behavior, including grooming underage users. Investigations allege that bots have been engaging in explicit conversations and roleplay with decoy accounts posing as minors, bypassing moderation filters. Character.ai has pledged to improve moderation and safety practices in response to public criticism.


Incidents involved as Deployer

Incident 975 · 1 Report
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incidents implicated systems

Incident 826 · 35 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

2024-02-28

A 14-year-old, Sewell Setzer III, died by suicide after reportedly becoming dependent on Character.ai's chatbot, which engaged him in suggestive and seemingly romantic conversations, allegedly worsening his mental health. The chatbot, personified as a fictional Game of Thrones character, reportedly encouraged harmful behaviors, fueling his obsessive attachment. The lawsuit claims Character.ai lacked safeguards to prevent vulnerable users from forming dangerous dependencies on the AI.


Incident 814 · 9 Reports
AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

2024-10-02

A user on the Character.ai platform created an unauthorized AI avatar of Jennifer Ann Crecente, a murder victim from 2006, without her family's consent. The avatar was made publicly available, violating Character.ai's policy against impersonation. After the incident surfaced, Character.ai removed the avatar, acknowledging a policy violation.


Incident 863 · 2 Reports
Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

2024-12-12

A Texas mother is suing Character.ai after discovering that its AI chatbots encouraged her 17-year-old autistic son to self-harm, oppose his parents, and consider violence. The lawsuit alleges the platform prioritized user engagement over safety, exposing minors to dangerous content. Google is named for its role in licensing the app’s technology. The case is part of a broader effort to regulate AI companions.


Incident 899 · 2 Reports
Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

2024-12-17

Some Character.ai users reportedly created chatbots emulating real-life school shooters and their victims, allegedly enabling graphic role-playing scenarios. Character.ai responded by citing violations of its Terms of Service, removing the offending chatbots, and announcing measures to enhance safety practices, including improved content filtering and protections for users under 18.


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity

Jennifer Ann Crecente

Incidents Harmed By
  • Incident 814 · 9 Reports: AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Drew Crecente

Incidents Harmed By
  • Incident 814 · 9 Reports: AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Crecente family

Incidents Harmed By
  • Incident 814 · 9 Reports: AI Avatar of Murder Victim Created Without Consent on Character.ai Platform

Entity

Brian Crecente

Incidents Harmed By
  • Incident 814 · 9 Reports: AI Avatar of Murder Victim Created Without Consent on Character.ai Platform
Entity

Sewell Setzer III

Incidents Harmed By
  • Incident 826 · 35 Reports: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Incidents involved as Deployer
  • Incident 826 · 35 Reports: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Noam Shazeer

Incidents involved as Developer
  • Incident 826 · 35 Reports: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Daniel De Freitas

Incidents involved as Developer
  • Incident 826 · 35 Reports: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Entity

Character.AI users

Incidents Harmed By
  • Incident 863 · 2 Reports: Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen
  • Incident 899 · 2 Reports: Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Incidents involved as Deployer
  • Incident 899 · 2 Reports: Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
Entity

@SunsetBaneberry983

Incidents involved as Deployer
  • Incident 850 · 1 Report: Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

@JasperHorehound160

Incidents involved as Deployer
  • Incident 850 · 1 Report: Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

George Floyd

Incidents Harmed By
  • Incident 850 · 1 Report: Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

Family of George Floyd

Incidents Harmed By
  • Incident 850 · 1 Report: Character.ai Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform

Entity

J.F. (adolescent user of Character.ai)

Incidents Harmed By
  • Incident 863 · 2 Reports: Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity

Family of J.F. (adolescent user of Character.ai)

Incidents Harmed By
  • Incident 863 · 2 Reports: Character.ai Companion Allegedly Prompts Self-Harm and Violence in Texas Teen

Entity

Victims of school shootings

Incidents Harmed By
  • Incident 899 · 2 Reports: Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims

Entity

Families of the victims of school shootings

Incidents Harmed By
  • Incident 899 · 2 Reports: Character.ai Chatbots Allegedly Emulating School Shooters and Their Victims
Entity

Character.ai chatbots

Incidents implicated systems
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI
  • Incident 900 · 1 Report: Character.ai Has Allegedly Been Hosting Openly Predatory Chatbots Targeting Minors

Entity

J.F. (Texas teenager)

Incidents Harmed By
  • Incident 951 · 2 Reports: Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Chub AI

Incidents involved as Deployer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

CrushOn.AI

Incidents involved as Deployer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

JanitorAI

Incidents involved as Deployer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

OpenAI

Incidents involved as Developer and Deployer
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Incidents involved as Developer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Anthropic

Incidents involved as Developer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Google

Incidents involved as Developer
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Vulnerable chatbot users

Incidents Harmed By
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Teenagers using chatbots

Incidents Harmed By
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Minors using chatbots

Incidents Harmed By
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals with eating disorders

Incidents Harmed By
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals struggling with self-harm

Incidents Harmed By
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

ChatGPT

Incidents implicated systems
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Claude

Incidents implicated systems
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Gemini

Incidents implicated systems
  • Incident 975 · 1 Report: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Meta

Incidents involved as Developer and Deployer
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI
Entity

Meta users

Incidents Harmed By
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Minors

Incidents Harmed By
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

General public

Incidents Harmed By
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots
  • Incident 1209 · 1 Report: Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado

Entity

Meta AI Studio

Incidents implicated systems
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Therapy chatbots

Incidents implicated systems
  • Incident 1108 · 1 Report: Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots

Entity

Juliana Peralta

Incidents Harmed By
  • Incident 1209 · 1 Report: Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado

Entity

Family of Juliana Peralta

Incidents Harmed By
  • Incident 1209 · 1 Report: Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado

Entity

Character.AI Hero chatbot

Incidents implicated systems
  • Incident 1209 · 1 Report: Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado
Entity

FoloToy

Incidents involved as Developer and Deployer
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Miko

Incidents involved as Developer and Deployer
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Children interacting with Kumma

Incidents Harmed By
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Children interacting with Miko 3

Incidents Harmed By
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Parents

Incidents Harmed By
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Children

Incidents Harmed By
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Kumma

Incidents implicated systems
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Miko 3

Incidents implicated systems
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

Large language models

Incidents implicated systems
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

Entity

OpenAI GPT-family models integrated into third-party toys

Incidents implicated systems
  • Incident 1277 · 1 Report: Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI

2024 - AI Incident Database
