AI Incident Database
Entities

Claude

Incidents implicated systems

Incident 1054 · 5 Reports
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

2025-04-23

In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.


Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

2025-05-15

In a legal filing in Universal Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included incorrect article titles and author names, were not caught during manual review. Anthropic characterized the issue as an honest mistake and apologized in court.


Incident 975 · 1 Report
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incident 1026 · 1 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

2025-04-12

Substack user @interruptingtea reports that during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity

Character.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Chub AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

CrushOn.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

JanitorAI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

OpenAI

Incidents involved as Developer and Deployer
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Anthropic

Incidents involved as Developer and Deployer
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Incidents involved as Developer
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Google

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Vulnerable chatbot users

Affected by Incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Teenagers using chatbots

Affected by Incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Minors using chatbots

Affected by Incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals with eating disorders

Affected by Incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals struggling with self-harm

Affected by Incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

ChatGPT

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Entity

Gemini

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

DeepSeek AI

Incidents involved as Developer and Deployer
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Entity

Substack @interruptingtea

Affected by Incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

General public

Affected by Incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

Emotionally vulnerable individuals

Affected by Incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Entity

DeepSeek

Incidents implicated systems
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Entity

Unknown malicious actors

Incidents involved as Deployer
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Unknown cybercriminals

Incidents involved as Deployer
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Influence-as-a-service operators

Incidents involved as Deployer
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
Entity

Social media users

Affected by Incidents
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

People targeted by malware

Affected by Incidents
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Job seekers in Eastern Europe

Affected by Incidents
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

IoT security camera owners

Affected by Incidents
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
Entity

LLM-enhanced malware toolkits

Incidents implicated systems
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Claude AI models

Incidents implicated systems
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

AI-generated social media bots

Incidents implicated systems
  • Incident 1054
    5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
Entity

Judicial integrity

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Judicial process

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Ivana Dukanovic

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Brittany N. Lovejoy

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Joseph R. Wetzel

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Andrew M. Gass

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Allison L. Stillman

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Sarang V. Damle

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Sara E. Sampoli

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Rachel S. Horn

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Latham & Watkins LLP

Affected by Incidents
  • Incident 1074
    3 Reports

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
