AI Incident Database
Entities

Claude

Incidents implicated systems

Incident 1054 · 5 Reports
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

2025-04-23

In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.

Incident 975 · 1 Report
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.

Incident 1026 · 1 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

2025-04-12

Substack user @interruptingtea reports that during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.

Incident 1074 · 1 Report
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

2025-05-15

In a legal filing in Concord Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included incorrect article titles and author names, were not caught during manual review. Anthropic characterized the issue as an honest mistake and apologized in court.


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity

Character.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Chub AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

CrushOn.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

JanitorAI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

OpenAI

Incidents involved as both Developer and Deployer
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Anthropic

Incidents involved as both Developer and Deployer
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Incidents involved as Developer
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Google

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Vulnerable chatbot users

Affected by incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Teenagers using chatbots

Affected by incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Minors using chatbots

Affected by incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals with eating disorders

Affected by incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals struggling with self-harm

Affected by incidents
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

ChatGPT

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

Gemini

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

DeepSeek AI

Incidents involved as both Developer and Deployer
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

Substack @interruptingtea

Affected by incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

General public

Affected by incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

Emotionally vulnerable individuals

Affected by incidents
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

DeepSeek

Incidents implicated systems
  • Incident 1026
    1 Report

    Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

Entity

Unknown malicious actors

Incidents involved as Deployer
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Unknown cybercriminals

Incidents involved as Deployer
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Influence-as-a-service operators

Incidents involved as Deployer
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Social media users

Affected by incidents
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

People targeted by malware

Affected by incidents
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Job seekers in Eastern Europe

Affected by incidents
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

IoT security camera owners

Affected by incidents
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

LLM-enhanced malware toolkits

Incidents implicated systems
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Claude AI models

Incidents implicated systems
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

AI-generated social media bots

Incidents implicated systems
  • Incident 1054
5 Reports

    Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Entity

Judicial integrity

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Judicial process

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Ivana Dukanovic

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Brittany N. Lovejoy

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Joseph R. Wetzel

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Andrew M. Gass

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Allison L. Stillman

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Sarang V. Damle

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Sara E. Sampoli

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Rachel S. Horn

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

Entity

Latham & Watkins LLP

Affected by incidents
  • Incident 1074
    1 Report

    Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

