AI Incident Database
Entities

Claude

Incidents in which this entity is an implicated system

Incident 1054 (5 Reports)
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

2025-04-23

In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.


Incident 975 (1 Report)
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incident 1026 (1 Report)
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

2025-04-12

Substack user @interruptingtea reports that during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.


Incident 1074 (1 Report)
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel

2025-05-15

In a legal filing in Universal Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included incorrect article titles and author names, were not caught during manual review. Anthropic characterized the issue as an honest mistake and apologized in court.


Related Entities
Other entities that are related to the same incidents. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities. Incident titles are given in the list above.

  • Character.AI: Deployer in Incident 975 (1 report)
  • Spicy Chat: Deployer in Incident 975 (1 report)
  • Chub AI: Deployer in Incident 975 (1 report)
  • CrushOn.AI: Deployer in Incident 975 (1 report)
  • JanitorAI: Deployer in Incident 975 (1 report)
  • Unidentified online communities using chatbots: Deployer in Incident 975 (1 report)
  • OpenAI: Developer and Deployer in Incident 1026 (1 report); Developer in Incident 975 (1 report)
  • Anthropic: Developer and Deployer in Incidents 1026 and 1074 (1 report each); Harmed by Incident 1074 (1 report); Developer in Incidents 1054 (5 reports) and 975 (1 report)
  • Google: Developer in Incident 975 (1 report)
  • Vulnerable chatbot users: Harmed by Incident 975 (1 report)
  • Teenagers using chatbots: Harmed by Incident 975 (1 report)
  • Minors using chatbots: Harmed by Incident 975 (1 report)
  • Individuals with eating disorders: Harmed by Incident 975 (1 report)
  • Individuals struggling with self-harm: Harmed by Incident 975 (1 report)
  • ChatGPT: Implicated system in Incidents 975 and 1026 (1 report each)
  • Gemini: Implicated system in Incident 975 (1 report)
  • DeepSeek AI: Developer and Deployer in Incident 1026 (1 report)
  • Substack @interruptingtea: Harmed by Incident 1026 (1 report)
  • General public: Harmed by Incident 1026 (1 report)
  • Emotionally vulnerable individuals: Harmed by Incident 1026 (1 report)
  • DeepSeek: Implicated system in Incident 1026 (1 report)
  • Unknown malicious actors: Deployer in Incident 1054 (5 reports)
  • Unknown cybercriminals: Deployer in Incident 1054 (5 reports)
  • Influence-as-a-service operators: Deployer in Incident 1054 (5 reports)
  • Social media users: Harmed by Incident 1054 (5 reports)
  • People targeted by malware: Harmed by Incident 1054 (5 reports)
  • Job seekers in Eastern Europe: Harmed by Incident 1054 (5 reports)
  • IoT security camera owners: Harmed by Incident 1054 (5 reports)
  • LLM-enhanced malware toolkits: Implicated system in Incident 1054 (5 reports)
  • Claude AI models: Implicated system in Incident 1054 (5 reports)
  • AI-generated social media bots: Implicated system in Incident 1054 (5 reports)
  • Judicial integrity: Harmed by Incident 1074 (1 report)
  • Judicial process: Harmed by Incident 1074 (1 report)
  • Ivana Dukanovic: Harmed by Incident 1074 (1 report)
  • Brittany N. Lovejoy: Harmed by Incident 1074 (1 report)
  • Joseph R. Wetzel: Harmed by Incident 1074 (1 report)
  • Andrew M. Gass: Harmed by Incident 1074 (1 report)
  • Allison L. Stillman: Harmed by Incident 1074 (1 report)
  • Sarang V. Damle: Harmed by Incident 1074 (1 report)
  • Sara E. Sampoli: Harmed by Incident 1074 (1 report)
  • Rachel S. Horn: Harmed by Incident 1074 (1 report)
  • Latham & Watkins LLP: Harmed by Incident 1074 (1 report)
