Claude
Incidents implicated systems
Incident 10545 Reports
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
2025-04-23
In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March 2025. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.
Incident 11865 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
2025-07-31
Across 2024 and 2025, the share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Incident 10743 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
2025-05-15
In a legal filing in Universal Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included incorrect article titles and author names, were not caught during manual review. Anthropic characterized the issue as an honest mistake and apologized in court.
Incident 12012 Reports
Anthropic Reportedly Identifies AI Misuse in Extortion Campaigns, North Korean IT Schemes, and Ransomware Sales
2025-08-27
In August 2025, Anthropic published a threat intelligence report detailing multiple misuse cases of its Claude models. Documented abuses included a large-scale extortion campaign using Claude Code against at least 17 organizations, fraudulent remote employment schemes linked to North Korean operatives, and the development and sale of AI-generated ransomware. Anthropic banned the accounts, implemented new safeguards, and shared indicators with authorities.
Related Entities
Other entities related to the same incident. For example, if an incident's developer is this entity but the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as Developer and Deployer
- Incident 11865 Report
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 10261 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Incidents involved as Developer
Anthropic
Incidents involved as Developer and Deployer
- Incident 11865 Report
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 10743 Report
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel