Claude
Incidents implicated systems
Incident 1054 · 5 Reports
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
2025-04-23
In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.
Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
2025-07-31
Across 2024 and 2025, the share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Incident 1395 · 4 Reports
Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale
2026-02-23
Anthropic said it identified large-scale campaigns that used fraudulent accounts and proxy services to generate high volumes of Claude interactions to extract model capabilities for competitor training ("distillation"). Anthropic attributed the activity to DeepSeek, Moonshot, and MiniMax and said it involved millions of exchanges across thousands of accounts, violating its terms and access restrictions. Anthropic described detection measures, account controls, and indicator-sharing in response.
Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
2025-05-15
In a legal filing in Concord Music Group et al. v. Anthropic, lawyers for Anthropic acknowledged that expert witness testimony submitted in the case contained erroneous citations generated by the company's Claude AI system. The filing stated that the inaccuracies, which included an incorrect article title and author names, were not caught during manual review. Anthropic characterized the issue as an honest citation mistake and apologized to the court.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity and the deployer is another entity, they are marked as related entities.
OpenAI
Incidents involved as both Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 1026 · 1 Report
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Incidents involved as Developer
Anthropic
Incidents involved as both Developer and Deployer
- Incident 1186 · 5 Reports
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
- Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
Incidents Harmed By
- Incident 1395 · 4 Reports
Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale
- Incident 1074 · 3 Reports
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel