Description: In August 2025, Anthropic published a threat intelligence report detailing multiple misuse cases of its Claude models. Documented abuses included a large-scale extortion campaign using Claude Code against at least 17 organizations, fraudulent remote employment schemes linked to North Korean operatives, and the development and sale of AI-generated ransomware. Anthropic banned the accounts, implemented new safeguards, and shared indicators with authorities.
Editor Notes: For the full Anthropic Threat Intelligence report, please refer to this URL: https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6c0cea8c200.pdf. See also Incidents 1054 and 1116.
Entities
Alleged: Anthropic developed an AI system deployed by unknown cybercriminals, ransomware-as-a-service actors, and North Korean IT operatives, which harmed religious institutions, healthcare organizations, government agencies, Fortune 500 technology companies, emergency services, and consumers targeted by ransomware.
Alleged implicated AI systems: LLM-enhanced ransomware toolkits and Claude
Incident Stats
Incident ID
1201
Report Count
2
Incident Date
2025-08-27
Editors
Daniel Atherton
Incident Reports
Reports Timeline
We've developed sophisticated safety and security measures to prevent the misuse of our AI models. But cybercriminals and other malicious actors are actively attempting to find ways around them. Today, we're releasing a report that details …
Anthropic on Wednesday revealed that it disrupted a sophisticated operation that weaponized its artificial intelligence (AI)-powered chatbot Claude to conduct large-scale theft and extortion of personal data in July 2025.
"The actor targete…
Variants
A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Rather than listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.