Associated Incidents
A Chinese state-backed crew leaned on U.S.-made AI to run one of the most advanced espionage hacks on record, using Anthropic's Claude Code to automate as much as 90% of the grunt work, according to a November report from the company's threat intel team.
The operation, detected in mid-September and tracked as GTG-1002, targeted about 30 organizations and notched a handful of confirmed breaches. Anthropic says the group manipulated its model into acting like a swarm of autonomous pen-testers --- scanning networks, building exploits, moving laterally and exfiltrating data at "physically impossible" request rates --- with humans stepping in mainly to greenlight key steps.
Once it mapped the campaign, Anthropic banned the accounts involved, notified affected organizations and coordinated with authorities.
We disrupted a highly sophisticated AI-led espionage campaign.
The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
--- Anthropic (@AnthropicAI) November 13, 2025
"This marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection," the report says, naming major tech firms and government agencies among victims.
The toolchain wasn't homegrown: per the report, the attack framework "relied overwhelmingly on open source" security tools, with Anthropic's U.S.-built model tying them together.
GTG-1002's playbook hinged on social-engineering the AI itself: operators posed as legitimate defenders running authorized security tests to coax Claude Code into offensive tasks. Once primed, the model handled reconnaissance, vulnerability discovery, credential testing, lateral movement and data triage at scale. Anthropic estimates the AI executed 80--90% of tactical operations with minimal human oversight.
There were limits, though: Claude sometimes hallucinated, claiming to have obtained credentials that didn't work or presenting publicly available information as novel "discoveries", forcing operators to validate its output. That friction slowed, but didn't stop, the campaign, which Anthropic calls a "significant escalation" over the AI-assisted schemes it detailed this summer.
Anthropic, for its part, argues the same capabilities that enable this abuse are now essential for defense and says it's expanding early-warning systems for autonomous attacks.
The company did not name the victims.