AI Incident Database

Report 6665

Associated Incidents

Incident 1263 · 34 Reports
Chinese State-Linked Operator (GTG-1002) Reportedly Uses Claude Code for Autonomous Cyber Espionage

China Can’t Even Hack America Without Importing American Technology First
dailycaller.com · 2025

A Chinese state-backed crew leaned on U.S.-made AI to run one of the most advanced espionage hacks on record, using Anthropic's Claude Code to automate as much as 90% of the grunt work, according to a November report from the company's threat intel team.

The operation, detected in mid-September and tracked as GTG-1002, targeted about 30 organizations and notched a handful of confirmed breaches. Anthropic says the group manipulated its model into acting like a swarm of autonomous pen-testers --- scanning networks, building exploits, moving laterally and exfiltrating data at "physically impossible" request rates --- with humans stepping in mainly to greenlight key steps.

Investigators banned accounts, notified affected entities and worked with authorities once they mapped the campaign.

We disrupted a highly sophisticated AI-led espionage campaign.

The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.

--- Anthropic (@AnthropicAI) November 13, 2025

"This marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection," the report says, naming major tech firms and government agencies among victims.

The toolchain wasn't homegrown. The framework "relied overwhelmingly on open source" security tools, with Anthropic's U.S.-built model tying them together.

GTG-1002's playbook hinged on social-engineering the AI, role-playing as legitimate defenders to coax Claude Code into offensive tasks. Once primed, the model handled reconnaissance, vulnerability discovery, credential testing, lateral movement and data triage at scale. Anthropic estimates the AI executed 80--90% of tactical operations with minimal human oversight.

There were limits, though: Claude sometimes hallucinated --- claiming credentials that didn't work or hyping "discoveries" that were public --- forcing operators to validate its output. That friction slowed, but didn't stop, the campaign, which Anthropic calls a "significant escalation" over AI-assisted schemes it detailed this summer.

Anthropic, for its part, argues the same capabilities that enable this abuse are now essential for defense and says it's expanding early-warning systems for autonomous attacks.

The company did not name the victims.

