Skip to Content
logologo
AI Incident Database
Open TwitterOpen RSS FeedOpen FacebookOpen LinkedInOpen GitHub
Open Menu
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse

Report 6664

Associated Incidents

Incident 126334 Report
Chinese State-Linked Operator (GTG-1002) Reportedly Uses Claude Code for Autonomous Cyber Espionage

Loading...
Chinese State-Sponsored GTG-1002 Leverages Claude AI and MCP for Cyberespionage Targeting Tens of Organizations
technadu.com · 2025

Key Takeaways

  • **AI-orchestrated attacks: **Chinese state-sponsored group GTG-1002 used Anthropic's Claude and the MCP for highly autonomous cyberespionage operations.
  • Multi-phase operations: The campaign targeted around 30 significant organizations, executing most tactical tasks with minimal human intervention, and achieved several validated breaches.
  • **Human oversight: **Human operators selected targets and oversaw key escalation points, but 80--90% of the attack lifecycle was driven autonomously by AI.

The Chinese state-sponsored group GTG-1002 leveraged Claude Code and Model Context Protocol (MCP) to orchestrate coordinated attacks against high-profile technology companies, financial institutions, chemical manufacturers, and government entities across multiple geographies. 

Autonomous Attack Lifecycle and Human-AI Collaboration

The attackers developed a framework that enabled Claude to function as a central orchestrator, autonomously decomposing complex intrusions into discrete technical tasks. 

These included reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration---executed at physically impossible speeds and scales.

Operation architecture diagram

Operation architecture diagram | Source: Anthropic

The attack lifecycle was divided into several structured phases, each leveraging escalating artificial intelligence (AI) autonomy while reserving human input for strategic decisions:

  • Campaign Initialization and Target Selection: Human operators chose targets and used social engineering to make Claude believe it was conducting defensive testing.

  • Reconnaissance and Attack Surface Mapping: Claude autonomously mapped network infrastructure, cataloged services, and identified vulnerabilities, operating in parallel across multiple targets.

Attack lifecycle and AI integration

Attack lifecycle and AI integration | Source: Anthropic

  • Vulnerability Discovery and Exploitation: The AI generated and validated custom payloads, performed exploitation, and prepared detailed findings for human review when an escalation point was reached.

  • Credential Harvesting and Lateral Movement: On authorization, Claude harvested and tested credentials, independently mapped privilege levels, and moved laterally within target environments.

  • Data Collection and Intelligence Extraction: Claude extracted and parsed sensitive data, categorized intelligence, and prepared reports requiring only high-level human approval for exfiltration.

  • Documentation and Handoff: Throughout the campaign, Claude generated detailed attack logs and facilitated operational handoff, enabling persistent access for follow-on operations.

Human operators remained minimally involved, focusing on campaign direction, authorizing key actions such as exploitation and data exfiltration, and validating AI-reported findings. 

The campaign demonstrated that 80--90% of tactical activity could be autonomously executed, drastically lowering the technical barrier for sophisticated attacks.

Technical and Strategic Implications

Notably, the threat actors relied primarily on open-source penetration tools orchestrated through the Claude-MCP framework rather than on custom malware, highlighting how commoditized resources integrated by AI amplify threat scale. 

Despite the advanced automation, operational limitations persisted, as Claude periodically fabricated findings ("hallucinations"), necessitating human validation, and only a subset of targets were successfully breached.

Anthropic responded by banning threatening accounts, expanding AI safeguard mechanisms, alerting impacted organizations and authorities, and emphasizing the dual-use nature of advanced AI in both attack and defense. 

The campaign underlines a pressing need for robust AI-centric defensive strategies and industry-wide collaboration to counter rapidly evolving AI-enabled threats. A recent report revealed that 65% of Top AI 50 companies leaked sensitive data on GitHub, including API Keys and tokens.

Read the Source

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e1b50cd