AI Incident Database

Incident 1395: Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale

Responded
Description: Anthropic said it identified large-scale campaigns that used fraudulent accounts and proxy services to generate high volumes of Claude interactions and extract model capabilities for competitor training ("distillation"). It attributed the activity to DeepSeek, Moonshot, and MiniMax, saying it involved millions of exchanges across thousands of accounts in violation of its terms and access restrictions. In response, Anthropic described detection measures, account controls, and indicator sharing.
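For readers new to the term, "distillation" in this context means harvesting a stronger model's outputs as supervised training data for another model. Below is a minimal, self-contained Python sketch of that general pattern only; query_teacher is a hypothetical stub, and nothing here reflects Anthropic's API, the alleged campaigns' tooling, or any lab's actual pipeline. At the scale alleged in this incident, a loop like this would run across millions of prompts and thousands of accounts.

import json

# Hypothetical stand-in for a call to a hosted "teacher" model's API.
# This is an illustrative stub, not Anthropic's API or any real client.
def query_teacher(prompt: str) -> str:
    return f"(teacher model's answer to: {prompt})"

# Prompts chosen to elicit capability-rich outputs (reasoning, code, etc.).
prompts = [
    "Explain how a hash map handles collisions.",
    "Write a Python function that merges two sorted lists.",
    "Summarize the main ideas of reinforcement learning.",
]

# Harvest prompt/response pairs into the JSONL format commonly used
# for supervised fine-tuning of a "student" model.
with open("distill_dataset.jsonl", "w") as f:
    for prompt in prompts:
        pair = {"prompt": prompt, "completion": query_teacher(prompt)}
        f.write(json.dumps(pair) + "\n")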

Entities

View all entities
Alleged: Anthropic developed an AI system deployed by DeepSeek, Moonshot AI, MiniMax, and Proxy reseller services, which harmed Anthropic, Claude users, Anthropic customers, and National security and intelligence stakeholders.
Alleged implicated AI systems: Claude, Claude API, Distillation, Model extraction, Account farming, and Proxy access infrastructure

Incident Stats

Incident ID: 1395
Report Count: 4
Incident Date: 2026-02-23
Editors: Daniel Atherton

Incident Reports

Detecting and preventing distillation attacks
anthropic.com · 2026
Anthropic post-incident response

We have identified industrial-scale campaigns by three AI laboratories---DeepSeek, Moonshot, and MiniMax---to illicitly extract Claude's capabilities to improve their own models. These labs generated over 16 million exchanges with Claude th…

Anthropic Says DeepSeek, MiniMax Distilled AI Models for Gains
bloomberg.com · 2026

Anthropic PBC said three leading artificial intelligence developers in China worked to "illicitly extract" results from its AI models to bolster the capabilities of rival products, adding to growing concerns in the US about Chinese firms im…

Anthropic accuses Chinese labs of trying to illicitly take Claude’s capabilities
cyberscoop.com · 2026

Anthropic on Monday accused three Chinese artificial intelligence laboratories of stealthily trying to siphon Claude's capabilities for their own models, potentially in a way that could fuel offensive cyber operations.

The U.S. AI startup s…

Are China’s ‘AI tigers’ cheating? US rival Anthropic alleges some are
cnn.com · 2026

United States artificial intelligence firm Anthropic is accusing three prominent Chinese AI labs of illegally extracting capabilities from its Claude model to advance their own, claiming it raises national security concerns.

The Chinese uni…
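
The Anthropic response excerpted above mentions detection measures, account controls, and indicator sharing without giving technical detail. Purely as a hedged illustration of the general class of technique, the Python sketch below flags accounts by raw request volume; production abuse detection would combine many signals (proxy and IP reputation, payment fraud indicators, prompt patterns), and none of this is drawn from Anthropic's actual methods.

from collections import Counter

# Toy per-account usage records: (account_id, prompt) pairs.
# Data and threshold are illustrative inventions, not real indicators.
usage_log = [
    ("acct-1", "Explain TCP handshakes."),
    ("acct-1", "Explain UDP checksums."),
    ("acct-1", "Explain BGP route selection."),
    ("acct-2", "Help me plan a birthday party."),
]

VOLUME_THRESHOLD = 3  # unrealistically low, for demonstration only

def flag_suspicious_accounts(log):
    """Flag accounts whose request volume meets a simple threshold.

    Real detection would weigh many signals together; this checks one.
    """
    volumes = Counter(account for account, _ in log)
    return [acct for acct, n in volumes.items() if n >= VOLUME_THRESHOLD]

print(flag_suspicious_accounts(usage_log))  # -> ['acct-1']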

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.