AI Incident Database

Incident 1054: Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Description: In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.
Editor Notes: Timeline note: Anthropic's report details separate case studies from March 2025, but the findings were published on 04/23/2025, which is the date assigned to this incident ID.


Entities

Alleged: Anthropic developed an AI system deployed by Unknown malicious actors, Unknown cybercriminals, and Influence-as-a-service operators, which harmed social media users, People targeted by malware, Job seekers in Eastern Europe, and IoT security camera owners.
Alleged implicated AI systems: LLM-enhanced malware toolkits, Claude AI models, Claude, and AI-generated social media bots

Incident Stats

Incident ID: 1054
Report Count: 5
Incident Date: 2025-04-23
Editors: Daniel Atherton

Incident Reports


Detecting and Countering Malicious Uses of Claude: March 2025
anthropic.com · 2025

We are committed to preventing misuse of our Claude models by adversarial actors while maintaining their utility for legitimate users. While our safety measures successfully prevent many harmful outputs, threat actors continue to explore me…

Why Prompts Are the New IOCs You Didn’t See Coming!
blog.securitybreak.io · 2025

LLMs and generative AI systems are rapidly deployed across industries, and their scale is creating fresh opportunities for threat actors.

Recently, a threat report from Anthropic discussed malicious uses of the AI model Claude. While the re…

Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign
thehackernews.com · 2025

Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an "influence-as-a-service" operation to engage with authentic accounts across Facebook and X.

The sophisticated activit…

Anthropic Flags AI-Driven Influence and Cybercrime Operations
channele2e.com · 2025

Anthropic has revealed a concerning example of how AI tools can be weaponized to scale influence operations, according to a report by The Hacker News. The company found that its Claude chatbot was used to create and manage a network of poli…

Claude AI Abused in Influence-as-a-Service Operations and Campaigns
gbhackers.com · 2025

Claude AI, developed by Anthropic, has been exploited by malicious actors in a range of adversarial operations, most notably a financially motivated "influence-as-a-service" campaign.

This operation leveraged Claude's advanced language capa…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

AI-Generated Profiles Used in Disinformation Campaign Targeting Ukrainians
Feb 2022 · 4 reports
Fake LinkedIn Profiles Created Using GAN Photos
Feb 2022 · 4 reports
DALL-E 2 Reported for Gender and Racially Biased Outputs
Apr 2022 · 3 reports
