AI Incident Database

Report 5144

Associated Incidents

Incident 1054 (5 reports): Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development

Anthropic Flags AI-Driven Influence and Cybercrime Operations
channele2e.com · 2025

Anthropic has revealed a concerning example of how AI tools can be weaponized to scale influence operations, according to a report by The Hacker News. The company found that its Claude chatbot was used to create and manage a network of politically themed personas across Facebook and X. These personas were designed to interact with real users and amplify narratives aligned with the interests of countries like the U.A.E., Iran, Kenya, and select European nations. The operation focused less on viral content and more on sustained engagement through moderate political messaging.

What stood out in this case was Claude's role not just in generating content, but in orchestrating bot behavior. The chatbot was used to decide when and how social media bots would comment, like, or reshare content. It generated posts in local languages, mimicked human-like humor and sarcasm, and leveraged structured JSON data to maintain consistency across different platforms. Researchers say the campaign reflects a level of organization that resembles commercial services offered across borders.
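The report does not publish the operators' actual schema, but the role of "structured JSON data" in keeping personas consistent can be illustrated with a hypothetical sketch. Every field name below is an assumption for illustration only; the point is that serializing a persona to a single canonical JSON record gives every automation component the same view of that persona across platforms.

```python
import json

# Hypothetical persona record; the real field names and schema used in
# the operation described by Anthropic are not public.
persona = {
    "persona_id": "example-001",
    "platforms": ["facebook", "x"],
    "language": "en",
    "political_stance": "moderate",
    "engagement_style": {"humor": True, "sarcasm": True},
}

# Serializing with sorted keys yields one canonical string, so any
# component that loads this record reconstructs an identical persona.
blob = json.dumps(persona, sort_keys=True)
restored = json.loads(blob)
assert restored == persona
```

A shared record like this is what lets separate bots on different platforms present the same voice and stance, which is the consistency the researchers describe.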

Anthropic also uncovered additional misuse of Claude tied to cybercriminal activity. One actor used it to scrape leaked credentials and build brute-force attack tools aimed at internet-facing systems. Another used the AI model to parse data from Telegram logs and automate script generation. These instances reflect how AI tools can be manipulated to lower the barrier for engaging in sophisticated cyberattacks, especially for novice threat actors.

Two more examples illustrate the growing risk. One campaign polished job-scam content aimed at Eastern European job seekers, while another saw a low-skill attacker build malware with AI assistance. Together, these cases show how AI is being woven into threat workflows across multiple domains, from social influence to credential theft, and underscore the urgency of guardrails that address both social and technical misuse.


2024 - AI Incident Database