Unknown cybercriminals
Incidents involved as Deployer
Incident 1048 · 19 Reports
Tennessee Meteorologist's Likeness Reportedly Used in Sextortion Campaign Involving Purported AI-Generated Content
2025-01-10
Bree Smith, a meteorologist in Nashville, Tennessee, was reportedly targeted in a sextortion campaign involving purported AI-generated deepfakes that manipulated her likeness into explicit content. According to reporting, Smith's face was digitally placed onto semi-nude and nude bodies, with the resulting media circulated online by impersonators seeking money. Smith documented the spread of these accounts and has advocated for legislative responses, including a new Tennessee bill addressing deepfake-related harms.
Incident 941 · 13 Reports
AI-Driven Phishing Scam Uses Deepfake Robocalls to Target Gmail Users in Credential Theft Campaign
2025-02-17
A sophisticated AI-powered phishing scam has reportedly been targeting between 1.8 and 2.5 billion Gmail users, allegedly using deepfake robocalls and phishing emails to steal credentials. Attackers impersonate Google security, claim suspicious account activity, and direct victims to fake Google login pages. The stolen credentials have reportedly been used for identity theft, banking fraud, and session hijacking. The FBI has urged vigilance as AI increases the scale and effectiveness of such attacks.
Incident 147 · 6 Reports
Reported AI-Cloned Voice Used to Deceive Hong Kong Bank Manager in Purported $35 Million Fraud Scheme
2020-01-15
In January 2020, a Hong Kong-based bank manager for a Japanese company reportedly authorized $35 million in transfers after receiving a call from someone whose voice matched the company director's. According to Emirati investigators, scammers used AI-based voice cloning to impersonate the executive. The fraud allegedly involved at least 17 individuals and reportedly led to global fund transfers that triggered a UAE investigation. U.S. authorities were reportedly later asked to help trace part of the funds sent to U.S. banks.
Incident 1054 · 5 Reports
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
2025-04-23
In April 2025, Anthropic published a report detailing several misuse cases involving its Claude LLM, all detected in March 2025. These included an "influence-as-a-service" operation that orchestrated over 100 social media bots; an effort to scrape and test leaked credentials for security camera access; a recruitment fraud campaign targeting Eastern Europe; and a novice actor developing sophisticated malware. Anthropic banned the accounts involved but could not confirm downstream deployment.