AIID Blog

AI Incident Roundup – April and May 2025

Posted 2025-06-15 by Daniel Atherton.

Our review newsletter recapping new incident IDs in the AI Incident Database and examining the trends they reveal.

🗄 Trending in the AIID

Across April and May 2025, over seventy new incident IDs were added to the AI Incident Database. These incident IDs cover a range of current and ongoing events as well as newly surfaced harms from earlier months and years. The continued expansion of AI systems into everyday life and infrastructures has created a dense web of vulnerabilities in which harms are both more common and more layered; untangling the interactions among systems, actors, and sociotechnical outcomes exposes tensions we are only beginning to understand. But it also offers new ground for inquiry and reform.

Many of the incidents from this period point to the continued intensification of trends we have previously noted, such as deepfake scams, AI-enhanced fraud, abuse of generative tools to produce nonconsensual or harmful content, and the diffusion of mis- and disinformation at global scale. At the same time, we have also been documenting modes of harm such as underrecognized forms of epistemic damage and the ambient abuse of AI systems, as well as failures in safety interventions. We are expanding our capacity to cover AI incidents arising throughout the Global South and are actively monitoring press from around the world. As ever, a methodological caveat applies to the work of the AI Incident Database: the incident data we record are bounded by what enters public view, and the archive remains a lagging indicator. What follows is a thematic snapshot of what has entered view in this two-month period.

Financial Fraud, Voice Cloning, and Impersonation Scams

Voice cloning and identity manipulation continue to drive high-impact scams across borders and demographics, and on many different platforms. Several recent cases (Incidents 1008, 1012, 1045, 1053, 1065, 1085) involve scammers deploying AI-generated voices of family members (children, siblings) and friends to coerce money, often exploiting urgency and emotional distress. In parallel, we’ve seen the use of AI to create full synthetic identities or impersonate trusted figures, from fake job applicants (1021), to forged support bots (1039), to clone sites (1019), to seemingly legitimate documentation and branding materials (1047, 1064, 1066).

At scale, these methods are being weaponized to attack infrastructure (1037, 1054, 1055), exploit global financial networks (1067), and fraudulently obtain federal aid (1038). The breadth of these operations, including vishing campaigns targeting government officials (1077, 1085), has only reinforced concerns about how quickly AI-enhanced fraud has evolved into a systemic threat. What makes many of these incidents especially troubling is the way they collapse social trust into technical vulnerability. The voice on the phone does not just sound familiar; it is familiar, generated to evoke a specific emotional response and deployed at precisely the moment when verification is least likely. In these cases, the harm is not purely financial; it stems from the exploitation of relational knowledge, often leaving victims doubting their own instincts and memories. We can call this a weaponization of intimacy, one that is made scalable through generative tools, and one that transforms opportunistic scams into engineered manipulations of trust itself.

Deepfakes and Disinformation Campaigns

We have been continuing to track a global sprawl of deepfake disinformation incidents, many implicating high-profile political figures, and many now being assigned incident IDs well after the fact, as we revisit overlooked cases from the past year or more. Personalities across Africa (1022–1025, 1032–1035, 1051–1052), South America (1079), Europe (1056, 1063–1064), Australia and New Zealand (1076), and South Asia (1029) were depicted in manipulated videos advancing false claims or scam campaigns. These range from election meddling to false endorsements and geopolitical deception, as in Sudan, where multiple audio-based operations (1087–1089) used AI-generated content to fill an information vacuum. State-aligned and gray-zone actors also played a significant role. Russia-linked Operation Overload (also known as Matryoshka or Storm-1679) (1060–1061) was one of the most visible campaigns of the period, while reports tied to Anthropic (1054) and Microsoft (1037) detailed the role of AI tools in credential stuffing, malware deployment, and covert recruitment efforts.

The geographic breadth and tactical diversity of deepfake disinformation campaigns stand in contrast to the slower pace of institutional documentation and incident indexing. While many of these cases were publicly reported when they occurred, they were only recently added to the database, a result of resource limitations and the evolving process by which incidents become legible as discrete, documentable events. This is part of why we continue to encourage the submission of incident reports. Visibility often depends not just on what happens, but on when and how it is recognized. By returning to earlier cases and assigning them IDs, the database helps re-anchor these events in time. They can be regarded not as passing controversies but as components of a persistent and expanding harm environment. The incidents span familiar tactics, but viewed in aggregate and with temporal distance, they begin to reveal architected patterns and structural logics. The database assigns IDs based on when incidents are processed, not when they occur, which means chronology often gives way to a different kind of sequencing, one shaped by recognition and verification and by the rhythms of institutional attention. The Sudan cluster (1087–1089), for example, shows how AI-generated content can fill communicative voids left by collapsing media infrastructures and disrupted public discourse.

Exploitative and Harmful Content Generation

We saw continued documentation of AI systems being used to generate nonconsensual and sexually explicit material (1013–1014, 1040, 1048, 1078). Notably, several incidents involved minors or targeted vulnerable users, including chatbots encouraging self-harm and suicide (1026, 1031, 1041). Sextortion and sexual impersonation attacks remain transnational (e.g., 1014 in Malta; 1078 in Kentucky), while image-based manipulation (e.g., “nudify” tools exposed in Incident 1010) appears to be widespread and difficult to contain. The recurring theme across these cases is a breakdown in platform governance and safety, especially when users, including children, are able to easily create or encounter simulated harm without adequate guardrails. Incident 1040, for instance, involves the reported case of Meta’s user-created AI companions being implicated in sexually themed conversations with underage personas. These episodes challenge current understandings of responsibility, given the complex chains of authorship and system autonomy involved.

In these cases, AI systems are used to simulate forms of abuse that, despite being artificially generated, can feel immediate and convincing to those on the receiving end. In incidents involving minors, chatbots and user-created AI companions reportedly generated experiences that mimicked coercion or sexualization in ways that left users distressed and exploited. This is a structural problem. Platforms, as mentioned, increasingly enable forms of interaction that outpace existing guardrails. That in turn allows harmful outputs to surface in contexts that were never meaningfully designed for safety. The impersonality of the tools involved does not soften the impact; in many cases, it heightens it. They displace agency in ways that make redress harder to pursue. These incidents press on the limits of current responsibility models in that harm can be authored by no single actor and yet still wreak real devastation.

Institutional Failures, Legal Misuse, and Erroneous Epistemic Authority

April and May also saw the addition of incident IDs involving institutions (including governments, courts, schools, and companies) reportedly mishandling or misjudging the integration of generative AI tools. The Norwegian Supreme Court reportedly received a legal filing containing fabricated citations (1027), while similar issues appeared in high-profile U.S. cases (1073, 1074). Damien Charlotin, through his diligent work with the AI Hallucination Cases Database, is contributing significantly to the field of AI incident tracking with his systematic documentation of LLM confabulations in court cases. In an adjacent example (1009), municipal officials in Tromsø used an AI-assisted report containing false citations to justify school closures. Other examples include a case at Northeastern University involving the institutional and practical ambiguities surrounding a student complaint about a professor’s AI-assisted course materials (1071); a mass data exposure incident tied to an AI ticketing tool in the health sector (1070); and a support bot that hallucinated a login policy, causing subscription cancellations (1039). Even federal agencies were implicated, with the “Make America Healthy Again” report (1084) released under HHS Secretary Robert F. Kennedy Jr. reportedly relying on unverifiable citations and potentially AI-generated misinformation.

The incidents described in this cluster are part of a growing institutional vulnerability that “technical failure” doesn’t quite capture; more accurately, it is epistemic erosion, a collapse in the mechanisms by which organizations vet and stand behind knowledge claims. The unifying thread across the forms these incidents take (legal filings, policy decisions, educational disputes, and public communications) is not simply “AI misuse” but a delegation of judgment to systems that simulate authority without possessing it. Whether in Tromsø’s school closure justification or HHS’s “Make America Healthy Again” report, institutions appear increasingly willing to anchor consequential decisions in outputs that are, at best, unverified and, at worst, fictitious. The performance of textual assertion attempts to stand in for reality. The risk here is also structural, not just reputational. When authorities across a wide range of sectors mistake fluency for truth, the result is not just error but a dilution of institutional legitimacy itself.

Scientific, Methodological, and Malware Harms

This underdiscussed domain of AI-related harm also involves the distortion of knowledge itself. Incident 1044 describes the emergence of “vegetative electron microscopy,” a fabricated term that circulated through the scientific literature due to contamination in training data. Other incidents include an unauthorized AI study conducted on Reddit, reportedly involving fabricated identities created by purported University of Zurich researchers (1043); malware distributed through allegedly fraudulent AI content platforms, such as Noodlophile (1080); and speculative reporting on safety tool bypasses (1016, 1020). These cases speak to a quieter but no less urgent crisis: the degradation of epistemic trust, particularly in professional and scientific domains. Even when harm is not directly personal or financial, the erosion of quality control in informational environments can cause cascading downstream effects, particularly when institutions are slow to adapt or unable to verify the origin and validity of content that appears to come from credible sources.

Concluding Thoughts

Taken together, the incidents documented across April and May offer a portrait of how knowledge and authority may be compromised. Across sectors, we see institutions and individuals increasingly enmeshed in an environment in which synthetic fluency mimics epistemic legitimacy, and technical mediation displaces accountable authorship. It may behoove us to stretch our frameworks beyond the logic of threat vectors and to think in terms of these new conditions of ambiguity: conditions in which harm may be authored by no single actor, in which perception alone can destabilize consensus, and in which the infrastructures of truth are themselves strained by the tools they are meant to contain. Whether through impersonation scams that weaponize intimacy, deepfakes that exploit institutional silence, or fictitious citations that underwrite consequential decisions, the throughline is a pervasive instability in the link between language and reality. In such an ecosystem, as mentioned, the performance of textual assertion attempts to stand in for the labor of proof. And while the incidents cataloged here are individually bounded, they increasingly reveal systemic patterns of institutional unpreparedness and ambient epistemic harm.

🗞️ New Incident IDs in the Database

  • Incident 1008 - AI-Generated Voice Purporting to Be Daughter Allegedly Used to Coerce $2,000 from Colorado Mother (2/10/2025)
  • Incident 1009 - AI-Assisted Report by Tromsø Officials Cited Non-Existent Sources in School Closure Proposal (2/13/2025)
  • Incident 1010 - GenNomis AI Database Reportedly Exposes Nearly 100,000 Deepfake and Nudify Images in Public Breach (3/31/2025)
  • Incident 1011 - Deepfake of Bermuda Premier David Burt Promotes Investment Scam Using Royal Gazette Branding (4/5/2025)
  • Incident 1012 - AI Voice Clone of Texas Woman Used in Distress Scam Targeting Brother in Port Neches (4/3/2025)
  • Incident 1013 - Essex Man Sentenced to Five Years in Prison for Having Generated and Shared Deepfake Pornography of at Least 20 Women and a Minor (4/4/2025)
  • Incident 1014 - Students in Maltese Schools Reportedly Being Targeted by Deepfake Nudes (10/1/2024)
  • Incident 1015 - Reported Darknet Launch of Xanthorox AI Introduces Autonomous Cyberattack Platform (4/7/2025)
  • Incident 1016 - Jailbroken Lovable AI Allegedly Used to Generate and Host Phishing Pages, Steal Credentials, and Bypass Security (4/9/2025)
  • Incident 1017 - Alleged Deepfake Investment Scam in Spain Defrauds 208 Victims of €19 million ($20.9 million) (4/7/2025)
  • Incident 1018 - OpenAI’s 4o Model Allegedly Used to Generate Fake Receipts and Prescriptions (3/31/2025)
  • Incident 1019 - Alleged AI-Generated Clone of Exante Brokerage Used to Defraud U.S. Investor via JPMorgan Account (4/10/2025)
  • Incident 1020 - Reportedly Unsafe Deployment of Llama.cpp Reveals Interactive AI-Generated CSAM Roleplay Prompts (4/11/2025)
  • Incident 1021 - Deepfake Job Applicant Allegedly Used AI Tools to Apply for Remote Role at U.S. Security Startup (4/8/2025)
  • Incident 1022 - Deepfake Video Reportedly Depicts Zambian President Hakainde Hichilema Withdrawing from 2026 Election (10/12/2023)
  • Incident 1023 - Purported AI-Cloned Voice Depicts Kenyan President William Ruto Speaking French in Viral Video (2/19/2024)
  • Incident 1024 - Kenyan Journalist Jeff Koinange Depicted Endorsing Gambling App in Purported AI-Generated Deepfake (4/24/2024)
  • Incident 1025 - Alleged Deepfake Video Depicts Former President of Kenya Uhuru Kenyatta Announcing 2027 Presidential Bid (3/18/2024)
  • Incident 1026 - Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session (4/12/2025)
  • Incident 1027 - Norwegian Supreme Court Receives Legal Filing with Fabricated Citations Allegedly Generated by AI Tool (4/10/2025)
  • Incident 1028 - OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol (2/7/2025)
  • Incident 1029 - Purported AI-Generated Audio Was Reportedly Used to Claim Recep Tayyip Erdoğan Expressed Support for Imran Khan in Pakistani Parliament (2/15/2025)
  • Incident 1030 - Aspiring Artist Cherelle Kozak Reportedly Targeted by AI-Powered Impersonation of Rapper Fat Joe (1/5/2025)
  • Incident 1031 - Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention (4/19/2025)
  • Incident 1032 - Reported Doctored Video of Nigerian President Bola Tinubu Claims Naira Will Be Replaced by U.S. Dollar (8/29/2023)
  • Incident 1033 - Video Allegedly Altered by AI Reportedly Spreads Claim of Nigerian Doctor's Hypertension Cure (11/13/2023)
  • Incident 1034 - Reported AI-Manipulated Video Depicts Donald Trump Endorsing Peter Obi and Criticizing Bola Tinubu (8/4/2024)
  • Incident 1035 - Purported AI-Generated Video Depicts Trump Urging Release of Nigerian Separatist Leader Nnamdi Kanu (11/20/2024)
  • Incident 1036 - Purported AI-Manipulated News Clip Fabricates Explosion and Doctor's Murder Plot for Scam (1/20/2025)
  • Incident 1037 - Microsoft Reportedly Blocks 1.6 Million Bot Signup Attempts Per Hour Amid Global AI-Driven Fraud Surge (4/16/2025)
  • Incident 1038 - Scammers Reportedly Use AI Tools to Impersonate Students and Obtain Federal Aid (4/22/2025)
  • Incident 1039 - Anysphere AI Support Bot for Cursor Reportedly Invents Login Policy, Leading to Subscription Cancellations (4/19/2025)
  • Incident 1040 - Meta User-Created AI Companions Allegedly Implicated in Facilitating Sexually Themed Conversations Involving Underage Personas (4/26/2025)
  • Incident 1041 - Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech (1/21/2025)
  • Incident 1042 - Fact-Checking Finds Reportedly AI-Generated Video Misattributed Hypertension Cure Endorsements to Taiwo Ajai-Lycett and Chinonso Egemba (3/25/2025)
  • Incident 1043 - Reddit Moderators Report Unauthorized AI Study Involving Fabricated Identities by Purported University of Zurich Researchers (4/26/2025)
  • Incident 1044 - Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination (4/15/2025)
  • Incident 1045 - Mother in Louisville, Kentucky Describes Phone Scam Involving Purported AI-Generated Voice of Her Daughter (4/29/2025)
  • Incident 1046 - Brazil's Social Security AI Tool Is Allegedly Rejecting Complex Claims Improperly (4/24/2025)
  • Incident 1047 - Australian Analyst Allegedly Targeted by Scam Using Purportedly Sophisticated AI-Generated Corporate Materials (4/27/2025)
  • Incident 1048 - Tennessee Meteorologist's Likeness Reportedly Used in Sextortion Campaign Involving Purported AI-Generated Content (1/10/2025)
  • Incident 1049 - AI Tools Reportedly Used to Fabricate Image of 5,000-Naira Nigerian Banknote Featuring President Bola Tinubu (1/26/2025)
  • Incident 1050 - Reportedly AI-Generated Image Circulates Amid Reports of Tanzania Revenue Authority Job Interviews (3/31/2025)
  • Incident 1051 - Video Reportedly Created with AI Appears to Show Trump Backing Biafra Secession (2/18/2025)
  • Incident 1052 - Purported AI-Generated Video Reportedly Depicts Trump Criticizing Former Kenyan Deputy President Rigathi Gachagua (1/25/2025)
  • Incident 1053 - Mumbai Businessman Reportedly Defrauded via Purported AI-Cloned Voice Impersonating Son (3/30/2024)
  • Incident 1054 - Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development (4/23/2025)
  • Incident 1055 - FBI Reports AI Use by Threat Actors in Broader Cyber Context Including Infrastructure Intrusions (4/29/2025)
  • Incident 1056 - Purported AI-Generated Videos Impersonating President of Malta Myriam Spiteri Debono Circulate on Social Media in Alleged Crypto Scam Campaigns (2/12/2025)
  • Incident 1057 - "Jewish Baby Strollers" Provided Anti-Semitic Google Images, Allegedly Resulting from Hate Speech Campaign (8/15/2017)
  • Incident 1058 - Purported AI-Generated Deepfake Images of Katy Perry at 2024 Met Gala Circulate Widely (5/6/2024)
  • Incident 1059 - Second Consecutive Year of Alleged AI-Generated Images Depicting Katy Perry at Met Gala Circulating Online (5/6/2025)
  • Incident 1060 - Institute for Strategic Dialogue Reports Russian-Aligned Operation Overload Using Purported AI-Generated Impersonations Across January to March 2025 (5/6/2025)
  • Incident 1061 - Reportedly Viral USAID Disinformation Video Linked to Russian-Aligned Campaign Known as Matryoshka (2/7/2025)
  • Incident 1062 - Brazilian Authorities Link Alleged AI-Generated Marcos Mion Videos to Purported Fake Restaurant Promotions in Brazil (5/7/2025)
  • Incident 1063 - Purported AI-Manipulated Videos of Cypriot Officials Circulated in Alleged Investment Fraud (5/12/2025)
  • Incident 1064 - Alleged AI-Generated Scam Uses Bank of Cyprus Branding to Solicit Investments (11/14/2024)
  • Incident 1065 - Scammers Allegedly Use AI-Generated Avatars to Impersonate Friends in Houston, Texas and Solicit Money (9/23/2024)
  • Incident 1066 - Hong Kong Syndicate Allegedly Used AI-Generated Facial Composites to Open Bank Accounts (4/7/2025)
  • Incident 1067 - Alleged Use of Purported AI-Generated Identities to Defraud FTX Claims Buyers of $5.6M (2/18/2025)
  • Incident 1068 - AI-Powered Presentation Tool Gamma Implicated in Multi-Stage Phishing Campaign (4/15/2025)
  • Incident 1069 - Purported Graphite Spyware Linked to Paragon Solutions Allegedly Deployed Against Journalists and Civil Society Workers (1/31/2025)
  • Incident 1070 - Serviceaide AI Platform Implicated in Health Data Exposure Affecting 483,000 Catholic Health Patients (5/9/2025)
  • Incident 1071 - Student Reportedly Files Complaint Over Professor's Undisclosed Use of Generative AI at Northeastern University (5/14/2025)
  • Incident 1072 - Grok Chatbot Reportedly Inserted Content About South Africa and 'White Genocide' in Unrelated User Queries (5/14/2025)
  • Incident 1073 - $31,000 Sanction in Lacey v. State Farm Tied to Purportedly Undisclosed Use of LLMs and Erroneous Citations (4/15/2025)
  • Incident 1074 - Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel (5/15/2025)
  • Incident 1075 - New Orleans Police Reportedly Used Real-Time Facial Recognition Alerts Supplied by Project NOLA Despite Local Ordinance (5/19/2025)
  • Incident 1076 - Purported Unauthorized Deepfakes of Norman Swan and Others Circulated in Online Supplement Campaigns (5/21/2025)
  • Incident 1077 - FBI Reports Ongoing Vishing and Smishing Campaign Allegedly Targeting Government Officials Using Purportedly AI-Generated Voices (5/15/2025)
  • Incident 1078 - Purported AI-Generated Images Reportedly Used in Sextortion Scheme Targeting Glasgow, Kentucky Teen Who Later Died by Suicide (2/27/2025)
  • Incident 1079 - Purported AI-Manipulated Videos Depict Mauricio Macri Endorsing Manuel Adorni and Silvia Lospennato Withdrawing Before Buenos Aires Vote (5/17/2025)
  • Incident 1080 - Noodlophile Stealer Reportedly Distributed Through Allegedly Fraudulent AI Content Platforms (5/8/2025)
  • Incident 1081 - Voice Actor Alleges Unconsented Use of AI-Generated Voice on ScotRail Trains (5/27/2025)
  • Incident 1082 - California Homeowner Reportedly Loses Insurance After Purported Aerial Imagery-Based Roof Assessment (4/6/2024)
  • Incident 1083 - Texas Homeowner Reportedly Spent $3,000 to Contest AI-Flagged Warning of Insurance Nonrenewal (5/13/2025)
  • Incident 1084 - Federal 'Make America Healthy Again' Report Released with Multiple Reportedly Erroneous and Unverifiable Citations (5/22/2025)
  • Incident 1085 - Phone of White House Chief of Staff Susie Wiles Allegedly Breached in Reported AI Voice Impersonation Incident (5/30/2025)
  • Incident 1086 - Reported AI-Generated Video Purportedly Depicting Keanu Reeves Debating Elon Musk Circulates as Misinformation on Social Media (4/8/2025)
  • Incident 1087 - Campaign Featuring Purported AI-Generated Audio Attributed to Omar al-Bashir Spreads During Sudanese Conflict (8/20/2023)
  • Incident 1088 - Purported AI-Generated Audio Disinformation Reportedly Attributed to U.S. Ambassador John Godfrey Circulates in Sudan (4/15/2023)
  • Incident 1089 - Purported AI-Generated Content Circulates Widely in Sudan Amid Civil Conflict and Information Vacuum (10/23/2024)
  • Incident 1090 - Reported AI-Generated Clickbait Targets Adolescents in Rockingham County, North Carolina (5/30/2025)

👇 Diving Deeper

  • Check out the Table View and List View for different ways to see and sort all incidents.
  • Explore clusters of similar incidents in Spatial Visualization.
  • Learn about alleged developers, deployers, and harmed parties on the Entities Page.

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.
