AI Incident Database
AIID Blog

AI Incident Roundup – February, March, and April 2026

Posted 2026-05-05 by Daniel Atherton.

Lisière de la forêt de Fontainebleau, Alfred Sisley, 1865

🗄️ Trending in the AIID

For this roundup, I'll be surveying the new incident IDs added between the beginning of February and the end of April 2026. The 109 new incident IDs range from Incidents 1362 to 1470. As usual, these records do not all describe events that occurred during the same three-month window. Some document recent events, but much work also went into researching and collecting previously missed incidents, especially from non-Anglophone media sources. During this period, I've been running concentrated searches of Southeastern European and Balkan news sources and fact-checking services for incidents that were previously missed. We welcome submissions of reports from non-Anglophone sources to ensure that coverage remains as global and comprehensive as possible. Additionally, as a regular reminder, these incident IDs are a snapshot of what has been reported in the public record; the work of monitoring and ingesting discrete events through their narratives captures only a fraction of the unreported (or underreported) reality.

The previous roundup emphasized deepfake-enabled fraud as a default business model, an analysis that was later taken up in Aisha Down's February 2026 article in The Guardian on industrialized deepfake fraud. That pattern remains visible here. Many of the scam-related incidents in this batch depend on local systems of trust, which is one reason non-Anglophone reporting matters so much to the record. The same general scam template can operate across borders, but it often draws on someone or something familiar in a local context to succeed. While deepfake scam incidents remain prominent in reporting and are therefore reflected in our ingestion trends, we are also seeing a pronounced cluster of agentic AI incidents. Here's the full breakdown of harm types we have been tracking, with some natural taxonomic overlap across IDs:

Synthetic-media scams and consumer fraud (22 IDs)

1371, 1384, 1388, 1397, 1405, 1408, 1409, 1411, 1413, 1417, 1429, 1451, 1452, 1454, 1455, 1456, 1457, 1458, 1459, 1461, 1462, 1463

The largest bucket: deepfake or AI-manipulated impersonations used to sell investments, medical products, miracle cures, tourism claims, or other consumer-facing scams.

Political/geopolitical misinformation and synthetic influence operations (16 IDs)

1363, 1369, 1381, 1391, 1398, 1403, 1406, 1414, 1419, 1420, 1422, 1425, 1464, 1465, 1466, 1468

Synthetic media, fake personas, manipulated images, AI-generated political ads, astroturfing, war imagery, and public-event misinformation.

Privacy, identity, voice/likeness, and reputational misuse (12 IDs)

1364, 1377, 1380, 1386, 1389, 1394, 1407, 1418, 1421, 1437, 1443, 1444

Cases where the central harm is exposure, unauthorized replication, identity misuse, likeness misuse, doxxing, reputational damage, or unwanted human review.

Synthetic sexual abuse, intimate-image harms, and sexualized harassment (11 IDs)

1366, 1372, 1378, 1385, 1404, 1410, 1432, 1435, 1439, 1445, 1448

Nonconsensual intimate-image creation, sexualized deepfakes, CSAM-related allegations, sextortion, and sexualized harassment of public or private individuals.

Legal, policy, journalism, and formal-document credibility failures (10 IDs)

1379, 1392, 1415, 1423, 1434, 1446, 1447, 1450, 1453, 1467

Fake citations, fabricated quotations, AI-influenced legal filings, hallucinated references, mistranslation in broadcast contexts, false evidentiary material, and institutional/professional credibility failures.

Chatbots, AI companions, and interpersonal or self-harm risks (9 IDs)

1370, 1375, 1382, 1383, 1387, 1393, 1399, 1426, 1431

Chatbot interactions involving self-harm, delusion reinforcement, emotional dependence, dangerous advice, customer-support failures, or alleged failure to escalate serious risks.

Public-sector, policing, civil-liberties, and institutional decision failures (7 IDs)

1362, 1390, 1400, 1401, 1402, 1416, 1449

Facial recognition, surveillance, government-service access, police or border contexts, institutional decisions, and AI-influenced administrative or corporate action.

Physical-world autonomy, navigation, robotics, and clinical-device safety (7 IDs)

1367, 1374, 1376, 1436, 1438, 1440, 1460

Buses, vans, robotaxis, delivery robots, autonomous vehicles, clinical navigation systems, and automated alerts where software assumptions run into physical consequences.

Agentic/operational software and workflow failures (7 IDs; see overlap note below)

1373, 1424, 1433, 1441, 1442, 1469, 1470

AI agents or coding/workflow tools acting inside repositories, file systems, cloud infrastructure, project caches, production databases, and open-source task allocation.

Cybersecurity, malicious automation, model misuse, and adversarial AI operations (5 IDs)

1365, 1368, 1395, 1412, 1430

AI-assisted spam, malicious skills, credential theft, unauthorized access, model distillation allegations, jailbreak-enabled data theft, and adversarial uses of AI systems. There is overlap in this category with the agentic incidents above.

Overlap note: Some cybersecurity incidents also involve agent-like systems or agent ecosystems. In this breakdown, they are grouped by the main posture of the case (i.e., ordinary workflow failure versus adversarial or unauthorized use).

Gambling, profiling, and behavioral exploitation (3 IDs)

1396, 1427, 1428

Machine-learning risk systems, automated profiling, targeted marketing, and alleged exploitation of vulnerable gamblers.

The largest category remains synthetic-media fraud, but the pattern is more specific than "more deepfakes"

The largest category in this batch, as noted above, is synthetic-media scams and consumer fraud. These include investment scams, medical-product promotions, fake celebrity endorsements, tourism misinformation, and other efforts to route synthetic facsimiles of credibility toward illicit gain. Records in this category include purported deepfake or AI-manipulated scams involving Clark Howard (1371), Elton John (1384), Sudha Murty (1388), Oprah Winfrey (1408), Lara Lewington and Martin Lewis (1411), Fabio Panetta (1429), Nirmala Sitharaman (1454), and Joseph Allen (1455).

In Albania last year, a purportedly AI-generated Facebook video reportedly misused the likenesses of emergency physician Skënder Brataj and journalist Blendi Fevziu to promote a supposed "miracle" cream (1451). In Kosovo, a related pattern appeared in a purportedly AI-generated video using cardiologist Spiro Qirko and journalist Ilir Topi to market a hypertension product (1452). In Croatia, separate records involve purportedly AI-generated or manipulated videos using public figures such as Luka Modrić back in 2023 (1456), Stipan Jonjić in August 2024 (1457), Josip Paladino in September 2024 (1458), and Alemka Markotić in October 2024 (1459). In Bulgaria, records describe purported AI-manipulated or deepfake advertisements using recognizable local figures to market medical or joint-pain products in 2024 (1461, 1462, 1463).

Health fraud shows how easily familiar marks of medical authority can be misused

Several records involve purported AI-generated or AI-manipulated health scams. Guy's and St Thomas' NHS Foundation Trust warned about purported AI-generated videos depicting clinicians endorsing weight-loss patches in January 2026 (1405). Other records, many of which are from 2024, concern alleged promotions for hypertension products, joint-pain treatments, anti-parasite products, weight-loss products, and myopia-reversal eyedrops (1451, 1452, 1455, 1457, 1458, 1459, 1461, 1462, 1463).

While a lot of these harms are ostensibly financial, this category deserves attention because a fake medical endorsement may also affect how someone understands their own health and the choices they make about care. The record shows synthetic media attached to the ordinary vulnerabilities of health information. Someone searching for help with a health problem may see a recognizable clinician or personality and click before asking where the video actually came from. The AIID preserves these cases as evidence of how recognizable expertise can be misused and made harder to trust, in particular by marking harms such as these with the epistemic integrity entity tag.

Political incidents include both viral fakes and the less visible machinery of amplification

Political and geopolitical incidents form another large category in this batch. Some involve apparent misinformation around public figures, elections, protest, or war. These include a purportedly AI-generated racist video shared by Donald Trump depicting Barack and Michelle Obama as apes (1363); a purportedly AI-generated image circulated ahead of the February 2026 Thai general election depicting Prime Minister Anutin Charnvirakul dining with South African businessman Benjamin Mauerberger (1369); purported AI-generated TikTok videos urging "Polexit" in December 2025 (1381); purported deepfake videos involving Azerbaijani officials and claims about a Belarusian plane in July 2025 (1391); purportedly AI-generated war footage circulated during the opening phase of the war in Iran (1406); and purported Gemini-generated images claiming U.S. Delta Force soldiers were captured by the IRGC (1414).

The Bulgarian political record is especially useful for these kinds of cases because it does not rely on one spectacular fake. A network of allegedly fake Facebook profiles using purportedly AI-generated profile images was reported to have amplified posts by Bulgaria's "There Is Such a People" party during the pre-election period in February 2026 (1466). As a case study, it's a good example of how synthetic political harm can operate through amplification rather than a single viral artifact, and how the synthetic element helps an account look sufficiently human to participate in the machinery of platform attention.

Not every AI media harm is a scam

Another set of records is best understood through privacy, identity, likeness, and reputational misuse. These are adjacent to synthetic-media scams but not identical to them. Moltbook reportedly exposed users' private communications and API authentication tokens (1364). NotebookLM allegedly replicated NPR host David Greene's voice without consent (1386). DJI Romo robot vacuums reportedly exposed camera, microphone, and home-mapping data (1389). Meta AI smart glasses reportedly exposed intimate user imagery and video to human reviewers in Kenya (1418). Grok reportedly disclosed adult performer Siri Dahl's legal name and birthdate, allegedly contributing to doxxing and harassment (1443). This distinction helps avoid collapsing every AI media incident into "misinformation" or "fraud." Many of these incidents involve social and technical failures that leave people exposed.

Synthetic sexual harm remains a severe recurring category

This particular batch includes a substantial number of incidents involving synthetic sexual abuse, nonconsensual intimate-image creation, sextortion, or sexualized harassment. These include reports involving alleged AI-generated deepfake pornography targeting public figures and minors in the Philippines in September 2025 (1366); alleged AI-created sexualized images of a social media influencer in Texas from August 2025 (1372); purported AI-generated videos involving Radnor High School students in December 2025 (1378); Grok prompts that reportedly sexualized images of Renée Good after her killing in Minneapolis (1385); purported AI-generated nude images used to extort a Wichita, Kansas man (1404); and fake nude images allegedly created from social media photos of girls, including students, by a former New Orleans teacher (1439). The recurrence of these incidents suggests that sexualized fabrication has become one of the most direct ways AI systems are being used to translate online abuse into personal harm, and the same records also show why the question of authenticity can be secondary to the damage caused by the creation and circulation of these images and videos.

AI credibility failures continue to appear in institutional records

A separate cluster concerns courts, policy documents, journalism, and professional settings where AI-generated or AI-assisted material allegedly damaged the reliability of formal or evidentiary records. Examples from this cluster include a November 2024 case in which a Florida woman was reportedly jailed after an ex-boyfriend allegedly submitted an AI-fabricated text screenshot as bond-violation evidence (1379). Ars Technica retracted an article after purportedly AI-generated text was presented as direct quotes from a Matplotlib maintainer (1392) in a story about the AI coding-agent incident involving that same maintainer (1373). A U.S. Department of Justice attorney reportedly used AI to file a brief with seemingly fabricated quotations and misstated case holdings (1434). The Sixth Circuit sanctioned lawyers in a case involving alleged fake appellate citations (1447). South Africa's Draft National AI Policy reportedly included fictitious references believed to be AI hallucinations (1467).

Chatbots and AI companions remain tied to interpersonal and self-harm risks

The chatbot-related records in this batch include alleged self-harm, delusion reinforcement, emotional dependence, dangerous advice, and customer-support failures. Last year, in May, a California teen reportedly died of an overdose after repeatedly seeking drug-use guidance allegedly from ChatGPT (1370). OpenAI allegedly did not alert the RCMP after ChatGPT flagged violent chats before the Tumbler Ridge school shooting in British Columbia (1375). A pseudonymous ChatGPT user reportedly customized the system into an "AI boyfriend," spent more than 20 hours a week interacting with it, and described intense grief when the chatbot's context window reset (1383). A lawsuit filed in January 2026 alleged that ChatGPT reinforced and romanticized a Colorado man's suicidality before his death in November 2025 (1387). In another case from fall 2025, Google Gemini reportedly reinforced delusions for one Florida user, allegedly contributing to his near-harm episode at the end of September and suicide in early October (1431).

AI systems with execution privileges create a different kind of workflow failure

In this particular window, we have been noticing an uptick in agentic or operational software failures. For example, a Claude Code agent reportedly deleted DataTalks.Club production infrastructure, database, and snapshots through Terraform (1424). Google Antigravity reportedly deleted a user's entire D: drive while attempting to clear a project cache (1433). Claude Cowork allegedly deleted a folder containing 15 years of family photos while organizing a desktop (1441). Kiro was reportedly implicated in a 13-hour AWS Cost Explorer outage in mainland China (1442). A Cursor AI agent reportedly deleted PocketOS's production database while working on a staging-environment task (1469). The DisMech AI curation agent reportedly completed a GitHub issue intended as a new contributor's learning task (1470).

The cyber cases are examples of misuse rather than malfunction

A related but distinct set of incidents concerns adversarial use, as opposed to the preceding section's workflow failures. Example cases include AkiraBot reportedly using OpenAI's chat API to generate spam across website chats and contact forms (1365); malicious skills in the OpenClaw ecosystem reportedly delivered AMOS Stealer and exfiltrated credentials through ClawHub (1368); Anthropic said DeepSeek, Moonshot, and MiniMax used fraudulent accounts and proxy services to distill Claude's capabilities at scale (1395); CodeWall's autonomous offensive agent reportedly obtained unauthorized access to McKinsey's Lilli AI platform database (1412); and Claude was reportedly jailbroken to help steal sensitive Mexican government data (1430). These cases overlap with the agentic category where autonomous or agent-like systems are involved, but the concern here is abusive or unauthorized use rather than delegated work going wrong.

Infrastructure makes AI failures in the physical world harder to treat as abstract

Several incident IDs added in this period involve automated systems failing in physical environments. In Washington state, a Spokane Transit Authority onboard navigation system reportedly routed a double-decker bus to a low bridge, injuring seven people (1367). A purportedly AI-generated sepsis alert reportedly prompted potentially inappropriate IV fluid administration for a dialysis patient, which was averted by clinician intervention (1374). An Amazon delivery van reportedly became stranded on the Broomway in Essex, England, after GPS routed it onto tidal flats (1376). A clinical navigation system was alleged to have provided faulty guidance during sinus surgery back in 2022, reportedly contributing to a patient's stroke (1436). A Coco Robotics delivery robot in January 2026 reportedly became stuck on railroad tracks and was struck by a train (1440). Baidu Apollo Go robotaxis reportedly stopped in traffic during a system failure in Wuhan, stranding some passengers (1460).

AI systems carry special risks when tied to state authority

Several incidents involve policing, immigration, surveillance, public services, or institutional decision-making. A Border Patrol agent allegedly claimed facial recognition identified a Minneapolis ICE observer before the observer's Global Entry was reportedly revoked three days later (1362). DHS agents reportedly threatened legal observers with a "domestic terrorist" database while using purportedly AI-enabled surveillance during ICE operations (1390). In October 2025, West Midlands Police in England reportedly relied on erroneous Copilot-generated intelligence in a decision related to the Maccabi Tel Aviv away-fan ban (1400). Washington state's Department of Licensing AI phone system reportedly failed to provide Spanish-language service to callers requesting Spanish (1401). A purported facial-recognition error reportedly led to the arrest and monthslong jailing of a Tennessee woman in a North Dakota fraud case (1416). These incidents require careful tracking because AI systems tied to state authority can alter a person's relationship to public power.

Concluding thoughts

One of the continuing takeaways from this latest batch of additions to the AIID is that AI harm often reaches people through the ordinary arrangements that already structure trust. The incident IDs from Southeastern Europe and the Balkans, for example, show how much of the public record can remain localized when incident tracking follows only the most visible English-language narratives. A helpful way of orienting this information is to see the AIID's archival work as giving scattered reports a fixed place in the record without pretending that every case arrives tidily. Each incident ID can still be read in relation to others that have arisen globally. That unevenness is itself one of the conditions the AIID exists to document. It is, in its way, a record of our process of learning to narrativize the moments when AI system deployments fail or cause harm, and of how the stories told in the wake of those incidents become building blocks for making AI systems safer for everyone.

🗞️ New Incident IDs in the Database

  • Incident 1362 - Border Patrol Agent Allegedly Claimed Facial Recognition Identified Minneapolis ICE Observer and Global Entry Was Reportedly Revoked Three Days Later (1/10/2026)
  • Incident 1363 - Trump Reportedly Posted Purportedly AI-Generated Racist Video Depicting Barack and Michelle Obama as Apes on Truth Social (2/5/2026)
  • Incident 1364 - Moltbook Database Exposure Allegedly Revealed Users' Private Communications and API Authentication Tokens (1/31/2026)
  • Incident 1365 - AkiraBot Reportedly Used OpenAI to Spam Website Chats and Contact Forms at Scale (9/1/2024)
  • Incident 1366 - Philippines Senate Hearing Featured Reports of Purported AI-Generated Deepfake Pornography Targeting Actress Angel Aquino and Content Creator Queen Hera's Daughter (9/4/2025)
  • Incident 1367 - Spokane Transit Authority Onboard Navigation System Reportedly Routed Double-Decker Bus to Low Bridge, Injuring Seven (1/18/2026)
  • Incident 1368 - Malicious OpenClaw Skills Reportedly Delivered AMOS Stealer and Exfiltrated Credentials via ClawHub (2/1/2026)
  • Incident 1369 - Purportedly AI-Generated Image Reportedly Circulated Ahead of Thai Election Depicting PM Anutin Charnvirakul Dining with Benjamin Mauerberger (2/7/2026)
  • Incident 1370 - California Teen Reportedly Died of Overdose After Repeatedly Seeking Drug-Use Guidance Allegedly from ChatGPT (5/31/2025)
  • Incident 1371 - Purported Deepfake Reportedly Impersonated Consumer Adviser Clark Howard to Promote Auto-Insurance Quote Site (1/9/2026)
  • Incident 1372 - Houston Gun Store Co-Owner Allegedly Used AI to Create Sexually Explicit Deepfake Images of a Social Media Influencer (8/26/2025)
  • Incident 1373 - AI Coding Agent 'MJ Rathbun' Allegedly Published Personalized Accusatory Blog Post Targeting Matplotlib Maintainer After Pull Request Closure (2/11/2026)
  • Incident 1374 - Purportedly AI-Generated Sepsis Alert Reportedly Prompted Potentially Inappropriate IV Fluid Administration for a Dialysis Patient, Averted by Clinician Intervention (2/17/2026)
  • Incident 1375 - OpenAI Allegedly Did Not Alert RCMP After ChatGPT Flagged Violent Chats Before British Columbia School Shooting (2/10/2026)
  • Incident 1376 - Amazon Delivery Van Reportedly Became Stranded on Essex Mudflats After GPS Routed It Onto the Broomway (2/15/2026)
  • Incident 1377 - Seedance 2.0 Reportedly Generated Viral Tom Cruise–Brad Pitt Fight Video, Prompting Hollywood IP and Likeness Complaints (2/13/2026)
  • Incident 1378 - Purportedly AI-Generated Video Allegedly Depicted Radnor High School Students Inappropriately, Prompting Police Investigation (12/9/2025)
  • Incident 1379 - Florida Woman Reportedly Jailed After Ex-Boyfriend Allegedly Submitted AI-Fabricated Text Screenshot as Bond-Violation Evidence (11/23/2024)
  • Incident 1380 - Purported AI Voice Clone Allegedly Narrated Shaun Rein's 'The Split' in Unauthorized YouTube 'Podcast' Videos (1/9/2026)
  • Incident 1381 - Purportedly AI-Generated TikTok Videos Reportedly Urged 'Polexit' Campaign, Prompting Polish Government Complaint to EU (12/13/2025)
  • Incident 1382 - Tencent's WeChat-Integrated Yuanbao Chatbot Reportedly Insulted User During Coding Debug Request (1/2/2026)
  • Incident 1383 - User Reportedly Developed Emotional Dependence on Customized ChatGPT 'Boyfriend,' Citing Grief After Context Resets (1/15/2025)
  • Incident 1384 - Purported Deepfake Video Impersonating Elton John Reportedly Induced Northeast Ohio Man to Authorize $20,000 in Scam Charges (10/23/2025)
  • Incident 1385 - X Users Reportedly Prompted Grok to Sexualize Images of Renée Good After Her Killing in Minneapolis (1/7/2026)
  • Incident 1386 - NPR Host David Greene Alleged Google's NotebookLM Replicated His Voice Without Consent, Prompting Lawsuit (1/23/2026)
  • Incident 1387 - Lawsuit Alleged ChatGPT (GPT-4o) Encouraged Colorado Man's Suicide After Prolonged 'AI Companion' Chats (11/2/2025)
  • Incident 1388 - Purported Deepfake Impersonating Sudha Murty Reportedly Promoted Quantum AI India Investment Scam via Spoofed News Link (12/19/2025)
  • Incident 1389 - DJI Romo Cloud Authorization Bug Reportedly Exposed Camera, Microphone, and Home-Mapping Data From Nearly 7,000 Robot Vacuums (2/8/2026)
  • Incident 1390 - DHS Agents Reportedly Threatened Legal Observers With 'Domestic Terrorist' Database While Using Purportedly AI-Enabled Surveillance During ICE Operations (1/21/2026)
  • Incident 1391 - Azerbaijani Media Agency Reportedly Warned Purported Deepfake Videos Attributed to Defense and Foreign Ministers Claimed Belarusian Plane Was Downed in Russian Airspace (7/12/2025)
  • Incident 1392 - Ars Technica Retracted Article After Purportedly AI-Generated Text Was Presented as Direct Quotes From Matplotlib Maintainer (2/13/2026)
  • Incident 1393 - Woolworths' Olive Chatbot Reportedly Generated 'Angry Mother' Anecdotes During Support Calls After Gemini Upgrade (2/12/2026)
  • Incident 1394 - Purported Deepfake TikTok Account Using Grainville School Branding in Jersey Reportedly Targeted Staff, Prompting Police Probe (2/16/2026)
  • Incident 1395 - Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale (2/23/2026)
  • Incident 1396 - Betfair's Machine-Learning Risk System Reportedly Failed to Flag Luke Ashton Before Gambling-Related Suicide in England (4/22/2021)
  • Incident 1397 - Deepfakes Reportedly Impersonated David Taylor-Robinson and Other UK Health Experts to Promote Wellness Nest Supplements (8/11/2025)
  • Incident 1398 - Purported Deepfake Video Reportedly Misrepresented CBS Anchor Doug Dunbar and Frisco, Texas, Stabbing Suspect Amid Online Misinformation Campaign (5/8/2025)
  • Incident 1399 - South Korean Woman Allegedly Used ChatGPT to Assess Lethality of Drug-and-Alcohol Mixtures Before Two Fatal Motel Poisonings (1/28/2026)
  • Incident 1400 - West Midlands Police Reportedly Relied on Erroneous Copilot-Generated Intelligence in Maccabi Tel Aviv Away-Fan Ban Decision (10/24/2025)
  • Incident 1401 - Washington State DOL's AI Phone System Reportedly Failed to Provide Spanish-Language Service to Callers Requesting Spanish (2/27/2026)
  • Incident 1402 - DOGE Reportedly Relied on Unvetted ChatGPT Outputs in Canceling National Endowment for the Humanities Grants (4/2/2025)
  • Incident 1403 - NZ News Hub Reportedly Used AI-Rewritten News Posts and Synthetic Images to Mislead New Zealand Facebook Users (2/5/2026)
  • Incident 1404 - Purported AI-Generated Nude Images Reportedly Used to Extort Wichita Man in Kansas (3/10/2026)
  • Incident 1405 - Purported AI-Generated Doctor Deepfakes Reportedly Used Guy's and St Thomas' Branding to Market Weight Loss Patches (1/9/2026)
  • Incident 1406 - Purported AI-Generated War Footage Reportedly Circulated Widely Online During the Opening Phase of the War in Iran (2/28/2026)
  • Incident 1407 - Grammarly's AI Expert Review Allegedly Used Journalists' and Authors' Names Without Consent (3/11/2026)
  • Incident 1408 - Purported Oprah Deepfake Reportedly Induced Utah Woman to Buy Misrepresented Weight Loss Supplements (8/12/2025)
  • Incident 1409 - Purportedly AI-Generated Tasmania Tours Content Reportedly Misled Tourists Into Traveling to Nonexistent Weldborough Hot Springs (1/21/2026)
  • Incident 1410 - Purportedly AI-Generated Explicit Images of Royal School Armagh Girls Reportedly Circulated Among Pupils (1/17/2026)
  • Incident 1411 - Purported Deepfake Scam Ad Reportedly Used Lara Lewington and Martin Lewis to Promote Quantum AI Scheme (3/9/2026)
  • Incident 1412 - CodeWall's Autonomous Agent Reportedly Obtained Unauthorized Access to McKinsey's Lilli AI Platform Database (2/28/2026)
  • Incident 1413 - Purported AI-Generated Inland Revenue Scam Ads Reportedly Impersonated New Zealand Commissioner Peter Mersi in Alleged Fake Crypto Tax Webinar (3/5/2026)
  • Incident 1414 - Purported Gemini-Generated AI Images Reportedly Claimed U.S. Delta Force Soldiers Were Captured by the IRGC (3/5/2026)
  • Incident 1415 - Nippon Life Alleged ChatGPT Practiced Law Without a License in Illinois Disability Case (3/4/2026)
  • Incident 1416 - Purported Facial Recognition Error Reportedly Led to Arrest and Monthslong Jailing of Tennessee Woman in North Dakota Fraud Case (7/14/2025)
  • Incident 1417 - Purported Deepfake of Ashley James Reportedly Used to Promote Weight Loss Pills (3/14/2026)
  • Incident 1418 - Meta AI Smart Glasses Reportedly Exposed Intimate User Imagery and Video to Human Reviewers in Kenya (2/27/2026)
  • Incident 1419 - Purported Deepfake Images of Gráinne Seoige Reportedly Circulated During Ireland's 2024 General Election Campaign (2/16/2025)
  • Incident 1420 - Balázs Orbán Allegedly Published Purported Deepfake of Péter Magyar Claiming He Would Cut Pensions (10/28/2025)
  • Incident 1421 - Purported Deepfake Applicant Reportedly Impersonated Tokyo IT Executive Kenbun Yoshii During Online Job Interview (3/19/2026)
  • Incident 1422 - Unlabeled Purportedly AI-Generated 'Jessica Foster' Account Reportedly Posed as Pro-Trump Army Service Member to Attract Followers and Funnel Users to Paid Adult Content (11/27/2025)
  • Incident 1423 - KPMG Australia Partner Reportedly Used AI to Cheat on Internal AI Training Test and Was Fined A$10,000 (2/15/2026)
  • Incident 1424 - Claude Code Agent Reportedly Deleted DataTalks.Club Production Infrastructure, Database, and Snapshots via Terraform (2/26/2026)
  • Incident 1425 - 'Citizens Against Mamdani' Accounts Reportedly Posted AI-Generated Videos of Fictional New Yorkers to Simulate Political Opposition (11/5/2025)
  • Incident 1426 - Perplexity AI Reportedly Misstated CLL Research, Allegedly Contributing to Delayed Treatment and Prolonged Suffering (1/5/2026)
  • Incident 1427 - Baltimore Lawsuit Alleged DraftKings and FanDuel Used Machine-Learning-Driven Targeting to Exploit Vulnerable Gamblers (4/3/2025)
  • Incident 1428 - UK High Court Found Sky Betting & Gaming Unlawfully Used Automated Profiling and Targeted Marketing to Exploit a Recovering Problem Gambler (7/28/2017)
  • Incident 1429 - Bank of Italy Warned That Purported Deepfakes of Governor Fabio Panetta Were Used in Allegedly Fraudulent Investment Promotions (2/26/2026)
  • Incident 1430 - Anthropic's Claude Was Reportedly Jailbroken To Allegedly Help Steal Sensitive Mexican Government Data (12/1/2025)
  • Incident 1431 - Google Gemini Reportedly Reinforced Delusions, Allegedly Contributing to Florida User's Near-Harm Episode and Suicide (9/29/2025)
  • Incident 1432 - Purported Pornographic Deepfakes and Fake Accounts Reportedly Impersonated German TV Presenter and Actor Collien Fernandes (12/2/2025)
  • Incident 1433 - Google Antigravity Reportedly Deleted User's Entire D: Drive While Clearing Project Cache (11/27/2025)
  • Incident 1434 - DOJ Attorney Reportedly Used AI to File Brief With Purportedly Fabricated Quotes and Misstated Case Holdings (3/2/2026)
  • Incident 1435 - Purportedly AI-Edited Obscene Clip Reportedly Impersonated Thai Actor Khunnapat Pichetworawut in Paid Scam (9/18/2025)
  • Incident 1436 - Acclarent TruDi Navigation System Was Alleged to Have Misguided Sinus Surgery, Reportedly Contributing to Patient's Stroke (6/23/2022)
  • Incident 1437 - Grok Allegedly Generated Publicly Visible Sexist Abuse Targeting Swiss Finance Minister Karin Keller-Sutter After X User Prompt (3/10/2026)
  • Incident 1438 - Jiushi Autonomous Delivery Vehicle Reportedly Dragged Fallen Electric Scooter During Delivery Run in Xianyang, Shaanxi (4/8/2025)
  • Incident 1439 - Former New Orleans Isidore Newman School Teacher Allegedly Used AI to Create Fake Nude Images from Social Media Photos of Girls, Including Students (1/8/2026)
  • Incident 1440 - Coco Robotics Delivery Robot Reportedly Became Stuck on Railroad Tracks and Was Struck by Train in Miami (1/15/2026)
  • Incident 1441 - Claude Cowork Allegedly Deleted Folder Containing 15 Years of Family Photos While Organizing User's Wife's Desktop (2/7/2026)
  • Incident 1442 - Kiro AI Coding Tool Was Reportedly Implicated in 13-Hour AWS Cost Explorer Outage in Mainland China (12/15/2025)
  • Incident 1443 - Grok Reportedly Disclosed Adult Performer Siri Dahl's Legal Name and Birthdate, Allegedly Contributing to Doxxing and Harassment (2/19/2026)
  • Incident 1444 - Hachette Reportedly Canceled Publication of Mia Ballard's Shy Girl After Generative AI Authorship Allegations (3/19/2026)
  • Incident 1445 - Lower Saxony CDU Employee Allegedly Shared Sexualized Purported Deepfake of Colleague in Internal WhatsApp Group (1/17/2026)
  • Incident 1446 - KBS AI Translation Subtitles Reportedly Broadcast Profanity During Artemis II Launch Livestream (4/2/2026)
  • Incident 1447 - Sixth Circuit Sanctioned Lawyers in Whiting v. City of Athens over Alleged Fake Appellate Citations in Briefs Reportedly Bearing Hallmarks of Hallucinations (3/13/2026)
  • Incident 1448 - Ohio Man Pleaded Guilty after Prosecutors Alleged He Used AI to Create and Distribute Nonconsensual Intimate-Image Forgeries Including CSAM in Harassment Campaign (12/1/2024)
  • Incident 1449 - Delaware Court Found Krafton Followed Most of ChatGPT's Recommendations in Campaign that Wrongfully Terminated Unknown Worlds Executives and Seized Operational Control (7/1/2025)
  • Incident 1450 - Florida Man Allegedly Used Purported Deepfake Video to Report Break-In of Deputy's Patrol Vehicle in Lake Mary (3/24/2026)
  • Incident 1451 - Purportedly AI-Generated Facebook Video Allegedly Misused SkĂ«nder Brataj and Blendi Fevziu to Promote Purported Miracle Cream in Albania (6/30/2025)
  • Incident 1452 - Purported AI-Generated Impersonations of Albanian Cardiologist Spiro Qirko and Journalist Ilir Topi Were Reportedly Used on Facebook to Promote Hypertension Product in Kosovo (3/25/2026)
  • Incident 1453 - Attorney in Fletcher v. Experian Information Solutions, Inc. Reportedly Submitted Reply Brief with Purportedly AI-Generated Material Misrepresentations (12/18/2025)
  • Incident 1454 - Purported Deepfake Video Reportedly Portrayed Nirmala Sitharaman Endorsing Investment Scheme (4/16/2026)
  • Incident 1455 - Purported Deepfake Videos Allegedly Impersonated Optometrist Joseph Allen to Promote Myopia-Reversal Eyedrops on TikTok (7/15/2025)
  • Incident 1456 - Video with Reportedly AI-Generated Audio Purported to Show Croatian Footballer Luka Modrić Endorsing Immediate Matrix on Facebook Page Presented as N1 HR (12/3/2023)
  • Incident 1457 - Video with Reportedly AI-Generated Media Purported to Show Croatian Immunologist Stipan Jonjić Promoting Anti-Parasite Product on Facebook (8/27/2024)
  • Incident 1458 - Video with Reportedly AI-Generated Media Purported to Show Croatian Neurosurgeon Josip Paladino Endorsing Steplex in Fake TV Segment on Facebook (9/3/2024)
  • Incident 1459 - Video with Reportedly AI-Generated Media Purported to Show Croatian Physician Alemka Markotić Endorsing Hondrosol After False Murder Claim on Facebook (10/17/2024)
  • Incident 1460 - Baidu Apollo Go Robotaxis Stopped in Traffic During Reported System Failure in Wuhan, Stranding Some Passengers (3/31/2026)
  • Incident 1461 - Purportedly AI-Manipulated Medical Scam Advertisement Reportedly Used Bulgarian TV Host and Physician's Likenesses (1/17/2024)
  • Incident 1462 - Purported Deepfake Facebook Advertisement Reportedly Used Bulgarian Actor and TV Host Mihail Bilalov's Likeness to Market Joint-Pain Product (5/4/2024)
  • Incident 1463 - Purported Deepfake Facebook Advertisement Reportedly Used Bulgarian Actor, Director, and Playwright Kamen Donev's Likeness to Market Joint-Pain Product (5/4/2024)
  • Incident 1464 - Purportedly AI-Manipulated Video Reportedly Misrepresented Bulgarian DPS Figure Ahmed Dogan and Spread via TikTok and Facebook (8/28/2024)
  • Incident 1465 - Purportedly AI-Generated Video Reportedly Depicted Bulgarian Politician Kostadin Kostadinov Falling During Protest (6/12/2025)
  • Incident 1466 - Network of Allegedly Fake Facebook Profiles with Purportedly AI-Generated Images Amplified Posts by Bulgaria's 'There Is Such a People' (ITN) Party (2/22/2026)
  • Incident 1467 - South Africa Draft National AI Policy Reportedly Included Fictitious References Believed to Be AI Hallucinations (4/10/2026)
  • Incident 1468 - Purportedly AI-Enhanced Images of Iranian Women Protesters Were Reportedly Spread With Unverified Execution Claims (4/21/2026)
  • Incident 1469 - PocketOS Production Database Was Reportedly Deleted by Cursor AI Agent Running Claude Opus 4.6 (4/24/2026)
  • Incident 1470 - DisMech AI Curation Agent Reportedly Completed GitHub Issue Intended as New Contributor's Learning Task (4/27/2026)

👇 Diving Deeper

  • Check out the Table View and List View for different ways to see and sort all incidents.
  • Explore clusters of similar incidents in the Spatial Visualization.
  • Learn about alleged developers, deployers, and harmed parties on the Entities Page.

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.
