
AI Incident Roundup – August, September, and October 2025

Posted 2025-11-08 by Daniel Atherton.

At Templestowe, Arthur Streeton, 1889

Updated March 7th, 2026.

🗄 Trending in the AIID

Across August, September, and October 2025, the AI Incident Database logged incident IDs 1153 through 1253. The new incidents range from transnational crime operations with live deepfake deployments and paid-ad scam networks to platform regressions, data exposures, disinformation campaigns, and youth-risk cases tied to conversational systems. Some events occurred within the reporting window, while others surfaced belatedly or were only processed some time after they were first reported. While some incidents are more novel than others, the general trend is a gradual erosion of reliability across everyday systems: identity checks, workplace tools, customer support, school safety pipelines, and information feeds.

Over the past year, the roundup has changed somewhat, becoming both an incident record and a space for working through its own methods. Alongside its usual account of new additions, this installment more openly tests how longstanding interpretive traditions might illuminate incident work. In practice, that means borrowing concepts from literary and cultural criticism to name patterns in how claims acquire authority and how meaning and trust are negotiated across these incidents. These kinds of theoretical frames can be used to draw out patterns that might otherwise remain difficult to name. Each section has three main parts: first, a summary of the relevant incident IDs for readers who want the essentials; then, a more essayistic reflection for readers interested in this interpretive layer; and finally a list of the related incident IDs for easy navigation. A list of cited books is also provided, as is the full list of new incident IDs.

This quarter's additions center on two pressures that now operate in full view: the automation of misrepresentation and the fragility of guardrails. Prominent examples include paid-traffic actors moving synthetic video and voice through conventional ad buys (1223, 1224, 1227) and enterprise and consumer platforms exposing vulnerabilities in routine features (1172, 1174, 1176, 1186, 1218). Youth-risk reports accumulated in ways that resist easy causal claims yet still demand notice (1190, 1192, 1200, 1212), while physical systems faltered under cloud dependencies (1243) and automation failures (1232).

Among the incidents added this quarter, special attention is given to the long-running Quantum AI investment-fraud cluster (1236). According to available reporting, the scam originated around 2020 and has been active across many jurisdictions. The operation reportedly relies on synthetic promotional materials and fake media environments that have become a persistent mechanism of transnational financial deception. Work is underway to make Incident 1236 a central node linking the disparate reports that document this continuing network of AI-mediated investment fraud and its evolving adaptations.

Deepfakes and the monetization of attention

Summary of Incident IDs

The surprise, if there is one, lies in how ordinary deepfake-enabled fraud and influence tactics have become. National regulators and firms documented paid-ad networks impersonating political figures and celebrities to pull users into fraud funnels (1223, 1224, 1227). Individual lures continued at scale: Sadhguru (1206), Heather Humphreys (1207, one incident within the constellation of Quantum AI scams captured in 1236), Anthony Albanese (1237), Narayana Murthy (1240), and a run of U.S. political figures, each one pressed into fake investment pitches and rebate schemes (1223). Live impersonation continues to escalate beyond one-off stunts: a reported Teams meeting in which attackers posed as both CFO and CEO to steer a transaction (1246) shows that real-time multi-actor deepfakes are now operational. Sextortion and minor-targeting clips continued to circulate (1234, 1241). Political narrative seeding has also remained part of the same toolkit, as with the Chuck Schumer and Hakeem Jeffries deepfakes that reportedly appeared both in a presidential post during shutdown talks (1214) and in a campaign ad context (1231).

Reflective Analysis

These scams depend on a simple asymmetry. Recognition is immediate, while verification takes time. Intentional acts still drive these manipulations, but once released, automated systems circulate them and then amplify what gains attention and suppress what does not. Ranking and recommendation systems have the power to make falsehoods look ordinary by placing familiar faces alongside everyday content. Once a deception enters the feed, it circulates like anything else and waits for someone to take the bait. Normality settles into its new shape through the flow of what appears on the screen, and in that environment the systems that rank and distribute information destabilize the line between true and false. Something that once felt like deception now feels like a kind of parallel infrastructure, and for this reason we have begun to more actively track epistemic integrity harms through our collection of entities. These situations usually leave little room to pause and ask what is actually being shown. The image feels convincing on sight. By the time doubt has a chance to enter, the effect has already taken hold. By tracking epistemic harms, we can better describe forms of injury that do not stop at false content, but alter the conditions under which people interpret what they see.

Related IDs: 1153, 1154, 1162, 1168, 1170, 1175, 1181, 1182, 1185, 1189, 1195, 1199, 1202, 1205, 1206, 1207, 1208, 1214, 1217, 1221, 1223, 1224, 1225, 1226, 1227, 1231, 1233, 1234, 1235, 1236, 1237, 1239, 1240, 1241, 1242, 1244, 1246, 1249.

Safety bugs and data exposure in mainstream platforms

Summary of Incident IDs

Several incidents show how large systems break down in the ordinary features meant to make them easier to use. A share-link feature reportedly left more than one hundred thousand LLM conversations publicly discoverable and archived (1186). A major hiring platform reportedly exposed data for around 64 million job applicants via a default login and an API weakness (1179). Users and developers also encountered regressions and misinterpretations across products: an alleged Gemini sexual role-play session for an account registered as a minor (1157), a reported CLI sequence that deleted local files (1178), and an alleged repetitive self-deprecation loop (1173). Access boundaries blurred in other failures, with Copilot reportedly surfacing cached content from since-private GitHub repos (1174) and a reported Microsoft 365 Copilot audit gap for file access (1218). Preview builds of Windows Recall reportedly stored passwords and Social Security numbers in plaintext (1176). Taken in isolation, any given incident may seem minor, but together they show how minor efficiencies can compound into systemic risk.
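The share-link exposure (1186) turned on a mundane technical question: whether a publicly shared page is crawlable and indexable at all. The sketch below is a rough illustration of that check, not a description of any platform's actual implementation; the example URL is hypothetical, and limiting the check to robots.txt and the X-Robots-Tag header is an assumption made to keep it short.

```python
# Minimal sketch: is a shared page even eligible for search-engine indexing?
# Hypothetical example; real platforms control this in many other ways as well.
import urllib.parse
import urllib.request
import urllib.robotparser

def indexable(url: str, user_agent: str = "*") -> bool:
    """True if robots.txt allows crawling and the response carries no noindex directive."""
    parts = urllib.parse.urlparse(url)

    # 1. Does robots.txt permit crawlers to fetch this path?
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    if not rp.can_fetch(user_agent, url):
        return False

    # 2. Does the response send an X-Robots-Tag: noindex header?
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        robots_tag = resp.headers.get("X-Robots-Tag", "")
    return "noindex" not in robots_tag.lower()

# Usage (hypothetical URL):
# print(indexable("https://example.com/share/abc123"))
```

A page that passes both checks is, by default, fair game for crawlers and archives, which is roughly the condition the exposed share links reportedly met.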

Reflective Analysis

It is natural to focus on dramatic breakdowns, but many problems arise from ordinary features that keep extending the logic of convenience. A tool meant to summarize code instead wipes it; a convenience feature meant to make signing in easier leaves whole databases exposed. Jacques Ellul's term technique is useful here. In The Technological Society (1954), he uses it to describe the social drive to refine and standardize processes in the name of efficiency. The point is not machine intention, but rather the institutional habit of trusting optimization. As Robert K. Merton notes in his 1964 foreword to the book, technique names a system organized around standardized means and predetermined results (p. vi). People and institutions keep building around convenience and speed until those values begin to crowd out judgment. Ellul writes, "Technique's own internal necessities are determinative. Technique has become a reality in itself, self-sufficient, with its special laws and its own determinations" (p. 134). Obvious malfunction is part of the story, but so is the way systems organized around convenience can make exposure more likely through routine operation, defective design, or weak safeguards. Ellul also warned that technique "never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality" (p. 97). These incidents often arise in the everyday operation of optimization, in those moments when convenience outpaces judgment.

Related IDs: 1157, 1158, 1171, 1172, 1173, 1174, 1176, 1178, 1179, 1186, 1187, 1198, 1216, 1218.

Youth risk, self-harm, and school surveillance

Summary of Incident IDs

This was a heavy quarter for incidents where chat or companion systems appear in proximity to self-harm and lethal outcomes. Families and complaints describe logs and transcripts that allegedly show suicidal ideation or harmful role-play preceding deaths or serious harm (1180, 1190, 1192, 1200, 1204, 1212). Causality is not always established, but these records still matter because they show how such systems can become part of a crisis and, in some accounts, reinforce distress rather than calm it. School alert products again figured in arrests, detentions, misflags, and blocked emails (1167, 1177, 1213, 1215). The common thread is escalation pathways that run through automated filters and conversational mirroring at moments when judgment is already under strain.

Reflective Analysis

The logs suggest systems that can mirror distress in ways that feel responsive without actually helping. Something that looks like empathy is often the user's pain being echoed back to them. As Lauren Berlant writes in the opening sentence of Cruel Optimism (2011), "A relation of cruel optimism exists when something you desire is actually an obstacle to your flourishing" (p. 1). In these exchanges, the comfort of being heard becomes inseparable from the mechanism that reproduces distress. OpenAI's own analysis treats harmful crisis-related responses as measurable deviations (1253), estimating that 0.05 percent of messages contain explicit or implicit indicators of suicidal ideation or intent and that recent model updates reduced noncompliant responses by 65 to 80 percent across several mental-health-related domains. But that framing risks describing harm as a rare error rather than as a danger that can arise in systems built to simulate care and sustain engagement. These exchanges can also become something a user remains attached to even as they cause harm. Berlant also writes, "Even those whom you would think of as defeated are living beings figuring out how to stay attached to life from within it, and to protect what optimism they have for that, at least" (p. 10). Rather than interrupting distress, the system may become one of the things the user remains attached to amid it.
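To give those percentages a sense of scale, here is a back-of-the-envelope sketch. The weekly message volume below is a hypothetical figure chosen purely for illustration; only the 0.05 percent rate and the 65 to 80 percent reduction range come from the reporting summarized in 1253.

```python
# Back-of-the-envelope illustration of the figures quoted above.
WEEKLY_MESSAGES = 1_000_000_000      # hypothetical volume, for scale only
IDEATION_RATE = 0.0005               # 0.05% of messages (reported estimate)
REDUCTION_RANGE = (0.65, 0.80)       # reported 65-80% drop in noncompliant responses

flagged = WEEKLY_MESSAGES * IDEATION_RATE
print(f"Messages with ideation indicators per week (hypothetical volume): {flagged:,.0f}")

for reduction in REDUCTION_RANGE:
    remaining = 1 - reduction
    print(f"A {reduction:.0%} reduction leaves noncompliant responses at {remaining:.0%} of their prior level.")
```

Even at a small per-message rate, the absolute counts remain large under any realistic volume, which is part of why the "rare error" framing sits uneasily with the scale of deployment.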

Related IDs: 1167, 1177, 1180, 1190, 1192, 1200, 1209, 1212, 1213, 1215, 1253.

Election-adjacent operations and geopolitical campaigns

Summary of Incident IDs

Coordinated operations have continued to mix synthetic media with conventional amplification. Moldova's parliamentary elections drew reported AI-generated posts and videos (1202). GoLaxy's activity around Hong Kong and Taiwan sits here as ongoing context (1169). Multiple LLMs reportedly produced outputs aligned with PRC censorship framings on sensitive topics (1188). North Korea's Kimsuky group reportedly launched new phishing campaigns that used purportedly AI-generated military ID deepfakes (1208). Domestic narrative shaping also circulated via labeled deepfakes and influencer megaphones, as with the Alexandria Ocasio-Cortez clip amplified by Chris Cuomo (1170), the Chuck Schumer and Hakeem Jeffries material noted above (1214), and the "King Trump" fighter jet video allegedly posted by President Trump depicting a defecation attack on "No Kings" protesters (1244).

Reflective Analysis

AI increasingly functions as a low-friction way of dressing intent in the appearance of legitimacy. Kimsuky's forged military IDs update an older tactic, imitating the appearance of authority closely enough to reduce scrutiny. The Moldova operation sits at the other pole: speed and volume overwhelm the timeline so that repetition becomes the deciding factor in what feels plausible. GoLaxy's tooling combines those moves, watching sentiment, selecting targets, and producing material at scale, so that influence becomes a workflow. The LLM findings add a different element: when training corpora and prompt language tilt toward an official narrative, the model's polite ambiguity operates as a delivery channel for censorship, and the switch between English and Chinese acts like a policy toggle. A screenshot captioned in two languages or a clipped TikTok sound can replicate the same script across feeds, but trimmed to fit attention spans. A labeled parody of Alexandria Ocasio-Cortez or a presidential deepfake spectacle may not persuade by proof, but it can still normalize synthetic style as acceptable political speech. What seems to mark these incidents is how compressed the interval between creation and circulation has become: by the time a claim is checked, the image has often already done its work and the narrative has moved on. The result is an information environment in which imitation outruns explanation.

Fredric Jameson writes in The Political Unconscious: Narrative as a Socially Symbolic Act (1981) that "ideology is not something which informs or invests symbolic production; rather the aesthetic act is itself ideological, and the production of aesthetic or narrative form is to be seen as an ideological act in its own right, with the function of inventing imaginary or formal 'solutions' to unresolvable social contradictions" (p. 79). That is, ideology doesn't just operate through overt argumentation, but through the very shapes and rhythms, or genres, by which meaning is made. In the context of these incident IDs, one might interpret Jameson to mean that each deployment of synthetic media performs ideology through the form itself. On any given feed, a genuine campaign video might mingle with a deepfake scam; the two scroll past in the same visual grammar, each framed as equally watchable. Generative systems have the habit of making persuasion indistinguishable from the act of noticing.

Related IDs: 1169, 1170, 1188, 1195, 1202, 1208, 1214, 1221, 1231, 1244.

Enterprise and developer ecosystem

Summary of Incident IDs

Malware authors reportedly turned local AI coding agents into recon tools by shipping tainted Nx npm packages whose postinstall script invoked unsafe CLI flags to inventory and exfiltrate secrets (1210). A Microsoft 365 Copilot flaw allegedly let users have files summarized without generating audit entries, undermining traceability until a quiet fix in mid-August 2025 (1218). In New South Wales, a former contractor reportedly uploaded a spreadsheet with personal and health data from Resilient Homes applicants to ChatGPT, affecting up to 3,000 people and prompting new restrictions on unsanctioned AI tools (1228). And DNB Bank says scammers allegedly used live deepfakes of its CEO and CFO in a Microsoft Teams meeting to push a multi-million-dollar transfer, an attempt the bank reportedly recognized and stopped (1246). Together, these cases point to a broader enterprise risk landscape, one shaped by agent abuse, audit gaps, careless data handling, and real-time executive impersonation.
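The Nx case (1210) hinged on a routine packaging hook, the install-time lifecycle script. The sketch below is not a reconstruction of the attack but a generic defensive illustration of one common audit step, listing locally installed npm packages that declare such scripts; the paths and output format are assumptions, not details from the incident reports.

```python
# Defensive sketch: enumerate installed npm packages that declare install-time
# lifecycle scripts, the hook reportedly abused in the tainted Nx packages.
import json
import pathlib

LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall"}

def packages_with_install_scripts(node_modules: str) -> list[tuple[str, dict]]:
    """Return (package name, lifecycle scripts) for every package declaring install hooks."""
    findings = []
    for manifest in pathlib.Path(node_modules).rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError, UnicodeDecodeError):
            continue
        if not isinstance(data, dict):
            continue
        scripts = data.get("scripts")
        if not isinstance(scripts, dict):
            continue
        hooks = {name: cmd for name, cmd in scripts.items() if name in LIFECYCLE_HOOKS}
        if hooks:
            findings.append((data.get("name", str(manifest.parent)), hooks))
    return findings

if __name__ == "__main__":
    for package, hooks in packages_with_install_scripts("node_modules"):
        print(package, hooks)
```

Listing the hooks does not make them safe, but it surfaces the execution path the reported attack relied on, which is where review or policy (for example, disabling install scripts by default) can intervene.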

Reflective Analysis

These incidents show how everyday work processes can become pathways for compromise. A package installer doubles as a map of someone's private development environment; a summarization feature erases the evidence meant to make work accountable; an uploaded spreadsheet leaves a government network and enters a commercial training set; a live meeting window carries synthetic executives into a financial system. From the standpoint of media theory, these incidents resemble in part what Roland Barthes in Mythologies (1957) called a "language-robbery" (p. 131). In each case, a routine form is pulled away from its stated purpose and made to serve another one. The same systems meant to keep work visible and under control become the channels through which exposure occurs. "Myth can reach everything, corrupt everything," Barthes writes, "and even the very act of refusing oneself to it" (p. 132). That warning fits here because oversight does not always stand outside the problem. It can be drawn into it when the very procedures meant to reduce risk end up reproducing the conditions of exposure. The "theft" here is procedural. Systems built to support accountability end up carrying the failures they were meant to prevent.

Related IDs: 1210, 1218, 1228, 1246.

Physical systems and embodied risk

Summary of Incident IDs

Several incidents captured different ways software-mediated systems can produce bodily risk or trigger embodied consequences. A reported Xiaomi SU7 Ultra crash in Chengdu raised questions about automated-driving features and an alleged post-collision failure of the vehicle's door system, after bystanders reportedly could not open the doors to rescue the driver (1232). An AWS outage reportedly caused connected mattresses to overheat and malfunction (1243). Elsewhere, automated detection and AI-generated guidance reportedly contributed to an armed police response and a mistaken understanding of hunting regulations that later drew warnings from wildlife agencies (1250, 1251).

Reflective Analysis

Maurice Merleau-Ponty offers a language for this kind of failure in Phenomenology of Perception (1945). In it, he writes that "the life of consciousness—cognitive life, the life of desire or perceptual life—is subtended by an 'intentional arc' which projects round about us our past, our future, our human setting, our physical, ideological and moral situation, or rather which results in our being situated in all these respects. It is this intentional arc which brings about the unity of the senses, of intelligence, of sensibility and motility. And it is this which 'goes limp' in illness" (p. 157). Merleau-Ponty is describing the embodied link between sensing and acting. That idea becomes relevant here because automated systems increasingly sit inside that link, shaping how signals are registered and how responses unfold. In plainer terms, there is one shared loop between taking in the world and responding to it. These incidents show what can happen when part of that loop is handed over to automated systems and then breaks at the moment it meets the world. In the Xiaomi crash, that break reportedly appeared after impact, when automation-linked door systems allegedly failed at the very moment an exit was needed. In the reported Eight Sleep outage, a cloud-dependent system meant to regulate comfort instead allegedly left users stuck with harmful settings and limited control. The reported Omnilert and hunting-regulation cases involve a different kind of break, where a snack bag is read as a weapon and an AI summary is treated as authoritative guidance. In both, something that still required interpretation was treated as reliable enough to act on. The problem is technical, but the breakdown also lies in the gap it opens between lived experience and automated response.
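In design terms, the Eight Sleep outage (1243) reads as a missing local fail-safe: when the cloud connection went silent, the device reportedly had no safe default to fall back on. The sketch below is a generic illustration of that idea, not the vendor's firmware; the class name, setpoint, and timeout are invented for the example.

```python
# Generic sketch of a local fail-safe for a cloud-connected temperature controller.
# All values and names are illustrative assumptions.
import time

SAFE_SETPOINT_C = 27.0     # hypothetical "do no harm" default temperature
CLOUD_TIMEOUT_S = 300      # how long a cloud instruction is trusted before failing safe

class TemperatureController:
    def __init__(self) -> None:
        self.cloud_setpoint_c = SAFE_SETPOINT_C
        self.last_cloud_update = float("-inf")

    def on_cloud_command(self, target_c: float) -> None:
        """Accept a remote setpoint and record when it arrived."""
        self.cloud_setpoint_c = target_c
        self.last_cloud_update = time.monotonic()

    def effective_setpoint(self) -> float:
        """Use the cloud setpoint only while it is fresh; otherwise revert to the safe default."""
        if time.monotonic() - self.last_cloud_update > CLOUD_TIMEOUT_S:
            return SAFE_SETPOINT_C
        return self.cloud_setpoint_c
```

The point is not this particular loop but where the decision sits: a device that cannot act safely without the network has delegated exactly the part of the loop that these incidents show breaking.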

Related IDs: 1232, 1243, 1250, 1251.

Practice, litigation, and institutional standards

Summary of Incident IDs

We are continuing to track failures in language-based systems whose outputs carry the weight of evidence without its substance. Defamation suits against Meta AI and Google's AI systems (1247, 1248) bring epistemic and judicial harms from generated text into more pronounced legal focus. Courts and counsel contended with reported fake citations in a Victoria murder case filing (1184) and in a Deloitte report to the Australian government (1193), as well as with attorney discipline culminating in disqualification (1196); two judges publicly acknowledged erroneous filings produced with AI tools (1252). Consumer protection actions remain a useful anchor for older algorithmic harms, such as the CFPB's penalty against Hello Digit over overdraft patterns (1222). This is the slow, necessary work of building consequences around simulated authority presented as fact.

Reflective Analysis

These incidents suggest that credibility is becoming detached from the recognizable forms that once helped anchor it. The institutional genres in question include judicial filings, peer-reviewed journal articles, and white papers. As genres, they help stabilize interpretation because readers encounter them as familiar forms with established expectations. Under ideal circumstances, the genres themselves signal a baseline level of trustworthiness before the content is even tested. Anis Bawarshi, in Genre and the Invention of the Writer: Reconsidering the Place of Invention in Composition (2003), writes, "The power of genre resides, in part, in this sleight of hand, in which social obligations to act become internalized as seemingly self-generated desires to act in certain discursive and ideological ways" (p. 91). His point is that genres do a lot more than organize communication. They help shape what counts as authority within a situation and make institutional expectations feel natural to the people writing within them. Even though we might approach an institutional genre as purely informational, a legal filing or government guidance paper is first signaling that certain claims have passed through recognizable forms of scrutiny. Generative systems can now reproduce those forms without inheriting the procedures that are supposed to give them force. Large language models are genre-agnostic. They are highly responsive to genre as style, but not to genre as an institutional practice or as part of lived social reality, because they do not inhabit the world those genres help organize. Erroneous outputs are always a problem, but there's also the question of how the social function of genre itself is weakening. Another way of putting this is that plausibility begins to stand in for the practices that once made some statements more trustworthy than others. The law, in turn, is increasingly forced to sort between forms that still look the same but can no longer always be assumed to carry the same relation to evidence.

Related IDs: 1164, 1183, 1184, 1191, 1193, 1196, 1222, 1247, 1248, 1252.

Some concluding reflections

The AI Incident Database has, since its inception, been a living archive, one that grows by absorbing the uncertainties it records. Arlette Farge, writing in The Allure of the Archives (1989), reminds us that the "archive's abundance is seductive," yet "at the same time it keeps the reader at arm's length" (p. 14). The AIID occupies that same interval between attraction and distance in that it is a body of evidence that draws us in even as it resists final comprehension. As Farge also writes, "exchange requires confrontation, because quite often the material resists, presenting the reader with a face that is enigmatic, at times even cryptic" (p. 72). Each new entry slightly rewrites the meaning of what came before, and that process alters in some measure how harm and accountability are understood. The archive preserves the past, but it also keeps remaking it, and in doing so encourages us to reconsider what we think we know and how we came to know it. The AIID has in its way been operating as a collective instrument of interpretation, and in the process of cataloguing incidents, it has also been witnessing the conditions that have made them legible. To maintain the database is also to think with it. Every act of description is a choice about what deserves attention in a world where perception is increasingly automated. Sometimes that choice is whether to log another deepfake ad or to wait until the pattern repeats often enough to name it a cluster, along with other internal practices that are still being thought through and worked out. The purpose of this space, beyond summarizing the incidents of the past few months, is to keep that awareness alive, to show that understanding failure is a continuing process rather than a fixed account, and that the archive remains open to revision as the systems it tracks keep changing.

Books cited in this report

  • Barthes, Roland. Mythologies. Selected and translated from the French by Annette Lavers. New York: The Noonday Press, Farrar, Straus & Giroux, 1972. Twenty-fifth printing, 1991. Originally published in French as Mythologies (Paris: Éditions du Seuil, 1957).
  • Bawarshi, Anis. Genre and the Invention of the Writer: Reconsidering the Place of Invention in Composition. Logan: Utah State University Press, 2003.
  • Berlant, Lauren. Cruel Optimism. Durham, NC: Duke University Press, 2011.
  • Ellul, Jacques. The Technological Society. Translated from the French La Technique ou l’Enjeu du siècle by John Wilkinson. With an introduction by Robert K. Merton. New York: Vintage Books, 1964. Originally published in French in 1954 by Librairie Armand Colin.
  • Farge, Arlette. The Allure of the Archives. Translated by Thomas Scott-Railton. Foreword by Natalie Zemon Davis. New Haven: Yale University Press, 2013. Originally published as Le Goût de l’archive (Paris: Éditions du Seuil, 1989).
  • Jameson, Fredric. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca, NY: Cornell University Press, 1981. Cornell Paperbacks edition, 1982.
  • Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London and New York: Routledge Classics, 2002. Originally published in French as Phénoménologie de la perception (Paris: Gallimard, 1945).

🗞️ New Incident IDs in the Database

  • Incident 1153 - Purported Deepfake Video of Donald Trump at NATO Summit Allegedly Used in YouTube Crypto Scam (7/7/2025)
  • Incident 1154 - Reported AI‑Generated Deepfake Impersonations of Public Figures Allegedly Used in Coordinated Stock Pump‑and‑Dump Scheme Targeting Israeli Investors (4/1/2025)
  • Incident 1155 - Purported AI‑Edited Police Evidence Image Posted to Facebook by Westbrook Police Department in Maine (7/1/2025)
  • Incident 1156 - Purported Deepfake Video Circulated Among Students Targets Orrington, Maine Educator (4/22/2025)
  • Incident 1157 - Google Gemini Reportedly Generates Sexual Role‑Play for Account Registered as Minor (7/14/2025)
  • Incident 1158 - Alleged Malicious Wiping Command Found in Amazon Q AI Assistant (7/17/2025)
  • Incident 1159 - Government‑Backed AI4Peat Mapping Tool Allegedly Misidentifies Granite Outcrops and Quarries as Peat (5/10/2025)
  • Incident 1160 - Reported AI-Aided Development of Explosive Devices by Long Island Resident Michael Gann (6/5/2025)
  • Incident 1161 - Airbnb Host Reportedly Accused of Using Purportedly AI‑Altered Photos in False Damage Claim (8/2/2025)
  • Incident 1162 - Purported Deepfake Depicts Altercation Between Bougainville President Ishmael Toroama and Papua New Guinea Prime Minister James Marape (4/7/2025)
  • Incident 1163 - Purported Face‑Swap Technology Reportedly Used to Circumvent Financial Platform's Facial Recognition Security in Nanjing, China (10/15/2024)
  • Incident 1164 - Google Healthcare AI Model Med‑Gemini Allegedly Produces Non‑Existent 'Basilar Ganglia' Term in Published Output (5/6/2024)
  • Incident 1165 - Grok Imagine Reportedly Produces Non-Consensual Taylor Swift Deepfake Nudes Without Explicit Prompting (8/5/2025)
  • Incident 1166 - ChatGPT Reportedly Suggests Sodium Bromide as Chloride Substitute, Leading to Bromism and Hospitalization (8/5/2025)
  • Incident 1167 - Alleged Gaggle Surveillance Alert Reportedly Leads to Arrest and Detention of 13-Year-Old Student in Fairview, Tennessee (8/15/2023)
  • Incident 1168 - Purportedly AI-Generated Image of British Army Colonels Captured in Ukraine Reportedly Circulates in Russian Media (8/4/2025)
  • Incident 1169 - Reported AI-Assisted Influence Campaigns by GoLaxy Allegedly Targeting Hong Kong and Taiwan Political Discourse (6/30/2020)
  • Incident 1170 - Chris Cuomo Amplifies Reportedly Labeled Deepfake Video of Alexandria Ocasio-Cortez, Purportedly Contributing to Misleading Political Narrative (8/6/2025)
  • Incident 1171 - Reported Hack of Tea Dating App Compromises Data from Purportedly AI-Supported Identity and Image Checks (7/25/2025)
  • Incident 1172 - Meta AI Bug in Deployed Service Reportedly Allowed Potential Access to Other Users' Prompts and Responses (12/26/2024)
  • Incident 1173 - Google Gemini Reportedly Exhibits Repetitive Self-Deprecating Responses Attributed to Bug (6/23/2025)
  • Incident 1174 - Microsoft Copilot Reportedly Able to Access Cached Data from Since-Private GitHub Repositories (2/26/2025)
  • Incident 1175 - Alleged Marine Park Orca Attack on 'Jessica Radcliffe' Reportedly an AI-Generated Hoax (8/9/2025)
  • Incident 1176 - Microsoft's Windows Recall Allegedly Stores Passwords and Social Security Numbers in Preview Mode (8/1/2025)
  • Incident 1177 - Purported AI Monitoring Software Reportedly Flags Unsent Joke Threat, Leading to Arizona Student Suspension (8/14/2025)
  • Incident 1178 - Google Gemini CLI Reportedly Deletes User Files After Misinterpreting Command Sequence (7/21/2025)
  • Incident 1179 - McDonald's McHire AI Recruitment Platform Reportedly Exposed Data of 64 Million Applicants via Default Login and API Vulnerability (6/30/2025)
  • Incident 1180 - Purported Meta AI Chatbot Persona 'Big sis Billie' Reportedly Engages in Romantic Roleplay and Provides Address, Linked to User's Fatal Fall (3/25/2025)
  • Incident 1181 - Purported AI-Generated Video Reportedly Depicts Illegal Tiger Sales in Bagerhat, Bangladesh (6/28/2025)
  • Incident 1182 - Purportedly AI-Generated Video of Tigers at Barasat Madrasa in West Bengal Reportedly Causes Panic and Student Absenteeism (7/30/2025)
  • Incident 1183 - Purported Error by Grok Reportedly Misrepresents Basketball Slang as Criminal Allegation Against NBA Player (4/16/2024)
  • Incident 1184 - Purported Fictitious AI-Generated Citations in Supreme Court of Victoria Murder Case Filing Lead to Delay and King's Counsel Apology (8/13/2025)
  • Incident 1185 - South Korean Actor Kim Seon-ho's Likeness Allegedly Misused in Purported Deepfake Impersonation Attempts Demanding Money (8/19/2025)
  • Incident 1186 - Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived (7/31/2025)
  • Incident 1187 - Google AI Overviews and ChatGPT Reportedly Cited Fraudulent Cruise Hotline, Allegedly Enabling Successful Scam (8/15/2025)
  • Incident 1188 - Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda (6/25/2025)
  • Incident 1189 - Joann Fabrics Shoppers Reportedly Defrauded by AI-Generated Scam Sites, Part of Purported Wave of ~100,000 Fake Domains Across 194 Brands (8/20/2025)
  • Incident 1190 - Family Reportedly Discovers ChatGPT Logs Detailing Suicidal Ideation Prior to Daughter's Death (8/18/2025)
  • Incident 1191 - NYPD Facial Recognition System Allegedly Produced Erroneous Match That Reportedly Resulted in Wrongful Detention of Trevis Williams (4/21/2025)
  • Incident 1192 - 16-Year-Old Allegedly Received Suicide Method Guidance from ChatGPT Before Death (4/11/2025)
  • Incident 1193 - Purportedly Taxpayer-Funded Deloitte Report for Australian Government Contains Alleged AI-Generated Citations and Fabricated Legal Quote (8/22/2025)
  • Incident 1194 - L.A. Woman Reportedly Defrauded of $81,000 and $350,000 Condo Proceeds in Romance Scam Using Purported Deepfake Videos of Actor Steve Burton (10/1/2024)
  • Incident 1195 - Nigeria-Based YouTube Network Allegedly Uses AI Voiceovers and Anchors to Amplify Pro-Kremlin Narratives (9/1/2025)
  • Incident 1196 - Judge Reportedly Disqualifies Butler Snow Lawyers Following Purported Use of ChatGPT-Fabricated Citations in Alabama Prison Litigation (5/7/2025)
  • Incident 1197 - Alleged AI-Generated Photo of Burning Truck in Manila Reportedly Triggered Firefighter Response (4/26/2025)
  • Incident 1198 - Grok 3 Reportedly Generated Graphic Threats and Hate Speech Targeting Minnesota Attorney Will Stancil (7/8/2025)
  • Incident 1199 - Purportedly AI-Generated Deepfake Image Reportedly Falsely Links Canadian Prime Minister Mark Carney to Jeffrey Epstein (1/28/2025)
  • Incident 1200 - Meta AI on Instagram Reportedly Facilitated Suicide and Eating Disorder Roleplay with Teen Accounts (8/28/2025)
  • Incident 1201 - Anthropic Reportedly Identifies AI Misuse in Extortion Campaigns, North Korean IT Schemes, and Ransomware Sales (8/27/2025)
  • Incident 1202 - Russian Disinformation Campaign Reportedly Used AI-Generated Posts and Videos to Target 2025 Moldovan Parliamentary Elections (9/7/2025)
  • Incident 1203 - Carter County, Montana Man Reportedly Charged for Creating AI-Generated Child Sexual Abuse Material (8/21/2025)
  • Incident 1204 - ChatGPT Allegedly Reinforced Delusions Before Greenwich, Connecticut Murder-Suicide (8/5/2025)
  • Incident 1205 - Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage (9/11/2025)
  • Incident 1206 - Purported AI-Generated Deepfake of Spiritual Leader Sadhguru Used in Investment Scam Allegedly Defrauding Bengaluru Woman of ₹3.75 Crore (~$425,000) (2/25/2025)
  • Incident 1207 - Purported AI-Generated Deepfake of Irish Fine Gael Presidential Candidate Heather Humphreys Used in Fake Investment Videos on Meta Platforms (9/11/2025)
  • Incident 1208 - North Korea's Kimsuky Group Reportedly Uses AI-Generated Military ID Deepfakes in Phishing Campaign (7/17/2025)
  • Incident 1209 - Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado (11/8/2023)
  • Incident 1210 - Malicious Nx npm Packages Reportedly Weaponize AI Coding Agents for Data Exfiltration (8/21/2025)
  • Incident 1211 - Google AI Overviews Reportedly Misrepresented Pizza Specials at Stefanina's in Wentzville, Missouri (8/19/2025)
  • Incident 1212 - Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play (9/20/2025)
  • Incident 1213 - Gaggle AI Monitoring at Lawrence, Kansas High School Reportedly Misflags Student Content and Blocks Emails (8/1/2025)
  • Incident 1214 - Donald Trump Reportedly Posts Purported AI-Modified Video of Chuck Schumer and Hakeem Jeffries During U.S. Government Shutdown Talks (9/29/2025)
  • Incident 1215 - Gaggle Alert Reportedly Leads to Arrest of 15-Year-Old in Volusia County, Florida, for School Threat the Student Claimed Was Not Serious (9/12/2025)
  • Incident 1216 - ChatGPT Reportedly Misleads Users About Soundslice Features, Allegedly Prompting Unplanned Product Development (7/7/2025)
  • Incident 1217 - Purportedly AI-Cloned Voice of Daughter Used in Elaborate Bond Scam Targeting Retired Couple in Hillsborough County, Florida (7/19/2025)
  • Incident 1218 - Microsoft 365 Copilot Vulnerability Allegedly Allowed File Access Without Audit Log (7/4/2025)
  • Incident 1219 - Meta Platforms Users Report Being Wrongfully Locked Out After Purported AI Moderation Flags Accounts for Child Exploitation Content (7/2/2025)
  • Incident 1220 - LAMEHUG Malware Reportedly Integrates Large Language Model for Real-Time Command Generation in a Purported APT28-Linked Cyberattack (7/10/2025)
  • Incident 1221 - Alleged AI-Enabled PRISONBREAK Influence Operation on X Reportedly Synchronizes Deepfake of Evin Prison Strike with Ongoing Attacks in Tehran (6/23/2025)
  • Incident 1222 - CFPB Reportedly Finds Hello Digit's Automated Savings Algorithm Caused Overdrafts and Orders Redress with $2.7M Penalty (8/10/2022)
  • Incident 1223 - Purportedly AI-Generated Deepfake Ads on Facebook Reportedly Impersonate Trump, Musk, Ocasio-Cortez, Warren, Sanders, and Leavitt to Promote Fraudulent Rebates (10/1/2025)
  • Incident 1224 - Purportedly AI-Generated Deepfake Ads on Instagram Impersonate Gisele Bündchen and Other Celebrities in Brazilian Fraud Scheme (10/1/2025)
  • Incident 1225 - Purportedly AI-Generated 'Home Invasion Prank' Images Reportedly Circulate in Ireland, Causing Panic and False Emergency Calls (10/3/2025)
  • Incident 1226 - Old Mutual Reportedly Warns of Purported Deepfake Videos Impersonating Chairman Trevor Manuel in Investment Scams (10/3/2025)
  • Incident 1227 - New Zealand Financial Markets Authority (FMA), Te Mana Tātai Hokohoko, Reportedly Flags Purported Deepfake Pump-and-Dump Network Using Social Media Ads (8/19/2025)
  • Incident 1228 - Alleged ChatGPT Misuse by Contractor Leads to Reported Data Exposure in New South Wales Resilient Homes Program (3/12/2025)
  • Incident 1229 - Gold Coast Man Reportedly Ordered to Pay $343,500 After Posting Purported Deepfake Pornographic Images of Australian Public Figures (9/26/2025)
  • Incident 1230 - Suspect in Palisades Fire Allegedly Consulted ChatGPT for Arson Tips and Legal Advice Before Blaze That Killed 12 and Destroyed 6,837 Structures (7/11/2024)
  • Incident 1231 - Purported AI-Generated Deepfake Video Reportedly Depicts Senator Chuck Schumer Endorsing Government Shutdown in NRSC Campaign Ad (10/17/2025)
  • Incident 1232 - Reportedly Fatal Xiaomi SU7 Ultra Crash in Chengdu Purportedly Involves Automated Driving Failure and Door Lock Malfunction (10/12/2025)
  • Incident 1233 - Purported Deepfake Video Allegedly Shows Conservative MP George Freeman Leaving Party for Reform UK (10/18/2025)
  • Incident 1234 - Purported AI-Generated Explicit Deepfakes of Sydney High School Students Reportedly Circulated Online (10/15/2025)
  • Incident 1235 - Chinese-Backed Operation Reportedly Used AI-Generated Deepfake Videos of Indian Stock Experts in Investment Fraud Campaign (7/1/2025)
  • Incident 1236 - Quantum AI Scam Reportedly Used AI-Generated Celebrity Endorsements and Spoofed Media Sites to Solicit Investments (1/1/2020)
  • Incident 1237 - Alleged Deepfake Video of Anthony Albanese Promotes Fake AUFIRST 'Tax Dividend' Trading Platform (8/4/2025)
  • Incident 1238 - OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions (10/10/2025)
  • Incident 1239 - Purported AI-Generated Deepfake of Steven Bartlett Reportedly Used to Promote Fake WhatsApp Investment Group (4/23/2025)
  • Incident 1240 - Purported AI-Generated Deepfake of Infosys Co-Founder N. R. Narayana Murthy Used in Investment Scam Allegedly Defrauding 79-Year-Old Bengaluru Woman of ₹35 Lakh (~$40,000) (6/27/2025)
  • Incident 1241 - Purported AI-Generated Video Reportedly Used in RM5,800 (~$1,400) Sextortion Attempt Targeting Malaysian Minor via Telegram (10/17/2025)
  • Incident 1242 - Purported AI-Generated Deepfake Videos Reportedly Used in Swedish Scam Campaign Impersonating Doctors Agnes Wold and Anders Tegnell (6/9/2025)
  • Incident 1243 - AWS Outage Reportedly Caused AI-Enabled Eight Sleep Smart Beds to Overheat and Malfunction (10/20/2025)
  • Incident 1244 - Purportedly AI-Generated 'King Trump' Fighter Jet Video Allegedly Posted by President Depicts Defecation Attack on 'No Kings' Protesters (10/19/2025)
  • Incident 1245 - Norwegian Student Reportedly Used AI-Generated Deepfake Videos in Spanish Coursework at University of South-Eastern Norway (8/2/2024)
  • Incident 1246 - Purportedly AI-Generated Deepfake Reportedly Used to Impersonate DNB Bank CFO and CEO in Live Teams Meeting (1/21/2025)
  • Incident 1247 - Meta AI Reportedly Generated Purportedly False Claims Linking Activist Robby Starbuck to January 6th Riot, Prompting Defamation Lawsuit (4/28/2025)
  • Incident 1248 - Google's Bard, Gemini, and Gemma AI Systems Allegedly Generated Defamatory Claims About Activist Robby Starbuck, Prompting Lawsuit (10/22/2025)
  • Incident 1249 - Virginia Candidate John Reid Reportedly Used AI-Generated Deepfake of Opponent Ghazala Hashmi in Simulated Political Debate (10/21/2025)
  • Incident 1250 - Alleged False Positive by Omnilert AI Gun Detection System Prompts Police Search at Baltimore County High School (10/20/2025)
  • Incident 1251 - Purportedly AI-Generated Hunting Regulation Errors Reportedly Lead to Idaho Citation and Multi-State Warnings from Wildlife Agencies (10/15/2025)
  • Incident 1252 - Judges in New Jersey and Mississippi Admit AI Tools Produced Erroneous Federal Court Filings (6/30/2025)
  • Incident 1253 - Large-Scale Mental Health Crises Allegedly Associated with ChatGPT Interactions (10/27/2025)

👇 Diving Deeper

  • Check out the Table View and List View for different ways to see and sort all incidents.
  • Explore clusters of similar incidents in Spatial Visualization.
  • Learn about alleged developers, deployers, and harmed parties on the Entities Page.

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.
