AIID Blog

AI Incident Roundup – August, September, and October 2025

Posted 2025-11-08 by Daniel Atherton.

Header image: At Templestowe, Arthur Streeton, 1889

🗄 Trending in the AIID

Across August, September, and October 2025, the AI Incident Database logged 101 new discrete incident IDs (1153 through 1253), along with their associated reporting. The entries range from transnational crime operations with live deepfake deployments and paid-ad scam networks to platform regressions, data exposures, disinformation campaigns, and youth-risk cases tied to conversational systems. Some events are new; others surfaced belatedly through litigation, regulator notices, local press, or as the culmination of long investigations. While some may register as sudden shocks, the more consistent pattern is a gradual loss of reliability across everyday systems: identity checks, workplace tools, customer support, school safety pipelines, and information feeds.

Over the past year, the genre of the roundup has evolved into both an incident record and a space to work through its own methods. Each entry tracks discrete harms while also tracing the epistemic drift that accompanies automation, such as the ways evidence begins to move differently once language and perception are machine-mediated. A fraud network rewriting celebrity faces into ad funnels or a chatbot's log revealing a pattern of distress both turn technical failure into a social event. Analysis is treated as a matter of accumulation: reading the incidents for what reportedly occurred and for how their patterns reveal a reordering of how information moves and how responsibility is assigned. The goal is not to merge storytelling with technical detail for its own sake, but to see whether description and interpretation can occupy the same page without one flattening the other. In principle, the AI Incident Database is a living archive, and one that shows how technical and social breakdowns co-produce one another. Understood in this frame, the genre is cultural work in the fullest sense: an attempt to read technical failures as expressions of how a society organizes meaning and trust.

This quarter's additions center on two pressures that now operate in full view: the automation of misrepresentation and the fragility of guardrails. Prominent examples include paid-traffic actors moving synthetic video and voice through conventional ad buys (1223, 1224, 1227) and enterprise and consumer platforms exposing more of their seams (1172, 1174, 1176, 1186, 1218). Youth-risk reports accumulated in ways that resist easy causal claims yet still demand notice (1190, 1192, 1200, 1212), while physical systems faltered under cloud dependencies (1243) and automation failures (1232).

Among the incidents added this quarter, special attention is given to the long-running Quantum AI investment-fraud cluster (1236). Originating around 2020 and active across jurisdictions, the Quantum AI operation reportedly relies on synthetic promotional materials and fabricated media environments that have matured into a persistent mechanism of transnational financial deception. Work is underway to make Incident 1236 a central node linking the disparate reports that document this continuing network of AI-mediated investment fraud and its evolving adaptations.

Deepfakes and the monetization of attention

The surprise, if there is one, lies in how unremarkable these systems have become. National regulators and firms documented paid-ad networks impersonating political figures and celebrities to pull users into fraud funnels (1223, 1224, 1227). Individual lures continued at scale: Sadhguru (1206), Heather Humphreys (an example incident within the constellation of Quantum AI scams captured in 1236) (1207), Anthony Albanese (1237), Narayana Murthy (1240), and a run of U.S. political figures, each pressed into fake investment pitches and rebate schemes (1223). Live impersonation matured beyond one-off stunts: a reported Teams meeting in which attackers posed as both CFO and CEO to steer a transaction (1246) shows that real-time, multi-actor deepfakes are now operational. Sextortion and minor-targeting clips persisted (1234, 1241). Political narrative seeding remained part of the same toolkit, with Chuck Schumer and Hakeem Jeffries deepfakes appearing both in a presidential post during shutdown talks (1214) and in a campaign ad (1231).

The force of these scams lies in the instant a face seems familiar and the mind accepts it without question. That power belongs to the systems that decide which likenesses appear, choosing what version of a person feels real enough to trust. Those being copied lose the image itself, and with it the small coherence that once made recognition mean something. Order maintains itself through the steady rhythm of what shows up on the screen, less through any deliberate act of judgment. The old line between true and false has thinned into routine process, the advertisements and ranking tools that keep information moving. Something that once felt like deception now feels like a kind of parallel infrastructure, and for this reason we have begun to track epistemic integrity harms more actively through our collection of entities. The variable that seems to disappear in these exchanges is not authenticity but delay, the moment in which recognition might have hesitated and asked what it was seeing. Each likeness appears as proof of itself, closing the gap that once made interpretation possible. Meaning arrives already stabilized, without the uncertainty that once anchored trust, and in that smoothness the possibility of judgment quietly erodes.

Related IDs: 1153, 1154, 1162, 1168, 1170, 1175, 1181, 1182, 1185, 1189, 1195, 1199, 1202, 1205, 1206, 1207, 1208, 1214, 1217, 1221, 1223, 1224, 1225, 1226, 1227, 1231, 1233, 1234, 1235, 1236, 1237, 1239, 1240, 1241, 1242, 1244, 1246, 1249.

Safety bugs and data exposure in mainstream platforms

Several incidents show how large systems fail at the edges. A share-link feature reportedly left more than one hundred thousand LLM conversations publicly discoverable and archived (1186). A major hiring platform reportedly exposed data for around 64 million job applicants via a default login and an API weakness (1179). Developers also saw regressions and misinterpretations across product surfaces, such as an alleged Gemini sexual role-play for an account registered as a minor (1157), a reported CLI chain that deleted local files (1178), and an alleged repetitive self-deprecation loop (1173). Access boundaries blurred in some failures, namely Copilot surfacing cached content from since-private GitHub repos (1174) and a Microsoft 365 Copilot audit gap for file access (1218). Preview builds of Windows Recall reportedly stored passwords and Social Security numbers in plaintext (1176). Taken in isolation, any given incident may seem minor, but together they increasingly show how small conveniences can compound into systemic risk.
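
The mechanics of the share-link exposure (1186) are worth making concrete before turning to interpretation. The sketch below, in Python with a hypothetical endpoint and in-memory store (none of it drawn from any vendor's actual implementation), illustrates the two hedges such a feature needs: a high-entropy token so links cannot be enumerated, and an explicit no-index signal so a link that surfaces publicly is not crawled and archived. The reported incident is, in effect, what happens when the second hedge is absent.

```python
# A minimal sketch, assuming a hypothetical Flask service with an in-memory
# store, of the failure mode behind Incident 1186: a share page reachable by
# URL alone is also reachable by crawlers unless explicitly excluded.
import secrets

from flask import Flask, abort, make_response

app = Flask(__name__)
SHARED_CONVERSATIONS: dict[str, str] = {}  # token -> conversation text (stand-in store)

def create_share_link(conversation: str) -> str:
    """Mint a high-entropy token; a guessable or sequential ID would widen exposure."""
    token = secrets.token_urlsafe(32)
    SHARED_CONVERSATIONS[token] = conversation
    return f"/share/{token}"

@app.route("/share/<token>")
def view_shared(token: str):
    conversation = SHARED_CONVERSATIONS.get(token)
    if conversation is None:
        abort(404)
    resp = make_response(conversation)
    # Without this header (or an equivalent robots exclusion), any share link
    # that is posted publicly can be indexed and archived, the reported failure.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp
```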

These breakdowns unfold when design drifts beyond its frame of judgment and begins advancing through the routines that once made it reliable. A tool meant to summarize code instead wipes it; a convenience feature meant to smooth sign-in leaves whole databases exposed. Jacques Ellul, in The Technological Society (1954), uses the word technique to describe the rationalized drive to refine and standardize every process for its own sake: not a self-directed machine intelligence, but a social and institutional tendency to pursue efficiency as an end in itself. It is less a property of machines than a cultural rhythm, a pattern of human organization that refines itself through habit (p. vi). The momentum does not belong to the code itself but to the people and institutions that keep refining it, trusting iteration more than reflection. He writes, "Technique's own internal necessities are determinative. Technique has become a reality in itself, self-sufficient, with its special laws and its own determinations" (p. 134). Through that interpretive lens, one can read how the features meant to smooth experience become the vectors of exposure. The code performs exactly as intended, extending the logic of convenience until it begins to displace reflection. The harm emerges not from runaway autonomy but from routine continuity, where usefulness outpaces understanding. Ellul warned that technique "never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality" (p. 97). These failures belong to the everyday operation of optimization, not its breakdown: they occur where design anticipates rather than listens, where human intention is absorbed into the momentum of maintenance. Through those seams slips the brief pause where choice once resided.

Related IDs: 1157, 1158, 1171, 1172, 1173, 1174, 1176, 1178, 1179, 1186, 1187, 1198, 1216, 1218.

Youth risk, self-harm, and school surveillance

This was a heavy quarter for incidents where chat or companion systems appear in proximity to self-harm and lethal outcomes. Families and complaints describe logs and transcripts that allegedly show suicidal ideation or harmful role-play preceding deaths or serious harm (1180, 1190, 1192, 1200, 1204, 1212). Causality is not established; the records matter because they document how these systems can become part of the narrative of a crisis and, in some descriptions, appear to affirm rather than de-escalate. School alert products again figured in arrests, detentions, misflags, and email blocking (1167, 1177, 1213, 1215). The common thread is escalation pathways that run through automated filters and conversational mirroring at moments when judgment is most strained.

The logs suggest systems that mistake recognition for relief. They mirror distress so faithfully that reflection itself seems to vanish. What resembles empathy is often the echo of the user's pain, returned so cleanly that it erases the line between response and repetition. As Lauren Berlant states in the opening sentence of Cruel Optimism (2011), "A relation of cruel optimism exists when something you desire is actually an obstacle to your flourishing" (p. 1). In these exchanges, the comfort of being heard becomes inseparable from the mechanism that reproduces the distress. In one family's logs, the model echoes back the user's hopelessness almost verbatim, line for line. OpenAI's own analysis treats these moments as quantifiable deviations (1253): responses that "do not fully comply with desired behavior" in 0.05 percent of messages, now reportedly reduced by 65 to 80 percent across model updates. Yet that framing treats harm as a measurable anomaly rather than a structural feature of systems trained to simulate care. The model's success depends on sustaining engagement; its failures only appear when the pattern breaks. The bond is sustained by the frictionless continuation of speech itself, the sense that meaning persists and that care exists because the conversation does. In that sameness, the difference between comfort and reinforcement becomes hard to discern. The exchanges escalate through their steadiness, through how easily they keep someone inside the loop of their own language. Berlant writes, "Even those whom you would think of as defeated are living beings figuring out how to stay attached to life from within it, and to protect what optimism they have for that, at least" (p. 10). The chatbot's repetition becomes a way of keeping the scene of attachment alive, a structure that sustains itself through the motion of language rather than its meaning. Each exchange folds the user's words back on themselves, turning expression into material for the system's own coherence. Care in this context becomes a kind of motion without connection, a conversation that repeats itself until the lack of difference feels like loss.

Related IDs: 1167, 1177, 1180, 1190, 1192, 1200, 1209, 1212, 1213, 1215, 1253.

Election-adjacent operations and geopolitical campaigns

Coordinated operations continued to mix synthetic media with conventional amplification. Moldova's parliamentary elections drew reported AI-generated posts and videos (1202). GoLaxy's activity around Hong Kong and Taiwan sits here as ongoing context (1169). Multiple LLMs reportedly produced outputs aligned with PRC censorship framings on sensitive topics (1188). North Korea's Kimsuky group reportedly launched new phishing campaigns that used AI-generated military ID deepfakes (1208). Domestic narrative shaping also circulated via labeled deepfakes and influencer megaphones, as with the Ocasio-Cortez clip amplified by Chris Cuomo (1170), the Chuck Schumer and Hakeem Jeffries material noted above (1214), and the "King Trump" fighter jet video allegedly posted by President Trump depicting a defecation attack on "No Kings" protesters (1244).

Read across these incident IDs, AI now functions as a low-friction apparatus for outfitting intent with credible surfaces. Kimsuky's forged military IDs refine the old bait by matching the look and cadence of authority long enough to clear a gate, which turns "is this real?" into "this looks routine." The Moldova operation shows the other pole, where speed and volume overwhelm the timeline so that repetition, not argument, decides what feels plausible. GoLaxy's tooling combines those moves by watching sentiment and selecting targets, then producing material at scale, so that influence becomes a workflow rather than a campaign. The LLM findings add a quieter mechanic: when training corpora and prompt language tilt toward an official narrative, the model's polite ambiguity becomes a delivery channel for censorship, and the switch between English and Chinese acts like a policy toggle. A screenshot captioned in two languages or a clipped TikTok sound carries the same script across feeds, trimmed to fit attention spans. Domestic amplification then closes the loop, since a labeled parody of Alexandria Ocasio-Cortez or a presidential deepfake spectacle does not persuade by proof. Rather, it normalizes synthetic style as acceptable political talk. Unidentified state actors, vendors, broadcasters, political leaders, and platforms, all aided by generative systems such as ChatGPT that collapse the interval between invention and circulation, operate in a space where any act of checking lags behind the moment of contact, after the image has convinced and the claim has already circulated. The result is a system where imitation beats explanation and where distribution itself performs belief.

In The Political Unconscious: Narrative as a Socially Symbolic Act (1981), Fredric Jameson writes that "ideology is not something which informs or invests symbolic production; rather the aesthetic act is itself ideological, and the production of aesthetic or narrative form is to be seen as an ideological act in its own right, with the function of inventing imaginary or formal 'solutions' to unresolvable social contradictions" (p. 79). That is, ideology doesn't operate only through overt content but through the very shapes and rhythms by which meaning is made. In the context of these incident IDs, one might interpret Jameson to mean that each deployment of synthetic media performs ideology through form rather than through argumentation. The generated image or video clip, or even the interface itself, behaves as a symbolic fix to contradictions that cannot otherwise be resolved. The systems that claim to bring information closer to people also dissolve the distance that once let truth feel distinct from trend. On any given feed, a genuine campaign video might mingle with a deepfake scam; they now scroll past in the same visual grammar, each framed as equally watchable. Generative systems make persuasion indistinguishable from the act of noticing, turning confusion itself into the medium of belief.

Related IDs: 1169, 1188, 1195, 1202, 1208, 1214, 1221, 1231.

Enterprise and developer ecosystem

Malware authors reportedly turned local AI coding agents into recon tools by shipping tainted Nx npm packages whose postinstall script invoked unsafe CLI flags to inventory and exfiltrate secrets (1210). A Microsoft 365 Copilot flaw allegedly let users have files summarized without generating audit entries, undermining traceability until a quiet fix in mid-August 2025 (1218). In New South Wales, a former contractor reportedly uploaded a spreadsheet with personal and health data from Resilient Homes applicants to ChatGPT, affecting up to 3,000 people and prompting new restrictions on unsanctioned AI tools (1228). And DNB Bank says scammers used live deepfakes of its CEO and CFO in a Microsoft Teams meeting to push a multi-million-dollar transfer, an attempt the bank recognized and stopped (1246). Together these cases mark a shift in risk from classic phishing to agent abuse, audit blind spots, careless data handling, and real-time executive impersonation.
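
The Nx episode (1210) turns on a mundane mechanism: npm lifecycle hooks such as postinstall run automatically, with the installing user's permissions, the moment a package is installed. As a defensive illustration (not the reported attack code, and assuming only a standard node_modules layout), the sketch below lists every installed package that declares an install-time hook so those hooks can be reviewed; npm's --ignore-scripts flag skips them wholesale.

```python
# A minimal audit sketch: walk node_modules and surface every package that
# declares an npm lifecycle hook, the mechanism reportedly abused in 1210.
import json
from pathlib import Path

# npm runs these hooks automatically during `npm install`.
LIFECYCLE_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def audit_install_scripts(node_modules: Path) -> None:
    """Print each installed package that declares an install-time hook."""
    for manifest in sorted(node_modules.glob("**/package.json")):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        scripts = data.get("scripts", {}) if isinstance(data, dict) else {}
        for hook in sorted(LIFECYCLE_HOOKS & scripts.keys()):
            print(f"{manifest.parent.name}: {hook} -> {scripts[hook]}")

if __name__ == "__main__":
    audit_install_scripts(Path("node_modules"))
```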

These episodes expose how ordinary workflows can quietly become points of compromise. A package installer doubles as a map of someone's private development environment; a summarization feature erases the evidence meant to make work accountable; an uploaded spreadsheet leaves a government network and enters a commercial training set; a live meeting window carries synthetic executives into a financial system. Seen through the lens of media theory, these instances embody what Roland Barthes in Mythologies (1957) called a "language-robbery" (p. 131). In each case, the function of mediation (that is, translating activity into legible, governed form) is displaced by a secondary use that feeds on that same legibility. They are operations that appear routine but are stripped of their communicative function and used to convey something unintended. "Myth can reach everything, corrupt everything," Barthes writes, "and even the very act of refusing oneself to it" (p. 132). Barthes's warning applies here: attempts to contain or neutralize a system's risks can become part of the same pattern of failure. Oversight doesn't stand outside the problem; it is often absorbed by it when its methods rely on the same technical and procedural foundations that created the exposure. The "theft" here is procedural, occurring when systems built for accountability reproduce the very conditions they are meant to contain.

Related IDs: 1210, 1218, 1228, 1246.

Physical systems and embodied risk

Two incidents captured the translation from software to bodily risk. A reported automated-driving failure and door-lock malfunction in a Xiaomi SU7 Ultra crash in Chengdu resulted in fatalities (1232). An AWS outage reportedly caused connected mattresses to overheat and malfunction (1243). Elsewhere, automated detection errors triggered police response and wildlife enforcement mistakes (1250, 1251). These incidents show that design choices around connectivity or detection stop being abstract once they touch the body or trigger a response. A shared current runs through these incidents, in which systems built to mediate control instead expose dependence. These systems begin with a technical promise (e.g., automation that locks safely, regulates temperature, detects danger, or retrieves accurate information), but end where mediation fails at the level of embodiment.

Maurice Merleau-Ponty offers a language for this failure in Phenomenology of Perception (1945). In it, he writes that "the life of consciousness—cognitive life, the life of desire or perceptual life—is subtended by an 'intentional arc' which projects round about us our past, our future, our human setting, our physical, ideological and moral situation, or rather which results in our being situated in all these respects. It is this intentional arc which brings about the unity of the senses, of intelligence, of sensibility and motility. And it is this which 'goes limp' in illness" (p. 157). In everyday terms, he means that perception and thought, as well as movement, depend on the same living circuit. When that circuit goes limp, the feedback between body and world weakens. Each incident shows a distinct form of that slackening. The Xiaomi crash collapses the continuity between intention and action (i.e., a hand reaching for a handle finds no mechanical resistance, only sealed glass). The Eight Sleep outage breaks the sensory loop that distinguishes comfort from danger, turning feedback into harm. The gun-detection and hunting-regulation errors transform orientation itself (i.e., the basic act of reading a signal or a rule) into an automated process that no longer feels interpretive. The arc connecting perception to consequence sags, and what remains is behavior without situated awareness. Reflexes are detached from understanding. Within these systems, the hinge between awareness and motion has become an invisible function, humming until it doesn't.

Related IDs: 1232, 1243, 1250, 1251.

Practice, litigation, and institutional standards

We are continuing to track failures in language-based systems whose outputs carry the weight of evidence without its substance. Defamation suits against Meta AI and Google's AI systems (1247, 1248) bring epistemic and judicial harms from generated text into sharper legal focus. Courts and counsel contended with fabricated citations, such as a Victoria murder case filing (1184), a Deloitte report to the Australian government (1193), and attorney discipline culminating in disqualification (1196); two judges publicly acknowledged erroneous filings produced with AI tools (1252). Consumer protection actions remain a useful anchor for older algorithmic harms, such as the CFPB's penalty against Hello Digit over overdraft patterns (1222). This is the slow, necessary work of building consequences around simulated authority presented as fact.

Viewed through the lens of genre (that is, by attending to how recurring forms organize meaning and confer authority), these incidents point to a breakdown in the conventions that once bound credibility to recognizable modes of expression. Institutional genres such as the legal brief or the expert report (e.g., a corporate white paper or a peer-reviewed journal article) stabilize interpretation through repetition. Their format signals trustworthiness before content is even read. Anis Bawarshi, in Genre and the Invention of the Writer: Reconsidering the Place of Invention in Composition (2003), writes, "The power of genre resides, in part, in this sleight of hand, in which social obligations to act become internalized as seemingly self-generated desires to act in certain discursive and ideological ways" (p. 91). Bawarshi means that genres do more than organize communication; they shape how people understand what they are doing and who they are when they write. A genre such as a legal filing or a government guidance paper helps define what counts as evidence or authority within a given situation. Over time, these expectations become absorbed into habit, so that the actions they prescribe feel self-chosen. We come to experience social and institutional norms as personal inclination. This is the "sleight of hand" Bawarshi identifies: the genre makes external pressures appear as individual intent. Generative systems now replicate those forms without inheriting the procedural rigor or the accountability that ideally gives them force. The result is a rising likelihood of erroneous outputs and a rupture in the social function of genre itself. Certain genres are meant to guarantee verifiability through shared form, but the performance of plausibility through simulation is becoming routine, and the law finds itself adjudicating between two genres that look identical yet can no longer be depended upon to mean the same thing.

Related IDs: 1164, 1183, 1184, 1191, 1193, 1196, 1222, 1247, 1248, 1252.

Concluding Thoughts

The AI Incident Database has, since its inception, been a living archive, as mentioned above, and one that grows by absorbing the uncertainties it records. Arlette Farge, writing in The Allure of the Archives (1989), reminds us that the "archive's abundance is seductive," yet "at the same time it keeps the reader at arm's length" (p. 14). The AIID's living archive occupies that same interval between attraction and distance: it is a body of evidence that draws us in even as it resists final comprehension. As Farge also writes, "exchange requires confrontation, because quite often the material resists presenting the reader with a face that is enigmatic, at times even cryptic" (p. 72). Each new entry slightly rewrites the meaning of what came before, and that process shifts how harm and accountability are understood. The archive preserves the past but also keeps it moving, and in doing so it encourages us to reconsider what we think we know and how we came to know it. The AIID has in its way been operating as a collective instrument of interpretation, and in cataloguing incidents, it has also been witnessing the conditions that made them legible. To maintain the database is also to think with it. Every act of description is a choice about what deserves attention in a world where perception is increasingly automated. Sometimes that choice is whether to log another deepfake ad or wait until the pattern repeats enough to name it a cluster. The purpose of this space, beyond summarizing the incidents of the past few months, is to keep that awareness alive, to show that understanding failure is a continuing process rather than a fixed account, and that the archive remains open to revision as the systems it tracks keep changing.

Books cited in this report

  • Barthes, Roland. Mythologies. Selected and translated from the French by Annette Lavers. New York: The Noonday Press, Farrar, Straus & Giroux, 1972. Twenty-fifth printing, 1991. Originally published in French as Mythologies (Paris: Éditions du Seuil, 1957).
  • Bawarshi, Anis. Genre and the Invention of the Writer: Reconsidering the Place of Invention in Composition. Logan: Utah State University Press, 2003.
  • Berlant, Lauren. Cruel Optimism. Durham, NC: Duke University Press, 2011.
  • Ellul, Jacques. The Technological Society. Translated from the French La Technique ou l’Enjeu du siècle by John Wilkinson. With an introduction by Robert K. Merton. New York: Vintage Books, 1964. Originally published in French in 1954 by Librairie Armand Colin.
  • Farge, Arlette. The Allure of the Archives. Translated by Thomas Scott-Railton. Foreword by Natalie Zemon Davis. New Haven: Yale University Press, 2013. Originally published as Le Goût de l’archive (Paris: Éditions du Seuil, 1989).
  • Jameson, Fredric. The Political Unconscious: Narrative as a Socially Symbolic Act. Ithaca, NY: Cornell University Press, 1981. Cornell Paperbacks edition, 1982.
  • Merleau-Ponty, Maurice. Phenomenology of Perception. Translated by Colin Smith. London and New York: Routledge Classics, 2002. Originally published in French as Phénoménologie de la perception (Paris: Gallimard, 1945).

🗞️ New Incident IDs in the Database

  • Incident 1153 - Purported Deepfake Video of Donald Trump at NATO Summit Allegedly Used in YouTube Crypto Scam (7/7/2025)
  • Incident 1154 - Reported AI‑Generated Deepfake Impersonations of Public Figures Allegedly Used in Coordinated Stock Pump‑and‑Dump Scheme Targeting Israeli Investors (4/1/2025)
  • Incident 1155 - Purported AI‑Edited Police Evidence Image Posted to Facebook by Westbrook Police Department in Maine (7/1/2025)
  • Incident 1156 - Purported Deepfake Video Circulated Among Students Targets Orrington, Maine Educator (4/22/2025)
  • Incident 1157 - Google Gemini Reportedly Generates Sexual Role‑Play for Account Registered as Minor (7/14/2025)
  • Incident 1158 - Alleged Malicious Wiping Command Found in Amazon Q AI Assistant (7/17/2025)
  • Incident 1159 - Government‑Backed AI4Peat Mapping Tool Allegedly Misidentifies Granite Outcrops and Quarries as Peat (5/10/2025)
  • Incident 1160 - Reported AI-Aided Development of Explosive Devices by Long Island Resident Michael Gann (6/5/2025)
  • Incident 1161 - Airbnb Host Reportedly Accused of Using Purportedly AI‑Altered Photos in False Damage Claim (8/2/2025)
  • Incident 1162 - Purported Deepfake Depicts Altercation Between Bougainville President Ishmael Toroama and Papua New Guinea Prime Minister James Marape (4/7/2025)
  • Incident 1163 - Purported Face‑Swap Technology Reportedly Used to Circumvent Financial Platform's Facial Recognition Security in Nanjing, China (10/15/2024)
  • Incident 1164 - Google Healthcare AI Model Med‑Gemini Allegedly Produces Non‑Existent 'Basilar Ganglia' Term in Published Output (5/6/2024)
  • Incident 1165 - Grok Imagine Reportedly Produces Non-Consensual Taylor Swift Deepfake Nudes Without Explicit Prompting (8/5/2025)
  • Incident 1166 - ChatGPT Reportedly Suggests Sodium Bromide as Chloride Substitute, Leading to Bromism and Hospitalization (8/5/2025)
  • Incident 1167 - Alleged Gaggle Surveillance Alert Reportedly Leads to Arrest and Detention of 13-Year-Old Student in Fairview, Tennessee (8/15/2023)
  • Incident 1168 - Purportedly AI-Generated Image of British Army Colonels Captured in Ukraine Reportedly Circulates in Russian Media (8/4/2025)
  • Incident 1169 - Reported AI-Assisted Influence Campaigns by GoLaxy Allegedly Targeting Hong Kong and Taiwan Political Discourse (6/30/2020)
  • Incident 1170 - Chris Cuomo Amplifies Reportedly Labeled Deepfake Video of Alexandria Ocasio-Cortez, Purportedly Contributing to Misleading Political Narrative (8/6/2025)
  • Incident 1171 - Reported Hack of Tea Dating App Compromises Data from Purportedly AI-Supported Identity and Image Checks (7/25/2025)
  • Incident 1172 - Meta AI Bug in Deployed Service Reportedly Allowed Potential Access to Other Users' Prompts and Responses (12/26/2024)
  • Incident 1173 - Google Gemini Reportedly Exhibits Repetitive Self-Deprecating Responses Attributed to Bug (6/23/2025)
  • Incident 1174 - Microsoft Copilot Reportedly Able to Access Cached Data from Since-Private GitHub Repositories (2/26/2025)
  • Incident 1175 - Alleged Marine Park Orca Attack on 'Jessica Radcliffe' Reportedly an AI-Generated Hoax (8/9/2025)
  • Incident 1176 - Microsoft's Windows Recall Allegedly Stores Passwords and Social Security Numbers in Preview Mode (8/1/2025)
  • Incident 1177 - Purported AI Monitoring Software Reportedly Flags Unsent Joke Threat, Leading to Arizona Student Suspension (8/14/2025)
  • Incident 1178 - Google Gemini CLI Reportedly Deletes User Files After Misinterpreting Command Sequence (7/21/2025)
  • Incident 1179 - McDonald's McHire AI Recruitment Platform Reportedly Exposed Data of 64 Million Applicants via Default Login and API Vulnerability (6/30/2025)
  • Incident 1180 - Purported Meta AI Chatbot Persona 'Big sis Billie' Reportedly Engages in Romantic Roleplay and Provides Address, Linked to User's Fatal Fall (3/25/2025)
  • Incident 1181 - Purported AI-Generated Video Reportedly Depicts Illegal Tiger Sales in Bagerhat, Bangladesh (6/28/2025)
  • Incident 1182 - Purportedly AI-Generated Video of Tigers at Barasat Madrasa in West Bengal Reportedly Causes Panic and Student Absenteeism (7/30/2025)
  • Incident 1183 - Purported Error by Grok Reportedly Misrepresents Basketball Slang as Criminal Allegation Against NBA Player (4/16/2024)
  • Incident 1184 - Purported Fictitious AI-Generated Citations in Supreme Court of Victoria Murder Case Filing Lead to Delay and King's Counsel Apology (8/13/2025)
  • Incident 1185 - South Korean Actor Kim Seon-ho's Likeness Allegedly Misused in Purported Deepfake Impersonation Attempts Demanding Money (8/19/2025)
  • Incident 1186 - Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived (7/31/2025)
  • Incident 1187 - Google AI Overviews and ChatGPT Reportedly Cited Fraudulent Cruise Hotline, Allegedly Enabling Successful Scam (8/15/2025)
  • Incident 1188 - Multiple LLMs Reportedly Generated Responses Aligning with Purported CCP Censorship and Propaganda (6/25/2025)
  • Incident 1189 - Joann Fabrics Shoppers Reportedly Defrauded by AI-Generated Scam Sites, Part of Purported Wave of ~100,000 Fake Domains Across 194 Brands (8/20/2025)
  • Incident 1190 - Family Reportedly Discovers ChatGPT Logs Detailing Suicidal Ideation Prior to Daughter's Death (8/18/2025)
  • Incident 1191 - NYPD Facial Recognition System Allegedly Produced Erroneous Match That Reportedly Resulted in Wrongful Detention of Trevis Williams (4/21/2025)
  • Incident 1192 - 16-Year-Old Allegedly Received Suicide Method Guidance from ChatGPT Before Death (4/11/2025)
  • Incident 1193 - Purportedly Taxpayer-Funded Deloitte Report for Australian Government Contains Alleged AI-Generated Citations and Fabricated Legal Quote (8/22/2025)
  • Incident 1194 - L.A. Woman Reportedly Defrauded of $81,000 and $350,000 Condo Proceeds in Romance Scam Using Purported Deepfake Videos of Actor Steve Burton (10/1/2024)
  • Incident 1195 - Nigeria-Based YouTube Network Allegedly Uses AI Voiceovers and Anchors to Amplify Pro-Kremlin Narratives (9/1/2025)
  • Incident 1196 - Judge Reportedly Disqualifies Butler Snow Lawyers Following Purported Use of ChatGPT-Fabricated Citations in Alabama Prison Litigation (5/7/2025)
  • Incident 1197 - Alleged AI-Generated Photo of Burning Truck in Manila Reportedly Triggered Firefighter Response (4/26/2025)
  • Incident 1198 - Grok 3 Reportedly Generated Graphic Threats and Hate Speech Targeting Minnesota Attorney Will Stancil (7/8/2025)
  • Incident 1199 - Purportedly AI-Generated Deepfake Image Reportedly Falsely Links Canadian Prime Minister Mark Carney to Jeffrey Epstein (1/28/2025)
  • Incident 1200 - Meta AI on Instagram Reportedly Facilitated Suicide and Eating Disorder Roleplay with Teen Accounts (8/28/2025)
  • Incident 1201 - Anthropic Reportedly Identifies AI Misuse in Extortion Campaigns, North Korean IT Schemes, and Ransomware Sales (8/27/2025)
  • Incident 1202 - Russian Disinformation Campaign Reportedly Used AI-Generated Posts and Videos to Target 2025 Moldovan Parliamentary Elections (9/7/2025)
  • Incident 1203 - Carter County, Montana Man Reportedly Charged for Creating AI-Generated Child Sexual Abuse Material (8/21/2025)
  • Incident 1204 - ChatGPT Allegedly Reinforced Delusions Before Greenwich, Connecticut Murder-Suicide (8/5/2025)
  • Incident 1205 - Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage (9/11/2025)
  • Incident 1206 - Purported AI-Generated Deepfake of Spiritual Leader Sadhguru Used in Investment Scam Allegedly Defrauding Bengaluru Woman of ₹3.75 Crore (~$425,000) (2/25/2025)
  • Incident 1207 - Purported AI-Generated Deepfake of Irish Fine Gael Presidential Candidate Heather Humphreys Used in Fake Investment Videos on Meta Platforms (9/11/2025)
  • Incident 1208 - North Korea's Kimsuky Group Reportedly Uses AI-Generated Military ID Deepfakes in Phishing Campaign (7/17/2025)
  • Incident 1209 - Lawsuit Alleges Character AI Chatbot Contributed to Death of 13-Year-Old Juliana Peralta in Colorado (11/8/2023)
  • Incident 1210 - Malicious Nx npm Packages Reportedly Weaponize AI Coding Agents for Data Exfiltration (8/21/2025)
  • Incident 1211 - Google AI Overviews Reportedly Misrepresented Pizza Specials at Stefanina's in Wentzville, Missouri (8/19/2025)
  • Incident 1212 - Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play (9/20/2025)
  • Incident 1213 - Gaggle AI Monitoring at Lawrence, Kansas High School Reportedly Misflags Student Content and Blocks Emails (8/1/2025)
  • Incident 1214 - Donald Trump Reportedly Posts Purported AI-Modified Video of Chuck Schumer and Hakeem Jeffries During U.S. Government Shutdown Talks (9/29/2025)
  • Incident 1215 - Gaggle Alert Reportedly Leads to Arrest of 15-Year-Old in Volusia County, Florida, for School Threat the Student Claimed Was Not Serious (9/12/2025)
  • Incident 1216 - ChatGPT Reportedly Misleads Users About Soundslice Features, Allegedly Prompting Unplanned Product Development (7/7/2025)
  • Incident 1217 - Purportedly AI-Cloned Voice of Daughter Used in Elaborate Bond Scam Targeting Retired Couple in Hillsborough County, Florida (7/19/2025)
  • Incident 1218 - Microsoft 365 Copilot Vulnerability Allegedly Allowed File Access Without Audit Log (7/4/2025)
  • Incident 1219 - Meta Platforms Users Report Being Wrongfully Locked Out After Purported AI Moderation Flags Accounts for Child Exploitation Content (7/2/2025)
  • Incident 1220 - LAMEHUG Malware Reportedly Integrates Large Language Model for Real-Time Command Generation in a Purported APT28-Linked Cyberattack (7/10/2025)
  • Incident 1221 - Alleged AI-Enabled PRISONBREAK Influence Operation on X Reportedly Synchronizes Deepfake of Evin Prison Strike with Ongoing Attacks in Tehran (6/23/2025)
  • Incident 1222 - CFPB Reportedly Finds Hello Digit's Automated Savings Algorithm Caused Overdrafts and Orders Redress with $2.7M Penalty (8/10/2022)
  • Incident 1223 - Purportedly AI-Generated Deepfake Ads on Facebook Reportedly Impersonate Trump, Musk, Ocasio-Cortez, Warren, Sanders, and Leavitt to Promote Fraudulent Rebates (10/1/2025)
  • Incident 1224 - Purportedly AI-Generated Deepfake Ads on Instagram Impersonate Gisele Bündchen and Other Celebrities in Brazilian Fraud Scheme (10/1/2025)
  • Incident 1225 - Purportedly AI-Generated 'Home Invasion Prank' Images Reportedly Circulate in Ireland, Causing Panic and False Emergency Calls (10/3/2025)
  • Incident 1226 - Old Mutual Reportedly Warns of Purported Deepfake Videos Impersonating Chairman Trevor Manuel in Investment Scams (10/3/2025)
  • Incident 1227 - New Zealand Financial Markets Authority (FMA), Te Mana Tātai Hokohoko, Reportedly Flags Purported Deepfake Pump-and-Dump Network Using Social Media Ads (8/19/2025)
  • Incident 1228 - Alleged ChatGPT Misuse by Contractor Leads to Reported Data Exposure in New South Wales Resilient Homes Program (3/12/2025)
  • Incident 1229 - Gold Coast Man Reportedly Ordered to Pay $343,500 After Posting Purported Deepfake Pornographic Images of Australian Public Figures (9/26/2025)
  • Incident 1230 - Suspect in Palisades Fire Allegedly Consulted ChatGPT for Arson Tips and Legal Advice Before Blaze That Killed 12 and Destroyed 6,837 Structures (7/11/2024)
  • Incident 1231 - Purported AI-Generated Deepfake Video Reportedly Depicts Senator Chuck Schumer Endorsing Government Shutdown in NRSC Campaign Ad (10/17/2025)
  • Incident 1232 - Reportedly Fatal Xiaomi SU7 Ultra Crash in Chengdu Purportedly Involves Automated Driving Failure and Door Lock Malfunction (10/12/2025)
  • Incident 1233 - Purported Deepfake Video Allegedly Shows Conservative MP George Freeman Leaving Party for Reform UK (10/18/2025)
  • Incident 1234 - Purported AI-Generated Explicit Deepfakes of Sydney High School Students Reportedly Circulated Online (10/15/2025)
  • Incident 1235 - Chinese-Backed Operation Reportedly Used AI-Generated Deepfake Videos of Indian Stock Experts in Investment Fraud Campaign (7/1/2025)
  • Incident 1236 - Quantum AI Scam Reportedly Used AI-Generated Celebrity Endorsements and Spoofed Media Sites to Solicit Investments (1/1/2020)
  • Incident 1237 - Alleged Deepfake Video of Anthony Albanese Promotes Fake AUFIRST 'Tax Dividend' Trading Platform (8/4/2025)
  • Incident 1238 - OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions (10/10/2025)
  • Incident 1239 - Purported AI-Generated Deepfake of Steven Bartlett Reportedly Used to Promote Fake WhatsApp Investment Group (4/23/2025)
  • Incident 1240 - Purported AI-Generated Deepfake of Infosys Co-Founder N. R. Narayana Murthy Used in Investment Scam Allegedly Defrauding 79-Year-Old Bengaluru Woman of ₹35 Lakh (~$40,000) (6/27/2025)
  • Incident 1241 - Purported AI-Generated Video Reportedly Used in RM5,800 (~$1,400) Sextortion Attempt Targeting Malaysian Minor via Telegram (10/17/2025)
  • Incident 1242 - Purported AI-Generated Deepfake Videos Reportedly Used in Swedish Scam Campaign Impersonating Doctors Agnes Wold and Anders Tegnell (6/9/2025)
  • Incident 1243 - AWS Outage Reportedly Caused AI-Enabled Eight Sleep Smart Beds to Overheat and Malfunction (10/20/2025)
  • Incident 1244 - Purportedly AI-Generated 'King Trump' Fighter Jet Video Allegedly Posted by President Depicts Defecation Attack on 'No Kings' Protesters (10/19/2025)
  • Incident 1245 - Norwegian Student Reportedly Used AI-Generated Deepfake Videos in Spanish Coursework at University of South-Eastern Norway (8/2/2024)
  • Incident 1246 - Purportedly AI-Generated Deepfake Reportedly Used to Impersonate DNB Bank CFO and CEO in Live Teams Meeting (1/21/2025)
  • Incident 1247 - Meta AI Reportedly Generated Purportedly False Claims Linking Activist Robby Starbuck to January 6th Riot, Prompting Defamation Lawsuit (4/28/2025)
  • Incident 1248 - Google's Bard, Gemini, and Gemma AI Systems Allegedly Generated Defamatory Claims About Activist Robby Starbuck, Prompting Lawsuit (10/22/2025)
  • Incident 1249 - Virginia Candidate John Reid Reportedly Used AI-Generated Deepfake of Opponent Ghazala Hashmi in Simulated Political Debate (10/21/2025)
  • Incident 1250 - Alleged False Positive by Omnilert AI Gun Detection System Prompts Police Search at Baltimore County High School (10/20/2025)
  • Incident 1251 - Purportedly AI-Generated Hunting Regulation Errors Reportedly Lead to Idaho Citation and Multi-State Warnings from Wildlife Agencies (10/15/2025)
  • Incident 1252 - Judges in New Jersey and Mississippi Admit AI Tools Produced Erroneous Federal Court Filings (6/30/2025)
  • Incident 1253 - Large-Scale Mental Health Crises Allegedly Associated with ChatGPT Interactions (10/27/2025)

👇 Diving Deeper

  • Check out the Table View and List View for different ways to see and sort all incidents.
  • Explore clusters of similar incidents in the Spatial Visualization.
  • Learn about alleged developers, deployers, and harmed parties on the Entities Page.

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.
