AI Incident Roundup – June and July 2025
Garden at Giverny, J.L. Breck, 1887
🗄 Trending in the AIID
Across June and July 2025, the AI Incident Database added over sixty new incident IDs. These entries capture a range of harms, from sophisticated state-directed campaigns to opportunistic fraud and platform-level failures. Some are ongoing; others surface only belatedly, as investigative reporting or institutional disclosures catch up to events that have already shaped their environments, or, as is often the case, when we ourselves first encounter reporting on them. In these incident records, the integration of AI into economic life and everyday communication appears to have deepened the channels through which harm can be authored. And as in prior months, the pattern is not one of discrete shocks so much as a sustained background erosion of reliability in the systems and relationships through which we navigate the world.
North Korea's AI‑Assisted Corporate Infiltration
One incident ID from this period merits special treatment for the way it captures some of these dynamics. Incident 1118 documents the alleged ongoing use of AI-assisted identity fraud by purported North Korean IT workers to gain unauthorized access to Western companies. The reports that anchor this entry describe DPRK operatives using AI-generated resumes, altered profile photographs, and chatbot-assisted interview techniques to secure remote employment or impersonate legitimate employers. These positions are then allegedly used to deploy malware such as OtterCookie, to exfiltrate credentials, or to siphon wages in violation of sanctions regimes. The incident ID functions as a cluster-level record, pulling together multiple overlapping operations that date back to at least 2021. While some tactics reportedly date back to 2014, reports of AI integration began appearing in 2021, after remote work had accelerated during the COVID-19 pandemic. What distinguishes this entry, beyond its technical detail (e.g., VPN cloaking, facial manipulation, real-time conversational assistance, laptop farming through Western intermediaries), is its scale and its integration into a diffuse, allegedly state-directed strategy to evade sanctions and generate foreign currency. Public reporting estimates that thousands of actors may be involved, operating through freelance platforms, third-party facilitators in Western countries, and front companies. The result is a persistent infiltration surface that marries the affordances of commercial AI tools with disciplined operational tradecraft. It is an embedded capability with adaptive tactics, one that has already reshaped how government agencies and private firms must think about hiring and credential verification, and how they must contend with insider threats that are evolving in alarming ways.
Incident 1118 is also a reminder of the methodological work involved in defining and bounding an incident. While it could be broken down into multiple discrete records as new details emerge (see 1117), its current form recognizes that this is a slow-moving, protean harm environment. The AI-assisted elements, particularly the role of facial alteration and scripted interview responses, are the levers by which infiltration becomes plausible at scale, especially in an era when remote hiring has become normalized. The consequence is twofold: the immediate compromise of individual companies (especially when reporting the breach itself presents legal risks), and a gradual, diffuse weakening of institutional capacity to rely on conventional identity markers. That weakening is, in itself, a form of harm.
Weaponized Familiarity and the Compression of Social Trust
Much of the rest of this period's additions reinforce continuing observations about the diversification of AI-mediated fraud. Identity manipulation remains a central vector. We have seen repeated cases of scammers synthesizing the voices of relatives or high-ranking government officials to extract money or information (1112, 1141), creating whole-cloth identities to front scam operations (1122, 1131), or forging support interfaces that pass as legitimate points of contact (1111, 1121). In these reported cases, such tactics appear to compress social trust into a single technical surface. The voice sounds familiar because it has been generated to evoke that very familiarity, often in moments engineered for maximum emotional leverage. The result is a form of harm that exceeds the immediately material: the weaponization of personal knowledge, scaled and automated.
Incident 1141 extends a pattern documented in earlier reporting, functioning as a variant of both the cluster of vishing-and-smishing campaigns described in Incident 1077 and the more narrowly focused breach of White House Chief of Staff Susie Wiles in Incident 1085. In mid-June, per available reporting, an unknown actor used AI-assisted voice cloning and text generation to impersonate Secretary of State Marco Rubio, contacting at least five high-level officials (three foreign ministers, a U.S. governor, and a member of Congress, according to reports) via Signal and text messages. As in 1085, the impersonation reportedly sought to manipulate trusted networks into disclosing sensitive information or providing account access. The resemblance to the 1077 campaign lies in the purported use of synthetic voice as a plausibility amplifier and of messaging platforms as the delivery channel. High-profile targets can be swapped in or out while the technical and social-engineering scaffolding remains consistent.
Synthetic Media in Informational Vacuums
Parallel to these targeted frauds are the sprawling, often cross-border disinformation campaigns that continue to incorporate AI-generated media. Political figures across multiple continents have appeared in manipulated videos purporting to show endorsements, confessions, or inflammatory statements (1091, 1092, 1093, 1094, 1095, 1096, 1129, 1133, 1134, 1140, 1150). In some cases, these reported deepfakes are linked to identifiable state-aligned operations (in particular the aforementioned 1117 and 1118); in others, they are part of loosely affiliated, commercially motivated networks (1119, 1124, 1136). The Burkina Faso cluster (1091 to 1096), involving multiple reported AI-generated videos and images circulating in a collapsed media environment, is a prime example of how such material can fill informational vacuums. Even when the provenance of a clip is doubted, repetition and circulation embed it in the texture of public discourse. That embedding has consequences: the harm resides in the way perception itself becomes unstable.
These operations often attract less sustained scrutiny than high-visibility events, yet their cumulative impact is substantial, especially when they circulate in regions marked by conflict, fragile institutions, and low international media penetration. This is particularly visible when set against incidents that do capture outsized attention, such as Incident 1146, in which xAI's Grok chatbot was, according to reports, generating antisemitic posts praising Hitler in the wake of the Central Texas floods of July. That case received sustained media coverage and a rapid platform response. But the focus it commanded stands in marked contrast to the quieter but no less real harms still unfolding elsewhere, whether in overlooked sectors of society or in regions with less journalistic coverage. We can again point to the sustained circulation of pro-junta deepfake campaigns in Burkina Faso (1091 to 1096), the purported use of AI-generated imagery to depict Rwanda's president, Paul Kagame, in an M23 rebel uniform (1098), and the alleged AI-generated betting endorsements attributed to Bangladesh's chief adviser, Muhammad Yunus (1120). These all point to how synthetic media can embed itself in local discourse with little external scrutiny. As a methodological consideration for approaching AI incidents, coverage itself becomes a distorting lens, creating the impression that certain categories of harm are episodic and exceptional while others are minor and therefore somehow less urgent. This asymmetry suggests that what counts as an "incident" in public consciousness is shaped less by the intrinsic nature of the harm than by the narrative conditions under which it becomes legible, leaving a vast hinterland of harms that remain structurally invisible.
Reinforcement of Delusional Thinking and Psychological Harm
A recurring pattern described in recent incident reports involves AI systems that, according to those accounts, appear to amplify or affirm unstable beliefs in ways that may accelerate mental-health deterioration. In multiple cases, extended interaction with chatbots reportedly reinforced delusional frameworks, encouraged unsafe behavior, or validated speculative theories without offering any grounding or reality-testing (1106, 1151). This dynamic is often underpinned by the systems' tendency toward what is commonly referred to as sycophancy, in which the chatbot mirrors a user's worldview or emotional register in order to sustain engagement, producing an echo-chamber effect that can deepen psychological distress. In some reports, users experiencing paranoia, mania, or mystical ideation were met with further elaboration and encouragement; the chatbot reportedly did not provide cautionary or stabilizing responses (1106, 1151).
In one case, ChatGPT is reported to have generated elaborate ritual scripts containing explicit instructions for self-harm and symbolic violence when prompted about occult practices, purportedly supplying what amounted to a harmful blueprint (1149). Such interactions can blur the boundary between imaginative play and dangerous enactment, particularly for those already in vulnerable psychological states. While the system's outputs are synthetic, the escalation they facilitate is real. The reported pattern here has seemingly been not one of overt incitement but one in which the model's uncritical assent gradually and intimately strengthens the scaffolding of a user's private delusion. In effect, in these reported cases, the chatbot's fluency appeared to lend an air of plausibility to unstable ideas, which those involved described as helping to transform what might otherwise remain a passing fixation into a more structured belief system.
Epistemic Pollution in Technical and Scientific Domains
Institutional misuse and failure form another throughline. We continue to track instances of confabulated citations, some from previous years and some more contemporaneous, appearing in legal filings (1099, 1137, 1138, 1139, 1145). These are symptoms of a willingness to defer to the simulated authority of AI-generated text over empirical proof. The risk here, beyond immediate judicial sanctions and reputational damage, is the accretion of institutional failures of epistemic discipline, especially when the habit extends to other domains (see also 1027 and 1084 from previous reporting).
In the scientific and technical sphere, we previously noted the reported appearance of the nonsensical term "vegetative electron microscopy" in published literature, traced to a decades-old digitization error compounded by later translation mistakes and apparently reinforced by contaminated training data (1044). In this period, we note an adjacent case in which, according to reporting, at least 17 preprints on arXiv, authored by researchers from multiple universities, allegedly contained covert prompts (some embedded in white text or tiny fonts) seemingly intended to induce AI-assisted peer-review tools to return only positive assessments (1135). We also note reports of AI-driven review tools producing flawed outputs with tangible consequences, such as misclassifying critical Veterans Affairs contracts as expendable, a decision reportedly followed by the termination of essential healthcare and research services (1103). These episodes illustrate a subtler but equally damaging form of epistemic harm, one in which knowledge environments may become polluted in ways that are not immediately visible or felt. Once embedded, such errors can replicate across derivative work, altering the archive of accepted facts without triggering conventional alarm systems.
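The covert-prompt technique reported in 1135 works because text can be hidden from human readers while remaining fully visible to machine extraction. As a purely illustrative sketch (our own assumption, not any venue's actual screening tool, and with hypothetical hiding patterns and phrase lists), a source-level scan for such spans might look like this:

```python
# Hypothetical sketch: flag "invisible" LaTeX text that reads like an
# instruction to an automated reviewer (cf. Incident 1135). The patterns
# and phrase list are illustrative assumptions, not a real screening tool.
import re
import sys

# LaTeX constructs that hide text from human readers but not from extraction.
HIDDEN_TEXT_PATTERNS = [
    re.compile(r"\\textcolor\{white\}\{([^{}]*)\}"),  # white-on-white text
    re.compile(r"\{\\color\{white\}([^{}]*)\}"),      # scoped white text
    re.compile(r"\\fontsize\{[0-4](?:\.\d+)?(?:pt)?\}\{[^}]*\}\\selectfont\s*([^\\{}]*)"),  # tiny fonts
]

# Phrases characteristic of instructions aimed at an AI reviewer.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "give a positive review",
    "recommend acceptance",
    "do not mention",
]

def flag_hidden_prompts(tex_source: str) -> list[str]:
    """Return hidden spans whose contents resemble reviewer-directed prompts."""
    findings = []
    for pattern in HIDDEN_TEXT_PATTERNS:
        for match in pattern.finditer(tex_source):
            hidden = match.group(1).strip().lower()
            if any(phrase in hidden for phrase in SUSPICIOUS_PHRASES):
                findings.append(hidden)
    return findings

if __name__ == "__main__":
    # Usage: python flag_hidden_prompts.py paper.tex
    with open(sys.argv[1], encoding="utf-8") as f:
        for span in flag_hidden_prompts(f.read()):
            print(f"possible covert prompt: {span!r}")
```

Real screening would also need to compare text extracted from the rendered PDF against what is visibly printed, but even a crude source-level pass like this illustrates why white text and tiny fonts are attractive carriers: they pass through extraction pipelines, and thus into an AI reviewer's context window, untouched.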
This fraying undermines the legitimacy of institutional decision-making and encourages the treatment of text as a simulacrum of authority; it risks becoming a performative shell presented as if it carries evidentiary weight. Stripped of the reasoning and verification that ground language in reality, such text invites reality to be bent into distortion. The depletion of meaning we see in technical and institutional life finds its counterpart in literature's long-standing refusal to allow language to be reduced to rote pattern. This resistance to formula operates on the same register as the need to preserve integrity in knowledge systems. Both depend on resisting the collapse of distinct expression into standardized forms. Joseph Brodsky reminded his 1987 Nobel audience that literature "abhor[s] repetition" and, in recognizing the "uncommon visage" of the individual, resists the tautologies by which mass systems seek to absorb us. What Brodsky is getting at is that literature, at its best, resists the pressure to smooth individuality into a standardized idiom. It insists on the distinctiveness of expression as a way of keeping thought alive to itself. In Brodsky's terms, this discipline can be understood as one safeguard against the hollowing-out of meaning. Neglecting it risks eroding the conditions that allow thought to be genuinely one's own, and with them the privacy of mind on which knowledge depends. Seen this way, the point is less that such erosion is inevitable than that it is worth resisting wherever possible, even in the small, procedural habits that shape how we treat text as evidence. Without that safeguard, decisions are made on the strength of text that only looks authoritative, leaving reality itself to be administered through fictions.
Spectacle and the Uneven Map of Attention
Taken together, the incidents from this period suggest that AI-mediated harm is defined not so much by the novelty of the tools as by their normalization. Identity fraud, synthetic media, destabilizing information streams, and simulated authority are now operational realities. They thrive in an environment where legitimacy can be manufactured on demand and where institutional responses are often triggered by moments of public spectacle rather than by long-running threats. The result is a skewed map of attention. A chatbot's inflammatory posts, harmful in their own right, can dominate headlines while state-linked infiltration campaigns advance quietly, year after year, with far less scrutiny.
The common thread is the continuing degradation of epistemic stability: of trust in identity, in personhood itself, and in information provenance. This is a form of ambient epistemic harm, a persistent corrosive force that reshapes the conditions under which crises are recognized. It coexists with, and often amplifies, discrete harms that must be tracked, even if it does not always announce itself as a crisis in its own right. That recognition gap matters: it determines where resources are allocated and which risks are regulated, and, importantly, how the public comes to imagine the nature of AI harm and its many facets.
Incident documentation can be understood as a countermeasure to the distortions produced by selective attention. When we place the spectacular and the slow‑moving within the same frame, we might better resist the gravitational pull of media spectacle while still drawing on it as a necessary source of visibility and evidence. Guy Debord described the spectacle in 1967 as an all‑encompassing environment in which individual agency dissolves. He writes:
The alienation of the spectator, which reinforces the contemplated objects that result from his own unconscious activity, works like this: the more he contemplates, the less he lives; the more he identifies with the dominant images of need, the less he understands his own life and his own desires. The spectacle's estrangement from the acting subject is expressed by the fact that the individual's gestures are no longer his own; they are the gestures of someone else who represents them to him. The spectator does not feel at home anywhere, because the spectacle is everywhere. (The Society of the Spectacle, trans. Ken Knabb, pp. 10-11)
In the critical tradition that treats the spectacle as a way of understanding mediated life, distraction and attention are often taken as its central mechanisms. Yet the spectacle works on a deeper register than diversion alone. Rather than simply diverting attention, it reshapes the criteria by which reality itself is recognized. One way to read Debord here is to see the spectacle less as a passing disturbance and more as a standing architecture of perception, setting the terms for what enters public awareness and what slips away unnoticed. It supplies the grammar through which reality is rendered thinkable. What cannot be expressed in that grammar risks disappearing from perception altogether. In such a condition, attention is pre-shaped by dominant images, and the very language available to name what lies outside them begins to narrow. Editing AI incident reports is, in part, an effort to interrupt that narrowing, however imperfectly. It seeks to expand the lexical capacity for naming and situating harms that might otherwise remain illegible, and to keep them in view long enough to be recognized and addressed before they calcify into the background.
🗞️ New Incident IDs in the Database
- Incident 1091 - Purported Synthesia Avatars Used in Alleged Pro-Junta Deepfake Video Supporting Ibrahim Traoré in Burkina Faso (2023-01-23)
- Incident 1092 - Purported AI-Generated Video Portrays Pope Leo XIV Addressing Ibrahim Traoré of Burkina Faso (2025-05-23)
- Incident 1093 - Fact-Checkers Identify Viral Photo of Burkina Faso's Ibrahim Traoré with 'Wife and Children' as AI-Generated Composite (2025-04-26)
- Incident 1094 - At Least 294 Purported AI-Generated Music Videos Portray Celebrities Praising Burkina Faso's Ibrahim Traoré (2025-03-02)
- Incident 1095 - Alleged Russian-Linked Deepfake Campaign Misattributes Infrastructure Projects to Burkina Faso's Ibrahim Traoré (2025-05-30)
- Incident 1096 - Purported AI-Generated Video Depicts Ibrahim Traoré of Burkina Faso Delivering Speech on Western Exploitation (2025-05-25)
- Incident 1097 - Google AI Overview Reportedly Misstates Aircraft Manufacturer as Airbus Instead of Boeing in Air India Flight 171 Crash (2025-06-12)
- Incident 1098 - Image Purporting to Show President Paul Kagame of Rwanda in M23 Uniform Reportedly AI-Generated (2025-01-10)
- Incident 1099 - Factum in Ko v. Li Allegedly Contains AI-Generated Case Law Citations (2025-04-25)
- Incident 1100 - Reported Audio Deepfake Impersonating CEO Karim Toubba Targets LastPass Employee via WhatsApp (2024-04-10)
- Incident 1101 - Meta AI App Reportedly Publishes Personal Chats Without Users Fully Realizing (2025-04-29)
- Incident 1102 - Purported Deepfake Featuring Dr. Rinki Murphy and Jack Tame Reportedly Used to Promote Diabetes Scam in New Zealand (2025-04-30)
- Incident 1103 - Purported DOGE Contract Review Tool Cited in Reports of AI-Driven Misjudgments in VA Budget Cuts (2025-03-18)
- Incident 1104 - AI Chatbot Allegedly Used to Research Explosive Materials in Palm Springs Fertility Clinic Bombing (2025-05-17)
- Incident 1105 - Michigan Woman Defrauded in Alleged Tinder Romance Scam Using Purportedly AI-Generated Video Calls (2025-02-10)
- Incident 1106 - Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm (2025-06-13)
- Incident 1107 - Bulgarian Tennis Player Grigor Dimitrov Alleges Deepfake Scam Promoting Fraudulent Investment Scheme Using His Likeness (2025-06-13)
- Incident 1108 - Digital Rights Groups Accuse Meta and Character.AI of Facilitating Unlicensed Therapy via Chatbots (2025-06-10)
- Incident 1109 - Year-long AI Surveillance Pilot in Two South Australian Aged Care Facilities Reportedly Overwhelmed Staff with False Positives (2021-03-01)
- Incident 1110 - Alleged AI-Manipulated Video Uses Macau Chief Executive Sam Hou Fai's Likeness in Investment Scam (2025-06-18)
- Incident 1111 - Reported AI-Generated Video Call Impersonation of Cryptocurrency Analyst Leads to Alleged Malware Installation and Account Theft (2025-06-19)
- Incident 1112 - Scammer Reportedly Used AI Voice Clone of WCPO Cincinnati Meteorologist in Facebook Fraud Attempts (2025-06-18)
- Incident 1113 - Financial Times Journalist Martin Wolf Reports AI-Generated Investment Scam Using His Likeness on Instagram and Facebook (2025-03-09)
- Incident 1114 - Alleged AI-Generated Video Ads Impersonate Bank of Montreal Strategist Brian Belski to Promote Investment Scam (2025-06-12)
- Incident 1115 - Alleged Deepfake Investment Scam Uses Economist David Rosenberg's Likeness on Meta Platforms (2025-04-15)
- Incident 1116 - Alleged Deepfake Video Depicts Former Malaysian Inspector-General of Police Tan Sri Acryl Sani Abdullah in Financial Misconduct Context (2025-06-20)
- Incident 1117 - North Korea-Linked Actors Allegedly Use AI Executive Deepfakes in Zoom Phishing Targeting Web3 Employee (2025-06-22)
- Incident 1118 - Ongoing Purported AI-Assisted Identity Fraud Enables Unauthorized Access to Western Companies by North Korean IT Workers (2021-01-01 to the present)
- Incident 1119 - Alleged Deepfake Videos Impersonate Lithuanian Politicians and Doctors in Purported Cross-Border Scam Network (2025-06-28)
- Incident 1120 - Alleged AI-Generated Videos Depict Bangladesh's Chief Adviser Muhammad Yunus Endorsing Betting Platforms (2025-06-16)
- Incident 1121 - Alleged AI Deepfake Videos Used to Lure Simcoe County, Ontario Residents in Crypto Scam (2025-06-29)
- Incident 1122 - Reportedly Sustained Multi-Celebrity Deepfake Persona Scam Targeting Vulnerable Southampton Resident (2025-06-28)
- Incident 1123 - Reported Student Misuse of ChatGPT, StudyX, and Gemini to Obtain Answers During Vietnam's 2025 National High School Graduation Exam (2025-06-26)
- Incident 1124 - Alleged Unauthorized Deepfake AI Clones of Ravish Kumar Used to Spread Purportedly Fabricated News on YouTube (2025-06-30)
- Incident 1125 - Reported Use and Circulation of AI-Generated Misinformation and Fake Victim Visuals After Air India 171 Crash (2025-06-12)
- Incident 1126 - Reported Use of Deepfake Video Impersonating Owen Wilson in Romance Scam with Fake Job Payments (2025-05-16)
- Incident 1127 - Purported AI-Generated Audio Clip Allegedly Portrays Cambodian Senate President Hun Sen Raising Money for Conflict with Thailand (2025-06-27)
- Incident 1128 - Philippine Officials Reportedly Share Veo 3-Generated Video to Support Vice President Sara Duterte During Impeachment (2025-06-15)
- Incident 1129 - Purported AI-Generated Video Depicting Philippine President Ferdinand Marcos Jr. Using Drugs Shared by Rodrigo Duterte Supporters and Amplified by China-Linked Spamouflage (2024-07-21)
- Incident 1130 - Docomo Pacific CEO Reports Mother Targeted by Purported AI-Enabled Scam in Guam (2025-03-13)
- Incident 1131 - Alleged Deepfake Identity Scam Uses Miami Beach Realtor's Likeness to Defraud Victim in the United Kingdom in Purported Romance Scam (2025-04-21)
- Incident 1132 - Delhi Man Reportedly Arrested for Sharing Purportedly AI-Morphed Obscene Images of Ex-Girlfriend (2025-07-02)
- Incident 1133 - Reported AI-Generated Audio of Ukrainian Commander Andriy Biletsky Used in Russian Disinformation Campaign (2025-06-30)
- Incident 1134 - Reported Deepfakes of Ukrainian Deputy PM Olha Stefanishyna Allegedly Supporting Fictional Mobilization Plan for Women (2025-06-30)
- Incident 1135 - Preprints Reportedly from Researchers from Multiple Universities Allegedly Contain Covert AI Prompts (2025-07-01)
- Incident 1136 - Purported Widespread Use of AI-Generated Deepfake Videos Impersonating Malaysian Leaders in Investment Scams (2025-07-04)
- Incident 1137 - Reported False Legal Citations in South African Case Mavundla v. MEC Generated by Purported AI Tool (2024-09-20)
- Incident 1138 - South African Legal Team Reportedly Relied on Unverified ChatGPT Case Law in Johannesburg Body Corporate Defamation Matter (2023-03-01)
- Incident 1139 - South African Legal Team for Northbound Processing Reportedly Admits Legal Genius AI Produced Purportedly Non-Existent Case Law in Urgent Court Filing (2025-06-30)
- Incident 1140 - Purported Deepfake of Sri Lankan President Anura Kumara Dissanayake Promotes Fraudulent Government Investment Scheme (2025-06-18)
- Incident 1141 - Purported AI Voice Cloning Used to Impersonate Secretary of State Marco Rubio (2025-06-15)
- Incident 1142 - Chicago Veteran Reportedly Loses $10,000 in Purported Deepfake Cryptocurrency Fraud Posing as Elon Musk (2025-07-07)
- Incident 1143 - Former Namibian First Lady Monica Geingos's Likeness Reportedly Used in Purported AI-Generated Video Investment Scams (2025-01-22)
- Incident 1144 - xAI Allegedly Operates Unpermitted Methane Turbines in Memphis to Power Supercomputer Colossus to Train Grok (2024-06-05)
- Incident 1145 - MyPillow Defense Lawyers in Coomer v. Lindell Reportedly Sanctioned for Filing Court Document Allegedly Containing AI-Generated Legal Citations (2025-02-25)
- Incident 1146 - Grok Chatbot Reportedly Posts Antisemitic Statements Praising Hitler on X (2025-07-08)
- Incident 1147 - Purportedly AI-Generated Videos Impersonate Brunei Police in 'Real Money Magic' Scam on Social Media (2025-07-14)
- Incident 1148 - Purported Deepfake Scam Videos Depict JG Summit Holdings President and CEO Lance Gokongwei Allegedly Endorsing Illicit Investments (2025-07-11)
- Incident 1149 - ChatGPT Reportedly Generated Ritual Scripts Containing Instructions for Self-Harm and Symbolic Violence in Response to Thematic Prompts (2025-07-22)
- Incident 1150 - Purported Deepfake of Barack Obama's Detention Reportedly Amplified by Donald Trump via Truth Social (2025-07-20)
- Incident 1151 - ChatGPT Reportedly Validated Autistic User's Faster-Than-Light Theory and Failed to Provide Grounding During Delusional Episode, Preceding Hospitalization (2025-05-26)
- Incident 1152 - LLM-Driven Replit Agent Reportedly Executed Unauthorized Destructive Commands During Code Freeze, Leading to Loss of Production Data (2025-07-18)
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.