AI Incident Roundup – June 2024
Our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
June was a busy month for AI incident reporting, with numerous new harms documented and continuing efforts to address a backlog of past incidents. In total, this month saw the addition of 37 new incident IDs across a variety of broad categories, including 9 related to disinformation and deepfakes, 7 to algorithmic bias and failures, 5 to privacy violations and unauthorized use, 5 to fraud and scams, 8 to safety and reliability concerns, and 3 to education and academia. Naturally, these counts represent only a snapshot of a far greater and likely unknowable total, capturing just the portion that was reported in the media.
Three education-related incidents arose over the past month, suggesting increasingly creative but detrimental uses of generative AI in academic settings beyond the usual reporting. Early in June, a Turkish student in Isparta was arrested for allegedly using AI to cheat on the state university entrance exam (Incident 705). The student reportedly used a router, a mobile phone, an earphone, and a button-shaped camera to transmit exam questions to ChatGPT and receive answers in real time while sitting for the exam. Reports have also surfaced of fake AI-generated students enrolling in online college classes (Incident 721), presumably to fraudulently extract financial aid; these schemes further raise concerns about the security and authenticity of educational environments. Another incident we have been tracking involves AI-generated papers manipulating Scopus rankings in top philosophy journals (Incident 724). Three reportedly fake journals gamed the rankings by extensively cross-citing one another and publishing AI-generated papers filled with buzzwords. These journals, placed in the top 10 of Scopus's 2023 CiteScore philosophy list, featured fake authors, affiliations, and grant numbers. By pushing legitimate journals to lower tiers, the fake journals harmed honest academic evaluations and awards, in principle if not yet in demonstrated effect.
June also saw advances in scam tactics as well as failures in fraud-prevention technology. Scammers are using AI tools to create convincing fraud schemes that are harder to detect: AI-generated messages and fake identities bypass traditional scam indicators. Reported incidents include impersonations of senior executives and job scams, both entailing financial losses and identity theft. Banks are adopting AI to combat these scams, but the sophistication of AI-driven fraud continues to pose significant challenges (Incident 735), including ongoing efforts by cartels to expand their operations into AI-fueled fraud (Incident 725). One incident from the end of June involved scammers generating a deepfake of Hong Kong entertainer Andy Lau, which was used to steal NT$2.64 million (about $81,000) from a fan in Taiwan (Incident 739). On a larger scale, a tool meant to prevent fraud failed collectively: in the United Kingdom, an algorithm used by the Department for Work and Pensions (DWP) was found to have wrongly flagged 200,000 individuals for housing benefit fraud (Incident 738). The algorithm marked those individuals as high risk, triggering unnecessary, costly investigations. Two-thirds of the flagged claims were legitimate, wasting public funds and resources and causing significant stress for claimants.
One particularly egregious incident, Incident 716, received further updates a few years after the initial known harm event in 2021. It pertains to algorithmic staffing failures linked to resident deaths at a prominent senior care chain. The company's algorithm-based staffing system reportedly left facilities understaffed, leading to numerous critical cases of neglect, injuries, and deaths. These failures have prompted lawsuits and raised serious concerns about the reliability and ethical implications of using AI in critical healthcare settings.
We have also seen incidents involving automated transcription software. One failure involved hallucinations of violent imagery, as documented by Cornell researchers (Incident 732). In another, a faulty AI transcription system used by the Italian judiciary threatened the integrity of a major bribery probe in Genoa by mixing up "illicit" and "licit" (Incident 708). Subtle but consequential errors of this kind expose critical vulnerabilities and underscore the continued need for human review and oversight.
Some other situations worth mentioning: In addition to three new incident IDs for autonomous vehicle-related reports (Incidents 707, 711, and 726), there were newly reported revelations about auto insurers allegedly collecting and scoring driver data using AI without disclosure (Incident 733). On the information integrity front, leading AI models were found to mimic Russian disinformation while also citing fake Moscow news sites (Incident 734). An audit by NewsGuard revealed that leading chatbots repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (see Incident 701 from last month). The audit tested 570 prompts across 10 AI chatbots; the findings demonstrate that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
These are just some examples. See the full list below for all new additions.
🗞️ New Incident IDs in the Database
- Incident 703 (1/13/2024) Deepfake Audio Sparks False Claims of Biden Threatening Texas with F-15s
- Incident 704 (5/23/2024) Study Highlights Persistent Hallucinations in Legal AI Systems
- Incident 705 (6/8/2024) Turkish Student in Isparta Allegedly Uses AI to Cheat on Exam, Leading to Arrest
- Incident 706 (4/1/2024) Scammers Using AI to Impersonate Small Businesses
- Incident 707 (6/13/2024) Tesla Reportedly in Autopilot Mode Hits Parked Police Vehicle in Fullerton, California
- Incident 708 (5/26/2024) Faulty AI Transcription Threatens Integrity of Genoa Bribery Probe
- Incident 709 (5/28/2024) Unrepresented Litigant Misled by ChatGPT-Generated False Legal Citations in Manchester Court
- Incident 710 (4/15/2024) Facebook AI Mislabels Auschwitz Photos as "Bullying" and "Nudity"
- Incident 711 (4/26/2024) NHTSA Opens New Probe into Tesla's Autopilot Following More than a Dozen Fatal Accidents
- Incident 712 (4/26/2024) Meta AI Hallucinates Harassment Allegations Against New York Politicians
- Incident 713 (2/27/2023) Deepfake Video Falsely Depicts Biden Announcing National Draft for Ukraine
- Incident 714 (3/29/2024) Microsoft-Powered New York City Chatbot Advises Illegal Practices
- Incident 715 (3/1/2024) Over 400 AI-Driven Scams Reportedly Led to $8M Loss for Australians in 2023
- Incident 716 (4/21/2021) Algorithmic Staffing Failures Linked to Resident Deaths at Leading Assisted-Living Chain Brookdale
- Incident 717 (3/1/2024) Fake AI-Generated Law Firms Sent Fake DMCA Notices to Increase SEO
- Incident 718 (4/6/2024) OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
- Incident 719 (4/4/2024) Grok AI on X Created and Promoted False Iran Missile Strike News
- Incident 720 (2/27/2023) Deepfake Video Targets Paul Vallas on Eve of Chicago Mayoral Election
- Incident 721 (6/4/2024) Fake AI-Generated Students Are Reportedly Enrolling in Online College Classes
- Incident 722 (4/25/2024) Catholic AI Chatbot 'Father Justin' Claimed to Be a Real Priest, Prompting Retraction
- Incident 723 (5/13/2024) Instagram Algorithms Reportedly Directed Children's Merchandise Ad Campaign to Adult Men and Sex Offenders
- Incident 724 (6/12/2024) AI-Generated Papers Manipulate Scopus Rankings in Top Philosophy Journals
- Incident 725 (3/14/2024) Cartels Reportedly Using AI to Expand Operations into Financial Fraud and Human Trafficking
- Incident 726 (10/2/2023) A Self-Driving Cruise Robot Taxi Reportedly Struck and Dragged a Pedestrian 20 Feet
- Incident 727 (4/1/2024) Synthetic Voice 'Olesya' by Storm-1516 Falsely Accuses Ukraine in U.S. Election Disinformation Campaign
- Incident 728 (5/16/2024) AI Firm Lovo Accused of Illegally Replicating Voice Actors' Voices
- Incident 729 (5/14/2024) GPT-4o's Chinese Tokens Reportedly Compromised by Spam and Pornography Due to Inadequate Filtering
- Incident 730 (4/1/2024) AI Deepfakes for Voter Outreach Flood Indian Elections
- Incident 731 (12/1/2023) Hallucinated Software Packages with Potential Malware Downloaded Thousands of Times by Developers
- Incident 732 (2/12/2024) Whisper Speech-to-Text AI Reportedly Found to Create Violent Hallucinations
- Incident 733 (6/9/2024) Auto Insurers Allegedly Are Surreptitiously Collecting and Scoring Driver Data
- Incident 734 (6/18/2024) Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 735 (6/22/2024) AI Enhances Scammer Tactics Making Detection Harder
- Incident 736 (12/1/2023) Underground Market for LLMs Powers Malware and Phishing Scams
- Incident 737 (4/18/2024) Amandine Le Pen Deepfake Account Misleads Thousands on TikTok
- Incident 738 (6/23/2024) Department for Work and Pensions (DWP) Algorithm Wrongly Flags 200,000 for Housing Benefit Fraud
- Incident 739 (6/27/2024) Scammers Use Deepfake of Hong Kong Entertainer Andy Lau to Steal NT$2.64 Million from Fan
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in the Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.