AI Incident Roundup – July 2024
Our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
July saw 20 new incident IDs added to the AI Incident Database. Broadly speaking, they include 6 cases related to deepfake misuse; 6 failures in information accuracy and reliability; 3 cases related to discrimination, bias, and privacy violations; and 5 safety, legal, and financial risk cases. As with previous months, these incidents represent only a fraction of an unknowable real total, limited both by what gets reported and by what we have been able to edit, especially given the ongoing integration of AI into various sectors and the emerging risks that accompany this expansion. While a number of incidents arose in July itself, some of the new IDs were processed from our long-running backlog.
We have continued documenting some of the reporting on prominent deepfake harms and the unethical use of AI-generated content. In line with significant events in British and American politics, deepfake technology was weaponized in political contexts, targeting figures like British female politicians and U.S. Vice President Kamala Harris in disinformation campaigns (Incidents 754 and 756). In another example of Russian influence operations, a deepfake video falsely suggesting that Olena Zelenska, the First Lady of Ukraine, had purchased a luxury car circulated widely online (Incident 755). The video was reportedly part of a Russian-linked disinformation campaign aimed at undermining Ukraine and its supporters.
Failures in the reliability and accuracy of AI-driven information systems also made headlines last month. Incidents involving AI chatbots during a period of intense political events serve as case studies in the challenges of using AI for real-time news dissemination. Incident 742 details how xAI's Grok model struggled for accuracy during the Trump assassination attempt, reportedly producing misleading headlines and amplifying unverified claims. The related Incident 750 expands on these issues by showing that multiple AI chatbots, not just Grok, faltered in delivering accurate updates, particularly in the chaotic aftermath of the Trump rally shooting and Biden's campaign withdrawal. The current limitations of AI in handling fast-paced, high-stakes news environments suggest that a much more cautious approach to deployment in critical real-time scenarios is required. The overlap between these incidents also suggests that the problems are systemic across AI platforms rather than isolated to a single model.
In another news-related incident, AI-generated obituaries on various websites reportedly compounded the grief of bereaved families by spreading incorrect and unauthorized information about their loved ones (Incident 752). These obituaries, produced without the families' knowledge, often contain errors and appear on ad-filled sites.
Discrimination, bias, and privacy violations remain persistent issues in the deployment of AI technologies as well. A notable case involved the Department for Work and Pensions (DWP) in the United Kingdom, where AI systems were accused of discriminating against single mothers, exacerbating existing inequalities (Incident 740). Privacy concerns were also raised with AI tools like Gemini allegedly accessing Google Drive files without user consent (Incident 743).
In terms of safety, legal, and financial risks, several incidents added to the AIID remind us of the potential dangers of AI in critical systems. A class action lawsuit was filed against Volkswagen over alleged defects in their automatic emergency braking systems, which reportedly led to accidents and injuries (Incident 746). Meanwhile, flaws in VioGén, an algorithm used by the Spanish government to assess domestic violence risk, reportedly resulted in preventable deaths, demonstrating the life-and-death stakes of AI reliability (Incident 747).
These are just some of the notable cases we added in July. The full list of incidents provides a deeper look into the diverse and often troubling ways that AI systems might cause harm, and into how we might, at the very least, mitigate further harms and ideally prevent them from happening.
🗞️ New Incident IDs in the Database
- Incident 740, Department for Work and Pensions (DWP) AI Systems Allegedly Discriminate Against Single Mothers (7/10/2024)
- Incident 741, Robin Williams's Voice Deepfaked Without Consent (10/2/2023)
- Incident 742, Grok AI Model Reportedly Fails to Produce Reliable News in Wake of Trump Assassination Attempt (7/13/2024)
- Incident 743, Gemini AI Allegedly Reads Google Drive Files Without Explicit User Consent (7/16/2024)
- Incident 744, AI Work Assistants Require More Effort Than Expected, CIOs Say (6/25/2024)
- Incident 745, Figma Disables AI Feature After Accusations of Copying Apple’s Weather App (7/2/2024)
- Incident 746, Class Action Lawsuit Over Alleged Defects in Volkswagen's AI-Driven AEB Systems (5/15/2024)
- Incident 747, Fatalities Reportedly Occur Despite VioGén Algorithm's Low or Negligible Risk Scores (7/18/2024)
- Incident 748, Erroneous Declined Transaction Notification by PayPal AI Assistant (6/19/2024)
- Incident 749, Hoodline Accused of Misleadingly Attributing AI-Generated Articles to Human Authors (5/31/2024)
- Incident 750, AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News (7/22/2024)
- Incident 751, SearchGPT Reportedly Misleads Users with Incorrect Festival Dates in Demo (7/25/2024)
- Incident 752, AI-Generated Obituaries Are Reportedly Intensifying Grief for Bereaved Families (7/7/2024)
- Incident 753, BNN Breaking's AI-Driven Errors Reportedly Damage Reputations and Spread Misinformation (6/6/2024)
- Incident 754, British Female Politicians Victimized by Deepfake Pornography (7/1/2024)
- Incident 755, Deepfake Targets Olena Zelenska in Russian Disinformation Campaign (7/3/2024)
- Incident 756, Deepfake of Kamala Harris Saying Damaging Comments Circulates on X and Is Amplified by Elon Musk (7/26/2024)
- Incident 757, OpenAI's ChatGPT Mac App Stored User Data in Unencrypted Text Files (7/1/2024)
- Incident 758, Teen's Overdose Reportedly Linked to Meta's AI Systems Failing to Block Ads for Illegal Drugs (9/11/2023)
- Incident 759, AI-Generated Deepfakes Reportedly Derailed Political Career of Florida Official (2/5/2021)
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in the Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.