AI Incident Roundup – May 2024

Posted 2024-06-07 by Daniel Atherton.

Our month-in-review newsletter recapping new incidents in the AI Incident Database and examining the trends.

🗄 Trending in the AIID

Throughout May, we added many new incident IDs. While keeping up with new and unfolding situations, we also worked through a backlog of discrete harm events from last year and earlier this year. The primary types of harm events we tracked over the past month continue to fall into the deepfakes and disinformation category, including global AI-driven disinformation campaigns. We also recorded incidents involving inaccurate and harmful AI outputs, as well as privacy violations and unauthorized use, a category spanning misidentification, surveillance issues, and criminal activities.

The two incidents that attracted the most media attention in May were Scarlett Johansson's allegation that OpenAI had used her voice without permission for its "Sky" voice (incident 688) and the rollout of Google's AI-powered search engine, which confidently gave false and sometimes harmful advice (incident 693). Other AI incidents worth noting that received less coverage include ISIS sympathizers using deepfake news anchors to disseminate extremist propaganda via their News Harvest program (incident 690) and the REACH VET algorithm used by the U.S. Department of Veterans Affairs (VA) reportedly prioritizing men over women (incident 699). Also along the lines of algorithmic bias, while not a new incident ID, we have added a couple of reports, one from last year and one from April, on NarxCare's risk score model, whose algorithms reportedly contain significant bias (incident 172).

We have seen increasing reporting on the use of generative artificial intelligence tools in Russian disinformation campaigns. Most recently, a deepfake falsely depicted State Department Spokesman Matthew Miller suggesting that the Russian city of Belgorod could be attacked with U.S. weapons (incident 702). Additionally, John Mark Dougan, an American asylum seeker in Russia, reportedly used AI tools and a fake news network to spread disinformation (incident 701).

See the full list below for all new additions.

🗞️ New Incident IDs in the Database

  • Incident 702 (5/31/2024) Disinformation Deepfake Circulates of State Department Spokesman Matthew Miller Suggesting Belgorod Can Be Attacked with U.S. Weapons
  • Incident 701 (5/29/2024) American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
  • Incident 700 (5/20/2024) Meta's AI Chatbots Are Entering Online Support Communities Uninvited
  • Incident 699 (5/23/2024) VA Suicide Prevention Algorithm REACH VET Reportedly Prioritizes Men Over Women Veterans
  • Incident 698 (9/2/2023) Deepfake Video of Ron DeSantis Dropping Out of 2024 Presidential Race Circulating
  • Incident 697 (6/23/2023) Deepfake Image Circulating of Donald Trump with Underage Girl at Jeffrey Epstein's Private Island
  • Incident 696 (2/14/2024) Meta's AI Ad Platform Reportedly Causes Overspending and Poor Performance
  • Incident 695 (5/24/2023) Donald Trump's Presidential Campaign Released Deepfakes Attacking Ron DeSantis
  • Incident 694 (4/25/2023) Republican AI Ad Depicts Dystopian Future After Biden Reelection Announcement
  • Incident 693 (5/14/2024) Google AI Reportedly Delivering Confidently Incorrect and Harmful Information
  • Incident 692 (2/1/2024) London Metropolitan Police's Facial Recognition Technology Reportedly Misidentified Shaun Thompson as Suspect Leading to Arrest
  • Incident 691 (5/25/2024) Facewatch Reported to Have Wrongfully Flagged Home Bargains Customer as Shoplifter
  • Incident 690 (3/26/2024) ISIS Utilizes AI for Propaganda Videos in News Harvest Program
  • Incident 689 (3/26/2024) Holmen, Wisconsin Man Allegedly Used Stable Diffusion to Create and Then Share Sexually Explicit Images Depicting Minors
  • Incident 688 (5/20/2024) Scarlett Johansson Alleges OpenAI's Sky Imitates Her Voice Without Licensing
  • Incident 687 (4/8/2024) Deepfake Porn Sites Use Breeze Liu's Image Without Consent
  • Incident 686 (4/3/2024) Meta AI Image Generator Reportedly Fails to Accurately Represent Interracial Relationships
  • Incident 685 (4/24/2024) The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information
  • Incident 684 (4/4/2024) Google Books Appears to Be Indexing Works Written by AI
  • Incident 683 (3/28/2024) Scammers Using Deepfakes of Women's Faces and Voices for False and Offensive Advertisements
  • Incident 682 (2/1/2024) GOP Pollster Shares AI-Generated Images to Fabricate Appearance of Black Voter Support
  • Incident 681 (7/17/2023) Never Back Down Super PAC for Ron DeSantis Uses AI Donald Trump Voice in Attack Ad Against Kim Reynolds
  • Incident 680 (3/1/2024) Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
  • Incident 679 (2/20/2023) A Deepfake of Senator Elizabeth Warren Circulated Saying Republicans Should Not Vote
  • Incident 678 (4/29/2024) ChatGPT Factual Errors Lead to Filing of Complaint of GDPR Privacy Violation
  • Incident 677 (4/29/2024) ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

👇 Diving Deeper

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.