AI Incident Roundup – August and September 2024
Our review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄️ Trending in the AIID
August and September 2024 saw the addition of 47 new incident IDs. Some of the new inclusions are events from several years ago, while unfolding news cycles drove a significant rise in non-consensual deepfake, political misinformation, and fraud-related incidents. As usual, these IDs represent a snapshot of only a fraction of all actual AI incidents that have occurred, tracking closely with trends in journalism. Dates are specific where possible, but are often approximate and subject to discussion and further research. Below are some examples of AI harm events that we have been tracking.
Political Disinformation and AI’s Role in Public Opinion Manipulation
Although it took place in July, the first political disinformation incident of the August-September roundup is Incident 760 (July 21, 2024), which pertains to false election data related to U.S. Vice President Kamala Harris that circulated via the Grok AI chatbot. Shortly after, Incident 761 (August 8, 2024) reported that TikTok's AI system was used by foreign networks to amplify disinformation during the election, pointing both to the platform's vulnerability to misuse in politically charged environments and to the complicated, collective nature of incident reporting on a social media platform as large as TikTok. Incident 766 (August 18, 2024) saw former President Donald Trump share AI-generated images that falsely suggested pop star Taylor Swift had endorsed him. On the same day, Incident 767 involved AI-generated images of Kamala Harris at the Democratic National Convention (DNC) with communist flags in the background. In early September, Incident 775 (September 2, 2024) detailed how Elon Musk reportedly shared an AI-generated image depicting Kamala Harris dressed as a communist ruler, again reinforcing the trend of high-profile individuals inadvertently amplifying misleading AI-generated content. This pattern of political manipulation appeared again, albeit in a different form, in Incident 805 (September 19, 2024), in which Senator Ben Cardin was targeted in a deepfake Zoom video call by someone posing as the former Ukrainian foreign minister Dmytro Kuleba. Sen. Cardin reportedly ended the call after sensing that something wasn't right, but the episode demonstrated the geopolitical implications of AI-generated content, as deepfakes are increasingly deployed to manipulate international relations as part of diplomatic subterfuge and intelligence operations.
Deepfakes and Non-Consensual Media
Some earlier examples of deepfake harms must be included here; they were incorporated into the database as part of the ongoing process of returning to incidents that predate the AIID. Incident 769 (April 20, 2018) pertains to the investigative journalist Rana Ayyub, who was the victim of an AI-generated deepfake pornography attack. This case was one of the earliest examples of how deepfakes can be maliciously wielded to discredit public figures and professionals. Another notable early case was Incident 771 (February 6, 2020), in which Noelle Martin, an Australian activist, was targeted with non-consensual deepfake pornography. Martin's experience with AI-manipulated media sparked public discourse on the use of deepfakes to exploit and harass individuals, especially women. Her advocacy following the incident brought attention to the growing misuse of AI in creating intimate, harmful content without consent, and she became a key figure in pushing for stronger legal protections against deepfake abuse. Similarly, in Incident 772 (June 8, 2020), actress Kristen Bell was the subject of a deepfake attack, with non-consensual AI-generated pornography circulated online using her likeness. Bell's case reflected how high-profile public figures are increasingly becoming targets of this technology, and it was one of the major stories that raised the visibility of the issue.
This year, Incident 765 (March 14, 2024) tracks stories related to a harm event in which at least 22 students at Richmond-Burton Community High School in Illinois were targeted by deepfake nudes. This case was another example of the impact of deepfakes on minors and the need for stronger protective measures in educational settings. Incident 784 (April 23, 2024) tracks instances in which child predators are reportedly generating deepfake nudes of minors in order to extort them. The convergence of sextortion cases with deepfake incidents is something that must continue to be assiduously monitored and reported. A specific case of this kind of abuse is Incident 780 (August 23, 2024), in which a soldier from Joint Base Elmendorf-Richardson was accused of using AI to generate child pornography. Toward the end of August, Incident 777 (August 28, 2024) began tracking a surge in explicit deepfake pornography in South Korea.
A couple of other deepfake incidents involved the misuse of celebrities' identities, voices, and images. For example, Incident 792 (April 19, 2024) exposed the unauthorized use of AI to replicate the voices of Tupac Shakur and Snoop Dogg in Drake's “Taylor Made Freestyle,” which was viewed as an infringement of intellectual property rights and a misuse of AI to replicate iconic voices without consent. AI deepfake scams also targeted public figures, as seen in Incident 802 (September 13, 2024), where a deepfake of Brian May was used in a scam offering fake Queen backstage tickets. This case was another convergence point in which deepfakes not only harmed a public figure by misappropriating his likeness but were also used deliberately for fraudulent gain.
AI Failures in Corporate and Legal Settings
In addition to political and personal misuse, AI systems have also failed significantly in corporate and legal settings. One of the earliest major incidents in this category was Incident 768 (March 11, 2023), in which ChatGPT was implicated in a Samsung data leak involving sensitive source code and meeting notes. Earlier this summer, Incident 796 (June 1, 2024) involved Facebook's content moderation system mistakenly flagging emergency updates during wildfires as spam, preventing users from receiving critical information. Later, Incident 791 (September 9, 2024) revealed a Google AI error that prompted parents to use fecal matter in a child training exercise. When not properly safeguarded, AI systems can produce highly inappropriate and misleading recommendations because they cannot always ascertain context. Incident 785 (September 8, 2024) saw ESPN's AI coverage overlook Alex Morgan in her final match recap; in this case, AI-generated content failed to recognize and capture the genuinely significant moment. In the legal domain, Incident 781 (September 3, 2024) saw Clearview AI fined $33.7 million for violating the GDPR by harvesting biometric data. Corporate AI failures also surfaced in educational settings, as shown in Incident 793 (July 1, 2024), where a failed AI tool cost the Los Angeles Unified School District $6 million in taxpayer money.
AI-Driven Scams and Fraud
Scams and fraud using AI are, of course, an ongoing problem. Earlier this year, Incident 787 (March 1, 2024) involved the relaunch of OnlyFake, a site offering AI-generated fake IDs. Despite having been taken down earlier, OnlyFake reemerged with new fraud tools, including the ability to generate handwritten signatures, making its fake IDs even more challenging for biometric verification systems to detect. Researchers from Au10tix discovered that these AI-generated IDs were being used to facilitate various forms of fraud.
In addition to Incident 802 involving Brian May, Incident 795 (August 14, 2024) tracks deepfake videos misusing Elon Musk's image and voice that have reportedly contributed to billions of dollars in fraud. This incident is also an example of a long-running problem collectively made up of many singular incidents. The broader impact of AI-powered scams was further demonstrated in Incident 800 (September 3, 2024), in which over 50% of U.S. and U.K. businesses reportedly experienced attacks involving AI-powered deepfake scams in 2024. Cybercrime involving AI, and its increasing use to deceive both consumers and businesses, is a trend that will only worsen, and the means of tracking and documenting these incidents will have to continue evolving as well.
🗞️ New Incident IDs in the Database
- Incident 760: False Election Data on Kamala Harris Reportedly Circulated via Grok AI Chatbot (7/21/2024)
- Incident 761: TikTok AI System Used to Amplify Election Disinformation by Foreign Networks (8/8/2024)
- Incident 762: Grok AI Reportedly Generates Offensive and Violent Images Without Proper Safeguards (8/14/2024)
- Incident 763: Grok AI Chatbot Reportedly Spreads Unfounded Rumors About Trump’s Dentures (8/13/2024)
- Incident 764: Cody Enterprise Reporter Resigns After Admitting to AI-Generated Fake Quotes (6/26/2024)
- Incident 765: 22 Students at Richmond-Burton Community High School in Illinois Targeted by Deepfake Nudes (3/14/2024)
- Incident 766: Trump Shares AI-Generated Images Falsely Suggesting Taylor Swift Endorsement (8/18/2024)
- Incident 767: AI Image of Kamala Harris at DNC with Communist Flags Circulated by Trump (8/18/2024)
- Incident 768: ChatGPT Implicated in Samsung Data Leak of Source Code and Meeting Notes (3/11/2023)
- Incident 769: Investigative Journalist Rana Ayyub Targeted by AI-Generated Deepfake Pornography (4/20/2018)
- Incident 770: Microsoft Copilot Falsely Accuses Journalist Martin Bernklau of Crimes (8/16/2024)
- Incident 771: Noelle Martin Deepfaked Without Consent in AI-Generated Pornography (2/6/2020)
- Incident 772: Kristen Bell Deepfaked in Non-Consensual AI-Generated Pornography (6/8/2020)
- Incident 773: Chatbot in Workplace Training at Bunbury Prison Reveals Real Names in Sexual Harassment Case (8/20/2024)
- Incident 774: Covert AI Influence Operations Linked to Russia, China, Iran, and Israel, OpenAI Reveals (5/30/2024)
- Incident 775: Elon Musk Reportedly Shared an AI-Generated Image Depicting Kamala Harris Dressed as a Communist Ruler (9/2/2024)
- Incident 776: Megalopolis Trailer Included Fake AI-Generated Quotes Attributed to Film Critics (8/21/2024)
- Incident 777: South Korea Experiences a Surge of Explicit Deepfake Pornography (8/28/2024)
- Incident 778: Amazon's Alexa Reportedly Shows Political Preference Error in Trump-Harris Presidential Race Queries (9/4/2024)
- Incident 779: Music Producer Arrested for Allegedly Using AI-Generated Songs in $10 Million Streaming Scam (9/4/2024)
- Incident 780: Joint Base Elmendorf-Richardson Soldier Faces Allegations of Using AI to Generate Child Pornography (8/23/2024)
- Incident 781: Clearview AI Faces $33.7 Million Fine for Violating GDPR with Biometric Data Harvesting (9/3/2024)
- Incident 782: AI 'Nudify' Apps Used as Tools for Blackmail and Extortion (9/9/2024)
- Incident 783: WiseTech Global CEO Richard White Reportedly Deepfaked in Multiple Attempts to Scam Staffers (5/21/2024)
- Incident 784: Child Predators Are Reportedly Generating Deepfake Nudes of Children to Extort Them (4/23/2024)
- Incident 785: ESPN's AI Coverage Overlooks Alex Morgan in Her Final Match Recap (9/8/2024)
- Incident 786: Fraudsters Use Deepfake Video of Tim Cook to Promote Apple Crypto Scam on YouTube (9/7/2022)
- Incident 787: Deepfake ID Sales Persist as OnlyFake Relaunches with New Fraud Tools (3/1/2024)
- Incident 788: Instagram's Algorithm Reportedly Recommended Sexual Content to Teenagers' Accounts (6/20/2024)
- Incident 789: Independent News Sites Flagged as Spam by Facebook's AI Moderation System (6/12/2024)
- Incident 790: Unrestricted AI Avatar Tool Accidentally Released by TikTok Permits Recitation of Hitler Quotes and Other Harmful Speech (6/21/2024)
- Incident 791: Google AI Error Prompts Parents to Use Fecal Matter in Child Training Exercise (9/9/2024)
- Incident 792: Unauthorized Use of AI to Replicate Tupac Shakur's and Snoop Dogg's Voices in Drake's 'Taylor Made Freestyle' (4/19/2024)
- Incident 793: AllHere's Chatbot 'Ed' Fails and Costs Los Angeles Unified School District $6 Million (7/1/2024)
- Incident 794: Glitch in Waymo Self-Driving Cars Triggers Regular All-Night Honking in San Francisco (8/13/2024)
- Incident 795: Deepfake Elon Musk Videos Have Reportedly Contributed to Billions in Fraud (8/14/2024)
- Incident 796: Facebook's Content Moderation System Flagged and Removed Emergency Updates as Spam During Wildfires (6/1/2024)
- Incident 797: Teenager Makes Deepfake Pornography of 50 Girls at Bacchus Marsh Grammar School in Australia (6/7/2024)
- Incident 798: Australian Schools Grappling with Significant Spread of Non-Consensual Deepfake Pornography of Students (6/29/2024)
- Incident 799: Aledo High School Student Generates and Distributes Deepfake Nudes of Seven Female Classmates (6/18/2024)
- Incident 800: 53% of American and British Businesses Report Attacks by AI-Powered Deepfake Scams in 2024 (9/3/2024)
- Incident 801: Bias in AI Deepfake Detection Undermines Election Security in Global South (9/2/2024)
- Incident 802: AI Deepfake of Brian May Exploited in Scam Offering Fake Queen Backstage Tickets (9/13/2024)
- Incident 803: Facebook's Algorithm Reportedly Amplifies AI-Generated Content, Fueling Misleading Posts (5/14/2024)
- Incident 804: AI-Generated Fake 'True Crime' Video About Non-Existent Littleton Murder Goes Viral (7/30/2024)
- Incident 805: Senator Ben Cardin Targeted with Deepfake Zoom Video Call Posing as Former Ukrainian Foreign Minister Dmytro Kuleba (9/19/2024)
- Incident 806: Criminal Group Uses AI Deepfake Technology to Steal Personal Data in Hangzhou, Zhejiang (9/13/2024)
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in the Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.