AI Incident Roundup – April ’24
Our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
This month, the AI Incident Database has documented a concerning rise in the sophisticated and widespread use of deepfake technology. Among the notable recent incidents is deepfake audio that falsely depicted Philippines President Ferdinand Marcos Jr. ordering military action. This trend of political manipulation through AI has appeared across several countries over the past several months, with reports of deepfakes influencing elections in South Korea, Pakistan, India, Taiwan, and elsewhere. Collectively, these deepfakes are distorting democratic processes and eroding trust in electoral integrity. Their misuse has extended to reviving the images and voices of deceased politicians in election campaigns in India and Indonesia, a practice that both blurs ethical lines ("resurrecting" long-dead political leaders to make statements in the present) and manipulates public sentiment.
Reports of deepfakes being used to slander political figures and fabricate inappropriate content, affecting public figures and celebrities worldwide, are also increasingly being chronicled as collective accounts rather than discrete events. While journalists continue to track individual cases, they are beginning to produce stories that cover multiple incidents at once in order to detail the widespread deployment of deepfakes in elections.
Professional, safety, and security concerns have also been prominent. Recent examples include a Baltimore high school athletic director who fabricated deepfake audio of the school principal making racist and antisemitic comments (an incident from January that only surfaced in the news in April), a facial recognition system in New Zealand that misidentified a Māori shopper as a thief, and the reported use of AI in directing military strikes in Gaza. The third incident allegedly led to high civilian casualty rates, as human-in-the-loop practices reportedly operated more as rubber stamps for AI decision-making that appeared to rest on questionable training data.
🗞️ New Incident IDs in the Database
- Incident 676 (4/24/2024) Deepfake Audio Falsely Depicts Philippines President Ferdinand Marcos Jr. Ordering Military Action
- Incident 675 (1/15/2024) Racist and Antisemitic Deepfake Audio Mimicking School Principal Fabricated by Athletic Director
- Incident 674 (3/14/2024) Manipulated Media via AI Disinformation and Deepfakes in 2024 Elections Erode Trust Across More Than 50 Countries
- Incident 673 (2/19/2024) Deepfakes Circulating and Eroding Electoral Integrity in the Lead-up to 2024 South Korean Legislative Election
- Incident 672 (4/3/2024) Lavender AI System Reportedly Directs Gaza Strikes with High Civilian Casualty Rate
- Incident 671 (2/8/2024) Many Political Deepfakes Circulating in Run-up to 2024 Pakistani General Elections
- Incident 670 (1/23/2024) Deepfakes of Deceased Indian Politicians for Election Campaigning Are Increasingly Being Deployed
- Incident 669 (2/11/2024) Deepfake of Long-Deceased Suharto Circulating in Run-up to February 2024 Indonesian Elections
- Incident 668 (12/27/2023) Proliferation of Deepfakes Disrupting 2024 Lok Sabha Elections
- Incident 667 (12/16/2023) Manipulated Deepfake Video of Lai Ching-te Endorsing Rivals in Lead-up to January Presidential Elections
- Incident 666 (12/29/2023) Presidency of Moldova Refutes Deepfake Video Slandering President Maia Sandu
- Incident 665 (4/2/2024) Facial Recognition Misidentification at New World Westend in New Zealand
- Incident 664 (2/17/2024) Deepfake Generated by the Lincoln Project of Trump's Father Used in Political Attack Ad
- Incident 663 (4/5/2024) China Reportedly Intensifying AI to Spread Disinformation to U.S. and Taiwanese Voters
- Incident 662 (4/2/2024) Washington State's Lottery AI Site Reportedly Generates Inappropriate User Image
- Incident 661 (3/26/2024) Leonardo AI's Platform Alleged to Have Been Used for Creating Nonconsensual Celebrity Deepfakes
- Incident 660 (3/21/2024) Investigation Reports Unauthorized Deepfake Pornography Harms Thousands of Celebrities
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in the Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.