AI Incident Roundup – March ’24
Our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
This month, the AI Incident Database (AIID) continued its mission of documenting AI incidents. Among the incidents recorded, we observed a significant uptick in the misuse of deepfake technology, as seen in the Russian State TV broadcast that falsely implicated Ukraine in the attack on Moscow (Incident 656) and in the unauthorized creation of a deepfake of Kari Lake (Incident 658). We also encountered instances where AI’s influence extends beyond misinformation into personal safety and privacy. For example, the incident involving a Waymo robotaxi (Incident 647) highlights ongoing concerns about autonomous vehicle safety, while the report of Microsoft Copilot Designer generating inappropriate content (Incident 654) raises questions about the safeguards in AI content creation tools. The use of AI to create misleading imagery ahead of the U.S. primary elections (Incident 650) further heightens the need for vigilance about the technology’s potential to influence public opinion and the democratic process. See the list below for more incidents we’ve been tracking.
At AIID we’re also working to improve our site navigation. Be sure to check out the Table View and List View for versatile ways to view and sort all documented incidents. These features allow for a tailored exploration of the database, accommodating different user preferences and research needs. Our Spatial Visualization lets you explore clusters of similar incidents, providing a visual representation of the data that can reveal patterns and connections not immediately apparent through sorting and filtering. For a deeper dive into the background of these incidents, the Entities Page is an invaluable resource where you can learn about the reported developers, deployers, and parties harmed. Each of these tools is designed to make the wealth of information in the AIID more accessible and understandable, supporting our mission to foster informed discussion, research, and the prevention of AI incidents.
🗞️ New Incidents
Occurring in March
- Incident 656 (03/23/2024) Alleged Deepfake Disinformation Broadcast by Russian State TV Blames Ukraine for Moscow Attack
- Incident 658 (03/22/2024) The Arizona Agenda Produced a Deepfake of Kari Lake Advocating for the Publication Without Her Consent
- Incident 654 (03/06/2024) Microsoft Copilot Designer Reportedly Generated Inappropriate AI Images
- Incident 647 (03/06/2024) A Self-Driving Waymo Robotaxi Reportedly Collided with a Bicyclist
- Incident 650 (03/04/2024) AI-Generated Images of Trump with Black Voters Spread as Disinformation Before U.S. Primary Elections
Earlier Incidents, Newly Added
- Incident 655 (01/11/2024) Scams Reportedly Impersonating Wealthy Investors Proliferating on Facebook
- Incident 657 (01/03/2024) ChatGPT Account Compromise Leads to Unintended Data Exposure
- Incident 652 (12/06/2023) Two Florida Middle School Students Arrested Under New Law for Allegedly Having Made and Shared Deepfake Nudes of Their Classmates
- Incident 659 (10/07/2023) Mass Surveillance Facial Recognition Program Reportedly Targets Palestinians in Gaza
- Incident 653 (01/01/2019) Two Investment Firms Charged with Making False Claims of Artificial Intelligence Capabilities in Case of AI Washing
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in the Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.