AI Incident Roundup – October and November 2024
Our review newsletter recapping new incidents in the AI Incident Database and examining the trends they reveal.
🗄 Trending in the AIID
Throughout October and November, we tracked a range of AI incidents that showcase both the growing complexity of these technologies and the scale of their misuse or malfunction. Some of these cases are recent, while others date back months or even years. Here are some of the incidents we have been studying.
The flaws in AI-powered tools often come to light in sensitive, high-stakes environments. Earlier this fall, a transcription tool reportedly fabricated content in medical records, raising alarms about its reliability in healthcare (Incident 827). Similarly, in child protection services, ChatGPT was alleged to have introduced errors into a court report, which led to significant fallout (Incident 807). In another case, Nevada’s school system faced setbacks when an AI-driven student risk model allocated resources away from the students who needed them most (Incident 808). These failures offer a representative snapshot of the wider, ongoing challenges, and potential harms, of integrating AI into decision-making processes that affect vulnerable populations.
The proliferation of deepfakes continues to challenge public trust, with incidents of AI-generated disinformation continuing to emerge across the globe. AI-generated media were reportedly used in a broader Russian disinformation effort to undermine the EU membership referendum and elections in Moldova (Incident 840), including a deepfake targeting Moldova's economic development minister (Incident 841). These incidents were linked to Kremlin-backed actors. In the U.S., the Russian disinformation campaign Storm-1516 was connected to a fabricated video falsely accusing vice-presidential candidate Tim Walz, using the identity of a former student to lend credibility to the claims (Incident 824). U.S. intelligence has since identified this and other incidents as part of a coordinated effort to disrupt the 2024 elections.

Beyond these geopolitical events, we have also been tracking the widespread use of AI to manipulate narratives on platforms like TikTok. Reports of AI-generated Nazi propaganda gaining millions of views (Incidents 809 and 810) are further examples of how social media platforms can amplify harmful content. And over the past several months, authorities in China have been cracking down on a wave of AI-driven scams and fraudulent content (Incident 834). We are also working to include more reports from non-English sources to deepen our understanding of how the deployment of AI systems worldwide may be causing harms that are often overlooked or underreported. The integration of AI into disinformation campaigns remains a key area of concern: the deliberate targeting of democratic processes and institutions undermines their integrity, weakens public trust, and leaves societies more vulnerable to manipulation and division.
While the harms of disinformation campaigns can be difficult to define and quantify, the malicious use of AI for personal ends causes harms that are far easier to identify. In Argentina, a student was accused of using AI to generate explicit images of classmates (Incident 848). Similarly, deepfake nudes targeting students at a Canadian school in December 2023 drew attention to the need for much stronger protections (Incident 812). In October, Character.AI was involved in two incidents in which deceased individuals’ likenesses were replicated without consent to create chatbots (Incidents 814 and 850). These incidents brought the platform back into the spotlight following the February 2024 case in which a Character.AI chatbot allegedly influenced a teenage user toward suicide (Incident 826).
Corporations and governments alike continue to grapple with the legal and ethical implications of AI. In California, an AI-powered news site erroneously implicated a district attorney in a murder case, yet another instance of AI-generated misinformation (Incident 825). Meanwhile, facial recognition systems continue to misfire, leading to wrongful arrests in multiple cases (Incidents 815 and 816). There remains a troubling gap between the promises of AI and the reality of its deployment.
Older incidents also shed light on longstanding issues with AI. For example, the 2019 wrongful imprisonment in New Jersey stemming from a facial recognition misidentification (Incident 816) is a reminder of how long these problems have persisted. Similarly, a 2022 lawsuit alleging housing discrimination by an AI screening tool (Incident 844) shows that biases baked into algorithms can have real-world consequences that echo for years.
Below you will find the full list of new additions.
🗞️ New Incident IDs in the Database
- Incident 807: ChatGPT Introduces Errors in Critical Child Protection Court Report (2024-09-05)
- Incident 808: Infinite Campus AI-Driven Student Risk Model Leads to Cuts in Support for Nevada's Low-Income Schools (2024-10-11)
- Incident 809: TikTok Hosts AI-Generated English-Language Hitler Speeches with Millions of Views (2024-04-07)
- Incident 810: TikTok Network Amplifies AI-Generated Nazi Propaganda and Hate Speech (2024-07-29)
- Incident 811: AI-Powered Transcription Services Allegedly Leak Confidential Workplace Discussions (2024-10-02)
- Incident 812: Deepfake Nudes Targeting Underage Female Students at Collège Béliveau in Winnipeg Shared Online (2023-12-11)
- Incident 813: Starship Technologies Delivery Robot Injures Arizona State University Employee (2024-09-19)
- Incident 814: AI Avatar of Murder Victim Created Without Consent on Character.AI Platform (2024-10-02)
- Incident 815: Police Use of Facial Recognition Software Causes Wrongful Arrests Without Defendant Knowledge (2024-10-06)
- Incident 816: Cross-Jurisdictional Facial Recognition Misidentification by NYPD Leads to Wrongful Arrest and Four-Year Jail Time in New Jersey (2019-11-29)
- Incident 817: AI-Generated Images Spread Misinformation During Hurricane Helene Response (2024-09-24)
- Incident 818: Jennifer Aniston’s Likeness Exploited in Deepfake Collagen Supplement Promotion (2024-09-28)
- Incident 819: ProKYC Tool Allegedly Facilitates Deepfake-Based Account Fraud on Cryptocurrency Exchanges (2024-10-09)
- Incident 820: Alleged AI-Generated Photo Alteration Leads to Inappropriate Modifications in Speaker's Conference Picture (2024-10-15)
- Incident 821: Baidu Robotaxi Allegedly Involved in Collision with Pedestrian in Wuhan (2024-07-07)
- Incident 822: Algorithmic Bias in French Welfare System Allegedly Discriminates Against Marginalized Groups (2024-10-15)
- Incident 823: Cybercheck Tool Allegedly Provides Questionable Evidence in Murder Trials (2024-05-03)
- Incident 824: Fake Video Allegedly Misattributes False Claims to Former Student in Attack on Vice-Presidential Candidate Tim Walz (2024-10-16)
- Incident 825: AI News Site Hoodline San Jose Erroneously Misidentifies San Mateo District Attorney as Murder Suspect (2024-10-08)
- Incident 826: Character.AI Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails (2024-02-28)
- Incident 827: AI Transcription Tool Whisper Reportedly Inserting Fabricated Content in Medical Transcripts (2024-10-26)
- Incident 828: Uruguayan TV Program Santo y Seña Uses a Deepfake of Political Candidate Yamandú Orsi Without His Consent (2024-10-13)
- Incident 829: Facial Recognition System in Buenos Aires Triggers Police Checks Based on False Matches (2024-02-05)
- Incident 830: Error-Prone AI Accessibility Tools Reportedly Lead to Navigation Issues for Blind Internet Users (2024-04-07)
- Incident 831: NYC Subway AI Weapons Scanners Yield High False Positive Rate and Detect No Guns in Month-Long Pilot Test (2024-10-23)
- Incident 832: Viral AI-Generated Song about "Diddy Party" Mimics Justin Bieber (2024-04-06)
- Incident 833: Polish Radio Station Replaces Human Hosts with AI-Generated Presenters to Simulate Interviewing Deceased Poet Wisława Szymborska (2024-10-21)
- Incident 834: China Targets AI-Driven Fraud and Deepfake Scandals with New Crackdowns (2024-07-04)
- Incident 835: AI Technology Allegedly Fuels False Reports of Natural Disasters and Accidents in China (2024-08-05)
- Incident 836: Sichuan Province Beset by Numerous Fabricated AI-Generated Reports of Disasters and Crises (2024-07-22)
- Incident 837: Fake CNN Broadcast Allegedly Used to Spread False Texas Election Results (2024-11-02)
- Incident 838: Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm (2024-04-25)
- Incident 839: AI-Driven Phishing Scam Uses Spoofed Google Call to Attempt Gmail Breach (2024-10-07)
- Incident 840: AI-Generated Media Reportedly Used in Russian Disinformation Campaign in Moldova (2024-09-18)
- Incident 841: Fake Video Allegedly Targets Moldovan Economic Development Minister Dumitru Alaiba in Election Disinformation Campaign (2024-10-01)
- Incident 842: Reportedly Hacked AI-Powered Robot Vacuums Allegedly Used for Surveillance and Harassment (2024-05-24)
- Incident 843: Generative AI Plagiarism Incident at Hingham High School Reportedly Tied to Inaccurate Citation Outputs from Grammarly AI (2024-11-20)
- Incident 844: SafeRent AI Screening Tool Allegedly Discriminated Against Housing Voucher Applicants (2022-05-25)
- Incident 845: Google's Gemini Allegedly Generates Threatening Response in Routine Query (2024-11-13)
- Incident 846: Social Media Algorithms Amplified Disinformation Campaign in Honduras Election (2021-10-06)
- Incident 847: Brake Failure in AI-Driven Tram Leads to Multiple Injuries in Saint Petersburg, Russia (2024-04-14)
- Incident 848: High School Student in Córdoba, Argentina Accused of Using AI to Generate Explicit Images of Classmates (2024-10-17)
- Incident 849: AI Detection Tools Allegedly Misidentify Neurodivergent and ESL Students' Work as AI-Generated in Academic Settings (2024-10-18)
- Incident 850: Character.AI Chatbots Allegedly Misrepresent George Floyd on User-Generated Platform (2024-10-24)
- Incident 851: Salt Lake City Police Chief Mike Brown's Voice and Image Misused in AI-Generated Scam (2024-10-25)
- Incident 852: Alleged Fake Citations Undermine Expert Testimony in Minnesota Deepfake Law Case (2024-11-01)
- Incident 853: Two Passengers Report Feeling Trapped in Waymo Car During Sensor Obstruction (2024-02-03)
- Incident 854: Waymo Driverless Taxi Allegedly Stalled During Pedestrian Harassment Incident in San Francisco (2024-09-30)
- Incident 855: Names Linked to Defamation Lawsuits Reportedly Spur Filtering Errors in ChatGPT's Name Recognition (2024-11-30)
👇 Diving Deeper
- Check out the Table View and List View for different ways to see and sort all incidents.
- Explore clusters of similar incidents in Spatial Visualization.
- Learn about alleged developers, deployers, and harmed parties on the Entities Page.
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook.
- Submit incidents to the database.
- Contribute to the database’s functionality.