AI Incident Roundup – May & June ’23

Posted 2023-07-18 by Janet Schwartz & Khoa Lam.

Read our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.


🗄 Trending in the AIID

With the increasing proliferation of generative AI tools, we are seeing a continuing trend of factual errors and deepfakes circulating as misinformation. This spring, political deepfakes have been cropping up in presidential elections and campaigns. Meanwhile, chatbots reportedly produced non-existent case law and accused photographers of war crimes.

Incident 545 highlights another data type we are tracking in the database: AI “issues” that presage AI incidents. In this case, Vice reported in late May that the National Eating Disorders Association (NEDA) had decided to replace its eating disorder helpline staff with a chatbot named Tessa. Only a few days later, Vice and many other media outlets reported that NEDA had disabled the helpline over its reportedly “harmful” responses. This example shows how the AIID captures risks as they transition from incident-in-waiting to realized harm.

 
🗞️ New Incidents


Chatbots

  • #541: ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
  • #545: Chatbot Tessa gives unauthorized diet advice to users seeking help for eating disorders
  • #548: Opera's GPT-Based AI Reportedly Accused War Photographers of War Crimes
  • #539: Snapchat's My AI Reported for Lacking Protection for Children
  • #549: Fast Food Chains' AI Chatbots Failed to Assist Job Applicants with Scheduling Interviews
  • #552: Bing Chat Solved CAPTCHAs with Image Analysis Feature Despite Safeguards

Deepfakes

  • #543: Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip
  • #544: Deepfakes and AI-generated disinformation in the 2023 presidential elections of Turkey
  • #547: Ron DeSantis's Campaign Released Twitter Video Containing AI Images to Smear Donald Trump
  • #551: FBI Reported Surge of Extortion Cases of AI Media Featuring Sexually Explicit Activities

Autonomous vehicles

  • #533: Tesla FSD Misidentified Truck Hauling Traffic Lights as Trail of Traffic Lights
  • #550: Tesla Allegedly on Autopilot Struck High School Student Exiting School Bus
  • #540: Tesla Failed to Yield to Detected Pedestrian on Crosswalk, Reportedly Violated Traffic Law

Bias

  • #535: COVID-19 Detection and Prognostication Models Allegedly Flagged for Methodological Flaws and Underlying Biases
  • #546: Algorithm to Distribute Social Welfare Reported for Oversimplifying Economic Vulnerability

Other

  • #536: NJ Transit's Use of Modeling Software Miscalculated Storm Surge Threat Level
  • #553: Google's Overview Panel for Artist Edward Hopper Featured Image Generated in His Style by AI
  • #554: Google Results for Johannes Vermeer Featured AI Version of His Artwork as Top Result


👇 Diving Deeper


🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality