AI Incident Roundup – January ’23

Posted 2023-02-28 by Janet Schwartz & Khoa Lam.

Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.

This month we are diving deeper into ChatGPT and other Large Language Models (LLMs), whose incidents are annotated with the 💬 icon.

Estimated reading time: 6 minutes

🗞️ New Incidents

Emerging incidents that occurred last month:

Incident #446: ShotSpotter Failed to Alert Authorities of Mass Shooting in North Carolina

  • What happened? ShotSpotter did not detect gunshots or alert Durham police to a drive-by shooting in Durham, North Carolina, that sent five people to the hospital on New Year's Day.
  • About the AI: ShotSpotter uses sensor data to classify audio events as potential gunshots, triangulates the event's location, and alerts authorities.

💬 Incident #452: ChatGPT-Written Bug Reports Deemed "Nonsense" by White Hat Platform, Prompted Bans

  • What happened? ChatGPT-generated responses submitted to the smart contract bug bounty platform Immunefi reportedly lacked the details needed to diagnose technical issues, wasting the platform's time and prompting it to ban the submitters.

Incident #453: Twitter's AI Moderation Tool Misidentified Rockets as Pornography

  • What happened? Twitter's automated content moderation misidentified images of rocket launches as pornographic content, prompting incorrect account suspensions.

Incident #459: Firefighters Smashed Cruise AV's Front Window to Stop It from Running over Fire Hoses

  • What happened? Local firefighters had to shatter a Cruise AV's front window to stop it from driving over fire hoses that were in use at an active fire scene.

📎 New Developments

Older incidents with new reports or updates:


Incident #112: Police Departments Reported ShotSpotter as Unreliable and Wasteful

  • Portland to change course, accept competitive bids for gunshot detection pilot, 2023-01-09

💬 Incident #339: Open-Source Generative Models Abused by Students to Cheat on Assignments

  • ChatGPT AI tool used by students to generate assignments, academics warn, 2023-01-09

Incident #430: Lawyers Denied Entry to Performance Venue by Facial Recognition

  • Lawyers Barred by Madison Square Garden Found a Way Back In, 2023-01-16

Incident #434: Sudden Braking by Tesla Allegedly on Self-Driving Mode Caused Multi-Car Pileup in Tunnel

  • Exclusive: Surveillance Footage of Tesla Crash on SF’s Bay Bridge Hours After Elon Musk Announces “Self-Driving” Feature, 2023-01-10

Incident #436: Tesla Driver Put Car on Autopilot Before Falling Asleep in Germany

  • Police in Germany chase Tesla for 15 minutes after driver turns on autopilot and ‘goes to sleep’, 2023-01-02

  • Tesla on Autopilot Stopped by Police After the Driver Fell Asleep Behind the Wheel, 2023-01-01

Incident #440: Louisiana Police Wrongfully Arrested Black Man Using False Face Match

  • JPSO used facial recognition technology to arrest a man. The tech was wrong., 2023-01-02

💬 Incident #443: ChatGPT Abused to Develop Malicious Software

Incident #448: AI-Powered VTuber and Virtual Streamer Made Toxic Remarks on Twitch

  • AI-Controlled VTuber Streams Games On Twitch, Denies Holocaust, 2023-01-06

💬 Incident #449: Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support

  • Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It ‘Feels Weird’, 2023-01-10

💬 Incident #450: Kenyan Data Annotators Allegedly Exposed to Graphic Images and Text for OpenAI Contract

  • Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer, 2023-01-18

🗄 From the Archives

Every edition, we feature one or more historical incidents that we find thought-provoking.

In January, we received an influx of incidents involving OpenAI’s newly released ChatGPT. We compiled and analyzed the distinct ChatGPT incidents and derived categories of harm, including:

  • Academic dishonesty
  • Malware development
  • Jailbreaking with ease
  • Labeler content exposure
  • Fake citations
  • Quality assurance

A detailed analysis from one of our editors will be published shortly on our blog.

👇 Diving Deeper

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality