AI Incident Roundup – January '23
Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.
This month we are diving deeper into ChatGPT and other Large Language Models (LLMs), whose incidents are annotated with the 💬 icon.
Estimated reading time: 6 minutes
🗞️ New Incidents
Emerging incidents that occurred last month:
- What happened? ShotSpotter failed to detect the gunshots of a drive-by shooting in Durham, North Carolina on New Year's Day and did not alert Durham police; the shooting left five people hospitalized.
- About the AI: ShotSpotter uses sensor data to classify audio events as potential gunshots, triangulates the event's location, and alerts authorities.
- What happened? ChatGPT-generated responses submitted to the smart contract bug bounty platform Immunefi reportedly lacked the details needed to diagnose technical issues, wasting the platform's time and prompting it to ban the submitters.
- What happened? Twitter's automated content moderation misidentified images of rocket launches as pornographic content, prompting incorrect account suspensions.
- What happened? Local firefighters at an active fire scene were only able to stop a Cruise AV from driving over fire hoses that were in use by shattering its front window.
📎 New Developments
Older incidents that have new reports or updates.
🗄 From the Archives
Every edition, we feature one or more historical incidents that we find thought-provoking.
In January, we received an influx of incidents involving OpenAI’s newly released ChatGPT. We compiled and analyzed the distinct ChatGPT incidents and derived categories of harms including:
- Academic dishonesty
- Malware development
- Jailbreaking with ease
- Labeler content exposure
- Fake citations
- Quality assurance
A detailed analysis from one of our editors will be published shortly on our blog.
👇 Diving Deeper
All new incidents added to the database in the last month, grouped by topic:
- Generative Models (e.g. ChatGPT): #443, #449, #450, #452, #455, #457
- Social media: #447, #448, #453
- Impactful errors: #444, #445, #446
- Privacy & surveillance: #438, #441
- Facial recognition: #439, #440
- Robots & AI companions: #456, #458
- Consumer fairness: #435, #437
- Autonomous vehicles: #436, #459
- Bias & discrimination: #454
- AI generated images: #451
Explore clusters of similar incidents in the Spatial Visualization
Check out the Table View for a complete list of all incidents
Learn about alleged developers, deployers, and harmed parties on the Entities Page
🦾 Support our Efforts
Still reading? Help us change the world for the better!