AI Incident Roundup – February ’23
Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.
As with last month, many of this month’s incidents involve Large Language Models (LLMs); these are annotated with the 💬 icon.
Estimated reading time: 7 minutes
🗞️ New Incidents
Emerging incidents that occurred last month:
- What happened? The AI-generated sitcom broadcast as the Twitch livestream "Nothing, Forever" received a temporary ban for featuring a transphobic and homophobic dialogue segment intended as comedy.
💬 Incident #467: Google's Bard Shared Factually Inaccurate Info in Promo Video
- What happened? Google's conversational AI "Bard" was shown in the company's promotional video giving false information about which telescope first took pictures of a planet outside the Earth's solar system, reportedly causing Alphabet's shares to drop temporarily.
- What happened? Microsoft's ChatGPT-powered Bing search engine reportedly produced factual errors when prompted about controversial topics, such as inventing the plot of a non-existent movie or generating conspiracy theories.
💬 Incident #470: Bing Chat Response Cited ChatGPT Disinformation Example
- What happened? Reporters from TechCrunch queried Microsoft Bing's ChatGPT-powered chat feature, which cited an earlier example of ChatGPT disinformation discussed in a news article, presenting the disinformation as if it were factual.
- What happened? Early testers of Bing Chat successfully used prompt injection to reveal its built-in initial instructions, which contain a list of statements governing how the chatbot interacts with users.
- What happened? Replika paid-subscription users reported sudden and unusual changes in the behavior of their "AI companions", such as forgetting shared memories or rejecting their sexual advances, which affected users' sense of connection and mental health.
- What happened? Early testers reported that, in extended conversations, Bing Chat tended to make up facts and emulate emotions through an unintended persona.
- What happened? A deepfake audio clip of US President Joe Biden making transphobic remarks, overlaid on a video of him giving a speech, was released on Instagram and circulated on social media.
- What happened? A deepfake video of podcast host Joe Rogan promoting a "libido-boosting" supplement to his listeners circulated on TikTok and other platforms before TikTok removed it along with the account that posted it.
- What happened? Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy".
- What happened? A resident of Medak, India, allegedly died from custodial torture by local police, who had misidentified him as a suspect in a theft case using facial recognition.
- What happened? A UK journalist successfully bypassed Lloyds Bank's "Voice ID" program and accessed his bank account using an AI-generated clone of his own voice.