AIID Blog

AI Incident Roundup – February ‘23

Posted 2023-03-20 by Janet Schwartz & Khoa Lam.

Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap on the latest incidents and reports of the AI Incident Database.

As with last month, there are many incidents related to Large Language Models (LLMs), whose incidents are annotated with the 💬 icon.

Estimated reading time: 7 minutes

🗞️ New Incidents

Emerging incidents that occurred last month:

💬 Incident #462: AI-Produced Livestream Sitcom Received Temporary Twitch Ban for Transphobic Segment

  • What happened? The AI-generated sitcom broadcast as the Twitch livestream "Nothing, Forever" received a temporary ban for featuring a transphobic and homophobic dialogue segment intended as comedy.

💬 Incident #467: Google's Bard Shared Factually Inaccurate Info in Promo Video

  • What happened? Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which satellite first took pictures of a planet outside the Earth's solar system, reportedly causing shares to temporarily plummet.

💬 Incident #468: ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics

  • What happened? Microsoft's ChatGPT-powered Bing search engine reportedly ran into factual accuracy problems when prompted about controversial matters, such as inventing the plot of a non-existent movie or creating conspiracy theories.

💬 Incident #470: Bing Chat Response Cited ChatGPT Disinformation Example

  • What happened? Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.

💬 Incident #473: Bing Chat's Initial Prompts Revealed by Early Testers Through Prompt Injection

  • What happened? Early testers of Bing Chat successfully used prompt injection to reveal its built-in initial instructions, which contains a list of statements governing ChatGPT's interaction with users.

💬 Incident #474: Users Reported Abrupt Behavior Changes of Their AI Replika Companions

  • What happened? Replika paid-subscription users reported unusual and sudden changes to behaviors of their "AI companions" such as forgetting memories with users or rejecting their sexual advances, which affected their connections and mental health.

💬 Incident #477: Bing Chat Tentatively Hallucinated in Extended Conversations with Users

  • What happened? Early testers reported Bing Chat, in extended conversations with users, having tendencies to make up facts and emulate emotions through an unintended persona.

Incident #479: Instagram Video Featured Deepfake Audio of US President Making Transphobic Remarks

  • What happened? A deepfaked audio of US President Joe Biden making transphobic remarks played on top of a video showing him giving a speech was released on Instagram and circulated on social media.

Incident #481: Deepfake TikTok Video Featured Joe Rogan Endorsing Supplement Brand

  • What happened? A deepfake video featuring podcast host Joe Rogan advertising to his listeners about a "libido-boosting" supplement was circulating on TikTok and other platforms before being removed by TikTok along with the account which posted it.

💬 Incident #482: ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students

  • What happened? Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy".

Incident #483: Indian Police Allegedly Tortured and Killed Innocent Man Following Facial Misidentification

  • What happened? A resident in Medak, India died allegedly due to custodial torture by the local police, who misidentified him as a suspect in a theft case using facial recognition.

Incident #485: UK Bank's Voice ID Successfully Bypassed Using AI-Produced Audio

  • What happened? A UK journalist was able to successfully bypass Lloyds Bank's "Voice ID" program to access his bank account using an AI-generated audio of his own voice.

🗄 Trending in the AIID

Making AI tools accessible to the general public can result in a proliferation of unintended uses of the tool. One of the latest examples is the increase in use of generative AI to mimic peoples’ voices without their consent. Incidents 479 and 481 describe this technology being used to create Deepfakes of Joe Rogan and president Joe Biden. Incident 488 describes how the technology is allegedly being used to dox voice actors on Twitter. Finally, a recent Issue Report describes a new chatbot “NoiseGPT” that can turn text into celebrity voices, pointing to more potential incidents in the future.

The proliferation of Deepfakes can create a challenge in tracking incidents as many related reports continue to be published. We have developed a system to manage this using an  incident “variant”. Assuming the trend of audio deepfakes continues, we will have the ability to index what is likely to become a very large number of deepfake incidents.

👇 Diving Deeper

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality