AI Incident Roundup – November ’22

Posted 2022-12-15 by Janet Schwartz & Khoa Lam.

Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.

Estimated reading time: 3 minutes

🗞️ New Incidents

Emerging incidents that occurred last month:

  • Incident #399: Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content

    • What happened? Meta AI trained and hosted a scientific paper generator that sometimes produced inaccurate science and blocked queries on topics and groups likely to yield offensive or harmful content.
    • Who was involved? Meta AI, Meta, and Facebook developed and deployed an AI system, which harmed minority groups as well as Meta AI, Meta, and Facebook themselves.

  • Incident #410: KFC Sent Insensitive Kristallnacht Promotion via Holiday Detection System

    • What happened? KFC blamed an error in an automated holiday detection system, which identified the anniversary of Kristallnacht and prompted an insensitive push notification promoting its chicken.
    • Who was involved? KFC developed and deployed an AI system, which harmed Jewish people.
  • Incident #411: Chinese Accounts Allegedly Spammed Twitter Feed to Obscure News of Protests

    • What happened? Twitter’s feed algorithm was flooded by content from Chinese-language accounts which allegedly aimed to manipulate and reduce social media coverage about widespread protests against coronavirus restrictions in China.
    • Who was involved? Twitter developed and deployed an AI system, which harmed Twitter users and Twitter.

  • Incident #413: Thousands of Incorrect ChatGPT-Produced Answers Posted on Stack Overflow

    • What happened? Thousands of incorrect answers produced by OpenAI's ChatGPT were submitted to Stack Overflow, which swamped the site's volunteer-based quality curation process and harmed users looking for correct answers.
    • Who was involved? OpenAI developed and deployed an AI system, which harmed Stack Overflow users and Stack Overflow.

📎 New Developments

Older incidents that have new reports or updates.

<table> <tr> <th align="center">Original incident</th> <th align="center">New report(s)</th> </tr> <tr> <td className="align-top border-1 border-gray-200 px-4 py-2"> <strong>Incident #240</strong>: <a href="/cite/240">GitHub Copilot, Copyright Infringement and Open Source Licensing</a> </td> <td className="align-top border-1 border-gray-200 px-4 py-2"> <ul> <li>GitHub Copilot litigation – <a href=""></a>, <em>Nov 3, 2022</em></li> </ul> </td> </tr> <tr> <td className="align-top border-1 border-gray-200 px-4 py-2"> <strong>Incident #376</strong>: <a href="/cite/376">RealPage's Algorithm Pushed Rent Prices High, Allegedly Artificially</a> </td> <td className="align-top border-1 border-gray-200 px-4 py-2"> <ul> <li>The DOJ Has Opened an Investigation Into RealPage – <a href="">ProPublica</a>, <em>Nov 23, 2022</em> </li> </ul> </td> </tr> </table>

🗄 From the Archives

In every edition, we feature one or more historical incidents that we find thought-provoking.

Given the recent wave of news coverage and social media discourse about OpenAI’s ChatGPT, let’s take a look back at some earlier incidents involving chatbots. Here are just a few:

  • Microsoft’s Tay was released on March 23, 2016, and removed within 24 hours after the bot generated numerous racist, sexist, and antisemitic tweets.

  • Yandex’s Alice, a chatbot released in October 2017 by the Russian technology company, began replying to questions with racist, pro-Stalin, and pro-violence responses.

  • The Korean chatbot Luda was shown to have used derogatory and bigoted language when asked about lesbians, Black people, and people with disabilities.

  • Meta’s BlenderBot 3 chatbot demo made offensive antisemitic comments.

Although generative AI technology has become vastly more capable and widely popular in only a few years, issues of bias, discrimination, and ethical use have remained persistent problems.

👇 Diving Deeper

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality