AIID Blog

AI Incident Roundup – October ‘22

Posted 2022-11-14 by Janet Schwartz & Khoa Lam.

Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.

Estimated reading time: 5 minutes

🗞️ New Incidents

Emerging incidents that occurred last month:

Incident #377: Weibo Model Has Difficulty Detecting Shifts in Censored Speech

  • What happened? The content moderation model used by the Chinese social media site Weibo has difficulty keeping up with shifting user slang coined in defiance of Chinese state censors.
  • How is the AI not working? Although the site says it has refined its “keyword identification model” to filter intentionally misspelled words and homophones, the diversity and ever-evolving nature of online language in China make it unlikely that the model will be able to fully ban controversial language.
  • What was the impact of this incident? Chinese citizens were able to undermine Weibo’s censorship of banned language, allowing them to discuss controversial topics such as government corruption.
  • Who was involved? Weibo developed and deployed an AI system, which harmed Weibo and the Chinese government.
  • 🚨 Editor's Note: Identifying an alleged "harm" to a party does not imply that the broader community is responsible for mitigating or preventing that harm. Although this incident meets the criteria, database editors make no claims about whether it should be prevented from recurring. The AIID indexes all incidents meeting its incident definition; our responsibility is to make such incidents known (e.g., Incident #13, where language toxicity models were shown to be easily fooled).

Incident #383: Google Home Mini Speaker Reportedly Read N-Word in Song Title Aloud

  • What happened? The Google Home Mini speaker was reported by users to have read aloud the previously censored n-word in a song title. It is unclear when or how Google's speakers stopped censoring the word.
  • Who was involved? Google Home developed and deployed an AI system, which harmed Black Google Home Mini users and Google Home Mini users.

Incident #384: Glovo Driver in Italy Fired via Automated Email after Being Killed in Accident

  • What happened? Delivery company Glovo's automated system sent an email terminating an employee for "non-compliance" with its terms and conditions after the employee was killed in a car accident while making a delivery on Glovo's behalf.
  • Who was involved? Glovo developed and deployed an AI system, which harmed Sebastian Galassi and Sebastian Galassi's family.

Incident #385: Canadian Police's Release of Suspect's AI-Generated Facial Photo Reportedly Reinforced Racial Profiling

  • What happened? The Edmonton Police Service (EPS) in Canada released a facial image of a Black male suspect generated by an algorithm using DNA phenotyping, which was denounced by the local community as racial profiling.
  • How does the AI work? The AI system, called Snapshot, creates a composite facial sketch from physical appearance attributes predicted through DNA phenotyping, the process of inferring physical appearance and ancestry from unidentified DNA evidence.
  • How did this AI cause harm? DNA phenotyping composites are approximations of appearance, and it is not clear that Snapshot profiles match their subjects. In this case, because the AI-generated composite depicted a Black man, it raised concerns about racial profiling in a marginalized community.
  • Who was involved? Parabon NanoLabs developed an AI system deployed by the Edmonton Police Service, which harmed Black residents in Edmonton.

📎 New Developments

Older incidents that have new reports or updates:

Incident #376: RealPage's Algorithm Pushed Rent Prices High, Allegedly Artificially

  • How a Secret Rent Algorithm Pushes Rents Higher – ProPublica, October 15, 2022
  • Is an Algorithm Raising Your Rent? A New Class Action Lawsuit Says Yes – Gizmodo, October 21, 2022
  • RealPage’s YieldStar Software May Be Driving Up Rents – The Real Deal, October 17, 2022

Incident #373: Michigan's Unemployment Benefits Algorithm MiDAS Issued False Fraud Claims for Thousands of People

  • The Seven-Year Struggle to Hold an Out-of-Control Algorithm to Account – The Markup, October 8, 2022
  • Michigan will settle 2015 unemployment false fraud lawsuit for $20 million – Detroit Free Press, October 20, 2022

Incident #267: Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent

  • French regulator fines US face recognition firm Clearview AI €20 million – Le Monde, October 20, 2022

Incident #382: Instagram's Exposure of Harmful Content Contributed to Teenage Girl’s Suicide

  • British Ruling Pins Blame on Social Media for Teenager’s Suicide – The New York Times, October 1, 2022

🗄 From the Archives

Every edition, we feature one or more historical incidents that we find thought-provoking.

While in October we saw humans outwitting an automated system to avoid state censorship in China, other historical incidents recently added to the database highlight AI systems violating privacy in furtherance of state or commercial interests. An incident from 2019 echoes these concerns about state surveillance: the Ugandan government reportedly used facial recognition software to monitor political opposition. Meanwhile in the private sector, Uber allegedly violated drivers’ data privacy rights in order to monitor performance in 2020, and McDonald’s faced a lawsuit in 2021 for potentially violating Illinois privacy laws by collecting voice data through their drive-through chatbot.

Outside of the deliberate use of AI systems to collect and use private data, there are several previous examples of automated systems mistakenly collecting or sharing that data. In 2018, an Amazon Echo mistakenly sent a recorded private conversation between a husband and wife to one of the husband’s employees without their knowledge. GPT-2, the predecessor to GPT-3, was reportedly able to recite Personally Identifiable Information (PII) that it learned through training on massive amounts of data from the internet.

These occurrences from the last few years highlight recurring privacy concerns that are increasingly addressed by legal systems providing rights to privacy and data protection.

– by Janet Schwartz

👇 Diving Deeper


🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook
  2. Submit incidents to the database
  3. Contribute to the database’s functionality