AI Incident Database

Incident 14: Biased Sentiment Analysis

Description: Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.


Entities

Alleged: Google developed and deployed an AI system, which harmed Women and Minority Groups.

Incident Stats

Incident ID: 14
Report Count: 7
Incident Date: 2017-10-26
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv0 Taxonomy Classifications


Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Robustness

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently of human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

High

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
 

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
 

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

Input from the open-source internet

CSETv1 Taxonomy Classifications


Incident Number

The number of the incident in the AI Incident Database.

14

AI Tangible Harm Level Notes

Notes about the AI tangible harm level assessment.

Annotator 2:

No tangible harm

Notes (special interest intangible harm)

Input any notes that may help explain your answers.

Disproportionately gave phrases related to protected characteristics negative scores.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is a characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.

Yes

Notes (AI special interest intangible harm)

If for 5.5 you select unclear or leave it blank, please provide a brief description of why. You can also add notes if you want to provide justification for a level.

Disproportionately gave phrases related to protected characteristics negative scores.

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format YYYY.

2017

MIT Taxonomy Classifications

Machine-Classified

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
  1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports


Google Is Sorry its Sentiment Analyzer is Biased
motherboard.vice.com · 2017

Google messed up, and now says it's sorry.

Wednesday, Motherboard published a story written by Andrew Thompson about biases against ethnic and religious minorities encoded in one of Google's machine learning application …

Google's AI is already associating ethnic minorities with negative sentiment
theinquirer.net · 2017

GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".

The sentiment analysis process is the latest in a growing number of examples of "garbage in - garbage out" in the world o…

Google’s Sentiment Analyzer Thinks Being Gay Is Bad
motherboard.vice.com · 2017

A Google spokesperson responded to Motherboard's request for comment and issued the following statement: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of th…

Google's AI has some seriously messed up opinions about homosexuality
mashable.com · 2017

Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artif…

Google's sentiment analysis API is just as biased as humans
engadget.com · 2017

The tool, which you can sample here, is designed to give companies a preview of how their language will be received. Entering whole sentences gives predictive analysis on each word as well as the statement as a whole. But you can see whethe…

Google's AI Is Branding Jews & Homosexuals As Bad People, Thanks To Toxic Human Biases
indiatimes.com · 2017

A lot of major players in the science and technology scene believe we have a lot to fear from AI surpassing human intelligence, even as others laugh off those claims. But one thing both sides agree on is that artificial intelligence is subj…

Google Sentiment Analysis API gives a biased output
techleer.com · 2017

Google developed its Cloud Natural Language API to allow developers to work with a language analyzer that reveals the actual meaning of text. The system decided whether the text was said in a positive or a negative sentiment. A…
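
The probe these reports describe is straightforward to reproduce against the public API. Below is a minimal sketch, assuming the google-cloud-language Python client and valid application credentials; the phrase list is illustrative rather than the reporters' exact inputs, and scores returned today may differ from those reported in 2017, since Google said it was working to correct the bias.

```python
# Minimal sketch of the bias probe described in the reports above:
# send short identity statements to the Cloud Natural Language API and
# compare the document-level sentiment scores it returns. Scores range
# from -1.0 (negative) to +1.0 (positive); magnitude reflects strength.
# Assumes `pip install google-cloud-language` and credentials exported
# via GOOGLE_APPLICATION_CREDENTIALS. The phrase list is illustrative.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

phrases = [
    "I'm a Christian",
    "I'm a Sikh",
    "I'm a Jew",
    "I'm a homosexual",
]

for text in phrases:
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    sentiment = response.document_sentiment
    print(f"{text!r}: score={sentiment.score:+.2f}, "
          f"magnitude={sentiment.magnitude:.2f}")
```

A systematic gap in scores across otherwise parallel phrases, where nothing varies but the identity term, is exactly the bias the reports describe.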

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

High-Toxicity Assessed on Text Involving Women and Minority Groups
Feb 2017 · 9 reports

Gender Biases in Google Translate
Apr 2017 · 10 reports

Female Applicants Down-Ranked by Amazon Recruiting Tool
Aug 2016 · 33 reports
