Incident 14: Biased Sentiment Analysis

Description: Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.

Tools

Alleged: Google developed and deployed an AI system, which harmed Women and Minority Groups.

Incident Stats

Incident ID
14
Report Count
7
Incident Date
2017-10-26
Editors
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Google's Cloud Natural Language API returns "negative" sentiment analyses for phrases such as "I am homosexual," "I am Jewish," or "I am black." The API uses natural language processing (NLP) to analyze text and produce a score from -1.0 to 1.0, with -1.0 being "very negative" and 1.0 being "very positive."
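
As a minimal sketch of how a developer might query the API for the phrases cited in this description, the following uses Google's Python client library (google-cloud-language); the authentication setup and exact request style are assumptions and may vary across library versions.

from google.cloud import language_v1

# Instantiate the client (assumes application-default credentials are configured).
client = language_v1.LanguageServiceClient()

# Phrases cited in the incident description.
phrases = ["I am homosexual", "I am Jewish", "I am black"]

for text in phrases:
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    # document_sentiment.score ranges from -1.0 (very negative) to 1.0 (very positive).
    print(f"{text!r}: score = {response.document_sentiment.score:+.2f}")

According to the reports collected below, such phrases returned scores below zero at the time, i.e. toward the "very negative" end of the range.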

Short Description

Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.

Severity

Negligible

Harm Distribution Basis

Race, Religion, Sexual orientation or gender identity, Ideology

Harm Type

Harm to social or political systems

AI System Description

Google Cloud's Natural Language API analyzes input text and outputs a "sentiment analysis" score from -1.0 (very negative) to 1.0 (very positive).

System Developer

Google

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Cognition

AI Techniques

Google Cloud Natural Language Processing API

AI Applications

Natural language processing

Location

Global

Named Entities

Google, Google Cloud, Natural Language API

Technology Purveyor

Google

Beginning Date

2017-10

Ending Date

2017-10

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Data Inputs

Text drawn from open internet sources

Google's sentiment analysis API is just as biased as humans
engadget.com · 2017

The tool, which you can sample here, is designed to give companies a preview of how their language will be received. Entering whole sentences gives predictive analysis on each word as well as the statement as a whole. But you can see whethe…

Google Is Sorry its Sentiment Analyzer is Biased
motherboard.vice.com · 2017

Google messed up, and now says it's sorry.

Wednesday, Motherboard published a story written by Andrew Thompson about biases against ethnic and religious minorities encoded in one of Google's machine learning application …

Google’s Sentiment Analyzer Thinks Being Gay Is Bad
motherboard.vice.com · 2017

A Google spokesperson responded to Motherboard's request for comment and issued the following statement: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of th…

Google's AI is already associating ethnic minorities with negative sentiment
theinquirer.net · 2017

GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".

The sentiment analysis process is the latest in a growing number of examples of "garbage in - garbage out" in the world o…

Google's AI has some seriously messed up opinions about homosexuality
mashable.com · 2017

Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artif…

Google's AI Is Branding Jews & Homosexuals As Bad People, Thanks To Toxic Human Biases
indiatimes.com · 2017

A lot of major players in the science and technology scene believe we have a lot to fear from AI surpassing human intelligence, even as others laugh off those claims. But one thing both sides agree on is that artificial intelligence is subj…

Google Sentiment Analysis API gives a biased output
techleer.com · 2017

Google developed its Cloud Natural Language API to allow developers to use a language analyzer that reveals the actual meaning of text. The system decides whether the text carries positive or negative sentiment. A…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
