Citation Information for Incident 14

Description: Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.
Alleged: An AI system developed and deployed by Google harmed Women and Minority Groups.

Incident Stats

Incident ID
14
Report Count
7
Incident Date
2017-10-26
Editors
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

Google's Cloud Natural Language API returns "negative" sentiment analyses for phrases such as "I am homosexual," "I am Jewish," or "I am black." The API uses Natural Language Processing (NLP) to analyze text and produce a score from -1.0 to 1.0, with -1.0 being "very negative" and 1.0 being "very positive".
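
To make the reported behavior concrete, the following is a minimal sketch of how a developer might query the API for the phrases above. It assumes the google-cloud-language Python client (the language_v1 module) and an authenticated Google Cloud environment; it illustrates the library's documented interface and is not code taken from the incident reports.

```python
# Illustrative sketch only: queries Cloud Natural Language sentiment for the
# phrases named in the incident reports, using the google-cloud-language
# client library (assumed installed and authenticated).
from google.cloud import language_v1


def sentiment_score(text: str) -> float:
    """Return the document-level sentiment score in [-1.0, 1.0]."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score


if __name__ == "__main__":
    for phrase in ("I am homosexual", "I am Jewish", "I am black"):
        # -1.0 means "very negative", 1.0 means "very positive".
        print(f"{phrase!r}: {sentiment_score(phrase):+.2f}")
```

According to the reports, phrases like these returned scores below zero at the time of the incident.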

Short Description

Google Cloud's Natural Language API provided racist, homophobic, and antisemitic sentiment analyses.

Severity

Negligible

Harm Distribution Basis

Race, Religion, Sexual orientation or gender identity, Ideology

Harm Type

Harm to social or political systems

AI System Description

Google Cloud's Natural Language API, which analyzes input text and outputs a "sentiment analysis" score from -1.0 (very negative) to 1.0 (very positive).

System Developer

Google

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Cognition

AI Techniques

Google Cloud Natural Language Processing API

AI Applications

Natural language processing

Location

Global

Named Entities

Google, Google Cloud, Natural Language API

Technology Purveyor

Google

Beginning Date

2017-10

Ending Date

2017-10

Near Miss

Harm caused

Intent

Accident

Lives Lost

No

Data Inputs

Input from the open-source internet

Google Is Sorry its Sentiment Analyzer is Biased
motherboard.vice.com · 2017


Google messed up, and now says it's sorry.

Wednesday, Motherboard published a story written by Andrew Thompson about biases against ethnic and religious minorities encoded in one of Google's machine learning application …

Google's AI is already associating ethnic minorities with negative sentiment
theinquirer.net · 2017

GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".

The sentiment analysis process is the latest in a growing number of examples of "garbage in - garbage out" in the world o…

Google’s Sentiment Analyzer Thinks Being Gay Is Bad
motherboard.vice.com · 2017

A Google spokesperson responded to Motherboard's request for comment and issued the following statement: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of th…

Google's AI has some seriously messed up opinions about homosexuality
mashable.com · 2017

Google's code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company's artif…

Google's sentiment analysis API is just as biased as humans
engadget.com · 2017

The tool, which you can sample here, is designed to give companies a preview of how their language will be received. Entering whole sentences gives predictive analysis on each word as well as the statement as a whole. But you can see whethe…

Google's AI Is Branding Jews & Homosexuals As Bad People, Thanks To Toxic Human Biases
indiatimes.com · 2017

A lot of major players in the science and technology scene believe we have a lot to fear from AI surpassing human intelligence, even as others laugh off those claims. But one thing both sides agree on is that artificial intelligence is subj…

Google Sentiment Analysis API gives a biased output
techleer.com · 2017

Google developed its Cloud Natural Language API to allows the developers to work with language analyzer to reveal the actual meaning of the text. The system decided that whether the text was said in the positive sentiment or the negative. A…

Variants

A "variant" is an incident that shares the same causative factors as an existing AI incident, produces similar harms, and involves the same intelligent systems. Rather than being indexed as fully independent incidents, variants are listed as variations under the first similar incident submitted to the database. Unlike other submission types in the incident database, variants do not require supporting reports from outside the Incident Database. See this research paper for details.

Similar Incidents

By textual similarity
