AI Incident Database

Incident 59: Gender Biases in Google Translate

Description: A 2016 Cornell University study highlighted Google Translate's pattern of assigning gender to occupations in a way that shows an implicit gender bias against women.


Entities

Alleged: Google developed and deployed an AI system, which harmed Women.

Incident Status

Incident ID: 59
Report Count: 10
Incident Date: 2017-04-13
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e., the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e., the system operated unsafely because of features or changes in its environment or in the inputs the system received; "Assurance," i.e., the system could not be adequately monitored or controlled during operation.

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Medium

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
 

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
 

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

User-entered translation requests

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

59

AI Tangible Harm Level Notes

Notes about the AI tangible harm level assessment.

Although AI was implicated in the adverse outcome, this incident involved no tangible harm.

Notes (special interest intangible harm)

Input any notes that may help explain your answers.

The study found biases related to gender and age in Google Translate. Additional biases have been found in Natural Language Processing in general.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is not assessing whether an intangible harm occurred in general; it asks only whether a special interest intangible harm occurred.

Yes

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Report Timeline

Semantics derived automatically from language corpora contain human-like biases
arxiv.org · 2016

Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunat…
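
The method behind this paper is the Word Embedding Association Test (WEAT), which scores how differently two sets of target words (e.g., occupations) associate, by cosine similarity, with two sets of attribute words (e.g., gendered terms). Below is a minimal Python/NumPy sketch of the effect-size computation; the random vectors and short word lists are illustrative placeholders, not the paper's pretrained embeddings or stimuli.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two word vectors.
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def assoc(w, A, B, vec):
        # s(w, A, B): mean cosine with attribute set A minus mean with set B.
        return (np.mean([cosine(vec[w], vec[a]) for a in A])
                - np.mean([cosine(vec[w], vec[b]) for b in B]))

    def weat_effect_size(X, Y, A, B, vec):
        # Difference of the mean target-set associations, normalized by the
        # pooled standard deviation over all target words.
        x_assoc = [assoc(x, A, B, vec) for x in X]
        y_assoc = [assoc(y, A, B, vec) for y in Y]
        return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

    # Placeholder embeddings; the paper uses pretrained GloVe/word2vec vectors
    # and curated stimulus lists.
    rng = np.random.default_rng(0)
    vocab = ["engineer", "programmer", "nurse", "teacher", "man", "he", "woman", "she"]
    vec = {w: rng.standard_normal(50) for w in vocab}
    X, Y = ["engineer", "programmer"], ["nurse", "teacher"]  # target occupations
    A, B = ["man", "he"], ["woman", "she"]                   # gendered attributes
    print(weat_effect_size(X, Y, A, B, vec))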

Even artificial intelligence can acquire biases against race and gender
sciencemag.org · 2017

One of the great promises of artificial intelligence (AI) is a world free of petty human biases. Hiring by algorithm would give men and women an equal chance at work, t…

AI programs exhibit racial and gender biases, research reveals
theguardian.com · 2017

Machine learning algorithms are picking up deeply ingrained race and gender prejudices concealed within the patterns of language use, scientists say

An artificial intelligence tool that has revolutionised the ability of computers to interpr…

Biased bots: Artificial-intelligence systems echo human prejudices
princeton.edu · 2017

In debates over the future of artificial intelligence, many experts think of these machine-based systems as coldly logical and objectively rational. But in a new study, Princeton University-based researchers have demonstrated how machines c…

Google Translate's gender bias pairs "he" with "hardworking" and "she" with lazy, and other examples
qz.com · 2017

In the Turkish language, there is one pronoun, “o,” that covers every kind of singular third person. Whether it’s a he, a she, or an it, it’s an “o.” That’s not the case in English. So when Google Translate goes from Turkish to English, it …
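
The mechanism is easy to reproduce in miniature: a purely data-driven translator that resolves the genderless "o" by choosing whichever English pronoun co-occurred most often with the occupation in its training data will reproduce corpus imbalances verbatim. The sketch below is a toy illustration only, not Google's actual system, and its co-occurrence counts are invented for the example.

    # Toy maximum-likelihood pronoun resolution; the counts are invented,
    # and real neural translation systems are far more complex than this.
    corpus_counts = {
        "doctor":   {"he": 900, "she": 100},
        "nurse":    {"he": 80,  "she": 920},
        "engineer": {"he": 850, "she": 150},
    }

    def resolve_turkish_o(occupation):
        # Choose the pronoun seen most often with this occupation.
        counts = corpus_counts[occupation]
        pronoun = max(counts, key=counts.get)
        return f"{pronoun} is a {occupation}"

    for job in corpus_counts:
        # Occupations kept in English for simplicity ("o bir ..." = "he/she is a ...").
        print(f"o bir {job} -> {resolve_turkish_o(job)}")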

Google Translate might have a gender problem
mashable.com · 2017

So much of our life is determined by algorithms. From what you see on your Facebook News Feed, to the books and knickknacks recommended to you by Amazon, to the disturbing videos YouTube shows to your children, our attention is systematical…

The Algorithm That Helped Google Translate Become Sexist
forbes.com · 2018

Parents know one particular challenge of raising kids all too well: teaching them to do what we say, not what we do.

A similar challenge has hit artificial intelligence.

As more apps and software use AI to automate tasks, …

Assessing Gender Bias in Machine Translation -- A Case Study with Google Translate
researchgate.net · 2018

Recently there has been a growing concern about machine bias, where trained statistical models grow to reflect controversial societal asymmetries, such as gender or racial bias. A significant number of AI tools have recently been suggested …

Google Translate now gives feminine and masculine translations
venturebeat.com · 2018

Google is making an effort to reduce perceived gender bias in Google Translate, it announced today. Starting this week, users who translate words and phrases in supported languages will get both feminine and masculine translations; “o bir d…

Female historians and male nurses do not exist, Google Translate tells its European users
algorithmwatch.org · 2020

An experiment shows that Google Translate systematically changes the gender of translations when they do not fit with stereotypes. It is all because of English, Google says.

If you were to read a story about male and female historians trans…
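
The round-trip behavior described here can be probed with a short script: push explicitly gendered English sentences through a language whose third-person pronoun is genderless (Hungarian "ő"), translate back, and flag pronoun flips. The sketch below is an assumed setup for illustration using the Google Cloud Translation v2 Python client, not AlgorithmWatch's actual harness, and it requires valid Cloud credentials in the environment.

    # Illustrative round-trip probe; requires GOOGLE_APPLICATION_CREDENTIALS
    # to point at valid Google Cloud credentials.
    from google.cloud import translate_v2 as translate

    client = translate.Client()
    sentences = ["She is a historian.", "He is a nurse."]

    for sentence in sentences:
        # English -> Hungarian: the gendered pronoun collapses to genderless "ő".
        pivot = client.translate(sentence, source_language="en",
                                 target_language="hu")["translatedText"]
        # Hungarian -> English: the system must pick a gender again.
        back = client.translate(pivot, source_language="hu",
                                target_language="en")["translatedText"]
        # Crude check: compare the leading pronoun before and after the round trip.
        flipped = back.split()[0].lower() != sentence.split()[0].lower()
        print(f"{sentence!r} -> {pivot!r} -> {back!r}"
              + ("  [pronoun flipped]" if flipped else ""))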

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

High-Toxicity Assessed on Text Involving Women and Minority Groups

Google’s comment-ranking system will be a hit with the alt-right

Feb 2017 · 9 reports

AI Beauty Judge Did Not Like Dark Skin

A beauty contest was judged by AI and the robots didn't like dark skin

Sep 2016 · 10 reports

Biased Sentiment Analysis

Google's sentiment analysis API is just as biased as humans

Oct 2017 · 7 reports

