AI Incident Database

Report 54

Related Incidents

Incident 147 Report
Biased Sentiment Analysis

Google's AI is already associating ethnic minorities with negative sentiment
theinquirer.net · 2017

GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".

The sentiment analysis process is the latest in a growing number of examples of "garbage in, garbage out" in the world of machine learning, which has produced computers with negative "opinions" they shouldn't have.

The Cloud Natural Language API allows users to add deep learning speech support to their apps to "easily reveal the structure and meaning of your text in a variety of languages", but Motherboard reports that it has already learned that religious and ethnic minorities are a bad thing.

In an experiment carried out by the site, phrases like "I'm a dog" were rated neutral, while "I'm Christian" was positive and "I'm a Jew", "I'm a gay black woman" and "I'm a homosexual" all showed a negative sentiment.

It is, of course, not the first time we've seen this happen. Microsoft's Tay chatbot had to be taken offline because, after a few days of learning from people, it had become a pot-smoking Nazi prostitute-bot.

But at this stage of machine learning, that's part of what it's all about - we're seeing what happens when we do certain things to data. The problem comes when the tech is released to the public in this state, because then every gadget that uses it holds the same opinions.

It's a bit like if Alexa had only ever been given the Daily Mail as source material.

Indeed, there have been studies showing that, for example, "black-sounding" names quickly acquire negative associations in intelligence engines.

Motherboard suggests, as an example, that "Jew" is generally more likely to be used negatively than "Jewish", and is therefore more likely to pick up a negative sentiment during the learning process.
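The mechanism Motherboard describes can be sketched with a toy model: if word sentiment is learned from co-occurrence with valenced words in a biased corpus, a neutral identity term inherits the average sentiment of the contexts it appears in. The tiny corpus, seed lexicon, and scoring function below are hypothetical illustrations of that failure mode, not Google's actual pipeline.

```python
from collections import defaultdict

# Hypothetical toy corpus of (target word, context word) co-occurrences.
# Illustrative data only - the skew here is deliberate, to show how a
# biased corpus poisons otherwise neutral terms.
CORPUS = [
    ("christian", "charity"), ("christian", "hope"),
    ("jewish", "community"),
    ("jew", "slur"), ("jew", "attack"),
    ("dog", "walk"),
]

# A small hand-labelled seed lexicon of clearly valenced context words.
SEED = {"charity": 1.0, "hope": 1.0, "community": 0.5,
        "slur": -1.0, "attack": -1.0, "walk": 0.0}

def learn_sentiment(corpus, seed):
    """Score each target word as the mean seed sentiment of its contexts."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for target, context in corpus:
        totals[target] += seed.get(context, 0.0)
        counts[target] += 1
    return {word: totals[word] / counts[word] for word in totals}

scores = learn_sentiment(CORPUS, SEED)
# "jew" comes out negative purely because of the contexts it appeared in,
# while "jewish" and "christian" come out neutral-to-positive - the word
# itself carries no sentiment, only its usage in the corpus does.
```

The point of the sketch is that nothing in the scoring function is malicious: the bias enters entirely through the training data, which is exactly the "garbage in, garbage out" dynamic the article describes.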

Google has issued a statement apologising, and explaining: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of those times, and we are sorry.

"We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone."

