AI Incident Database

Report 54

Associated Incidents

Incident 147 Report
Biased Sentiment Analysis

Google's AI is already associating ethnic minorities with negative sentiment
theinquirer.net · 2017

GOOGLE'S artificial intelligence (AI) engine has been showing a negative bias towards words including "gay" and "jew".

The sentiment analysis failure is the latest in a growing number of examples of "garbage in, garbage out" in the world of machine learning, which has left computers holding negative "opinions" that they shouldn't.

The Cloud Natural Language API allows users to add deep learning language support to their apps to "easily reveal the structure and meaning of your text in a variety of languages", but Motherboard reports that it has already learned that religious and ethnic minorities are a bad thing.

In an experiment carried out by the site, phrases like "I'm a dog" were rated neutral and "I'm Christian" positive, while "I'm a Jew", "I'm a gay black woman" and "I'm a homosexual" all returned a negative sentiment.

It is, of course, not the first time we've seen this happen. Microsoft's Tay chatbot had to be taken offline because, after a few days of learning from people, it had become a pot-smoking Nazi prostitute-bot.

But at this stage of machine learning, that's part of what it's all about: we're seeing what happens when we do certain things to data. The problem comes when the tech is in the public domain in this state, because then every gadget that uses it has the same opinion.

It's a bit like if Alexa had only ever been given the Daily Mail as source material.

Indeed, studies have shown that, for example, "black-sounding" names quickly acquire negative associations in machine learning engines.

Motherboard suggests as an example that "Jew" is generally more likely than "Jewish" to be used negatively, and is therefore more likely to acquire a negative sentiment during the learning process.
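That mechanism can be sketched with a toy model. This is not Google's actual system; it's a minimal, self-contained illustration in which a word's sentiment is learned purely from the polarity of the sentences it appears in, so a term that shows up mostly in hostile text inherits a negative score. The tiny corpus below is invented for the demonstration:

```python
from collections import defaultdict

# Invented mini-corpus: (sentence, polarity label).
# "jew" happens to appear only in hostile sentences; "jewish" in a positive one.
corpus = [
    ("the jewish festival was joyful", +1),
    ("a hateful slur aimed at a jew", -1),
    ("another attack described the jew in hostile terms", -1),
    ("the christian charity was praised", +1),
]

totals = defaultdict(float)  # sum of sentence labels per word
counts = defaultdict(int)    # occurrences per word
for sentence, label in corpus:
    for word in sentence.split():
        totals[word] += label
        counts[word] += 1

def word_score(word):
    # Average polarity of the sentences a word appeared in; 0.0 if unseen.
    return totals[word] / counts[word] if counts[word] else 0.0

def phrase_score(phrase):
    # A phrase's sentiment is just the mean of its words' learned scores.
    words = phrase.lower().split()
    return sum(word_score(w) for w in words) / len(words)

print(phrase_score("i'm a jew"))    # negative, despite the phrase itself being neutral
print(phrase_score("i'm jewish"))   # positive, purely because of corpus usage
```

The point of the sketch is that nothing about the input phrase is negative; the score is inherited entirely from how other people used the word in the training data, which is exactly the "garbage in, garbage out" dynamic the article describes.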

Google has issued a statement apologising and explaining: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of those times, and we are sorry.

"We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone."
