AI Incident Database

Report 619

Associated Incidents

Incident 37 · 34 Reports
Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings

Amazon abandoned sexist AI recruitment tool
channels.theinnovationenterprise.com · 2018

Amazon decided to scrap a machine learning (ML) algorithm it was developing to help automate the recruitment process because the model kept favoring male candidates, Reuters revealed. The discrimination against female candidates was attributed to the largely male-dominated datasets the model had been trained on.

The project, which was scrapped in 2017, was meant to review job applications and assign each candidate a score of between one and five stars. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those," one of the five team members who had worked on the tool told Reuters.


The team had worked on the recruitment algorithm since 2014, training it on resumes submitted over a 10-year period. However, because the tech industry is notoriously male-dominated, most of the resumes it was trained on came from men. As a result, the AI began favoring male candidates in its assessments, penalizing CVs simply for featuring the word "women".
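The dynamic described above can be sketched in a few lines: a toy scoring model trained on historically skewed hiring data will assign a negative weight to any token that co-occurs mostly with rejected candidates, whether or not it is relevant to the job. The corpus, token names, and scoring function below are hypothetical illustrations for this article, not Amazon's actual system.

```python
import math
from collections import Counter

# Hypothetical toy corpus of (resume tokens, hired?) pairs, skewed the way
# the report describes: most historical hires were men, so a token like
# "women's" co-occurs mainly with rejections.
corpus = [
    (["software", "engineer", "chess", "club"], 1),
    (["software", "engineer", "football"], 1),
    (["backend", "engineer", "chess"], 1),
    (["software", "engineer", "women's", "chess", "club"], 0),
    (["engineer", "women's", "coding", "society"], 0),
    (["backend", "engineer", "football"], 1),
]

def token_scores(data, smoothing=1.0):
    """Naive per-token log-odds of being hired, with Laplace smoothing."""
    hired, rejected = Counter(), Counter()
    for tokens, label in data:
        (hired if label else rejected).update(set(tokens))
    vocab = set(hired) | set(rejected)
    return {
        t: math.log((hired[t] + smoothing) / (rejected[t] + smoothing))
        for t in vocab
    }

scores = token_scores(corpus)
# The model "learns" to penalize the token purely from the skewed data:
print(scores["women's"] < 0 < scores["football"])  # True
```

Nothing in the training objective mentions gender; the penalty emerges entirely from the imbalance in the historical labels, which is exactly why such bias is easy to introduce inadvertently.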

Concern about the impact of biased datasets on AI training is becoming more and more of an issue as AI research continues to accelerate. Earlier this year, MIT researchers attempted to illustrate the impact datasets can have by creating the world's first "psychopath" AI. The Amazon incident shows how easily developers can inadvertently pass their own biases on to technology they are building for the explicit purpose of being impartial.

Amazon has so far declined to comment on the Reuters report.

