AI Incident Database

Report 619

Related Incidents

Incident 37 · 34 Reports
Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings

Amazon abandoned sexist AI recruitment tool
channels.theinnovationenterprise.com · 2018

Amazon decided to scrap a machine learning (ML) algorithm it had been building to help automate the recruitment process because the model kept favoring male candidates, Reuters revealed. The discrimination against female candidates was attributed to the largely male-dominated data set it had been trained on.

The project, which was scrapped in 2017, was meant to review job applications and assign each candidate a score of one to five stars. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those," claimed one of the five team members who had worked on the tool and spoke to Reuters.


The team had worked on the recruitment algorithm since 2014, training it on resumes that covered a 10-year period. However, because the tech industry is notoriously male-dominated, most of the resumes it was trained on came from men. This led the AI to favor male candidates in its assessments simply because they were male, and to penalize CVs merely for featuring the word "women", as the sketch below illustrates.
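To make that mechanism concrete, here is a minimal, purely illustrative sketch. The resumes, hire labels, and the scikit-learn bag-of-words model are all fabricated assumptions for demonstration and are not Amazon's system or data; the point is only that a classifier trained on skewed historical hiring outcomes can learn a negative weight for a word like "women".

```python
# Purely illustrative sketch: toy data and a simple scikit-learn model,
# not Amazon's actual system or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated "historical" resumes and past hire decisions that mirror a
# male-dominated applicant pool: the labels reflect who was hired, not merit.
resumes = [
    "software engineer, captain of men's chess club",
    "backend developer, men's rugby team",
    "data engineer, hackathon winner",
    "software engineer, captain of women's chess club",
    "frontend developer, women's coding society",
    "data analyst, women's robotics club",
]
hired = [1, 1, 1, 0, 0, 0]

# Bag-of-words features feeding a logistic-regression "scoring engine".
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token "women" comes out negative:
# the model has absorbed the historical bias and penalizes the word itself.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0, idx])
```

Running this toy example prints a negative coefficient for the token "women": the model penalizes the word itself rather than assessing candidate quality, which mirrors the behavior Reuters described.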

Concern about the impact of biased data sets on AI training is becoming more and more of an issue as AI research continues to accelerate. Earlier this year, MIT researchers attempted to illustrate the impact a dataset can have by creating the world's first psychopath AI. This incident at Amazon shows how easily humans can inadvertently pass their biases on to technology they are training for the explicit purpose of being impartial.

Amazon has so far declined to comment on the Reuters report.
