AI Incident Database

Report 622

Related Incidents

Incident 37 · 34 Reports
Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings

Amazon Killed Its AI Recruitment System For Bias Against Women-Report
fortune.com · 2018

Machine learning, one of the core techniques in the field of artificial intelligence, involves teaching automated systems to devise new ways of doing things, by feeding them reams of data about the subject at hand. One of the big fears here is that biases in that data will simply be reinforced in the AI systems—and Amazon seems to have just provided an excellent example of that phenomenon.

According to a new Reuters report, Amazon spent years working on a system for automating the recruitment process. The idea was for this AI-powered system to be able to look at a collection of resumes and name the top candidates. To achieve this, Amazon fed the system a decade’s worth of resumes from people applying for jobs at Amazon.

The tech industry is famously male-dominated and, accordingly, most of those resumes came from men. So, trained on that selection of information, the recruitment system began to favor men over women.

According to Reuters’ sources, Amazon’s system taught itself to downgrade resumes with the word “women’s” in them, and to assign lower scores to graduates of two women-only colleges. Meanwhile, it decided that words such as “executed” and “captured,” which are apparently deployed more often in the resumes of male engineers, suggested the candidate should be ranked more highly.
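
To make the mechanism concrete, here is a minimal sketch (not Amazon's actual system) of how a resume classifier trained on skewed historical hiring outcomes can learn exactly this kind of bias. The toy resumes, labels, and use of scikit-learn are invented for illustration only; the point is that terms correlated with past hires pick up positive weight while a term like "women's" picks up negative weight.

```python
# Illustrative sketch only: a text classifier trained on biased historical
# outcomes learns to penalize gendered terms. Data below is synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: resumes mentioning "women's" were rarely hired.
resumes = [
    "executed distributed systems project, captured key metrics",   # hired
    "executed backend migration, led deployment",                   # hired
    "captured requirements, executed performance tuning",           # hired
    "women's chess club captain, built compiler project",           # not hired
    "women's engineering society lead, shipped mobile app",         # not hired
    "built data pipeline, women's coding group mentor",             # not hired
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()          # tokenizes "women's" as "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect learned coefficients: words that co-occurred with past hires get
# positive weight; "women" gets negative weight, baked in from the data.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("executed", "captured", "women"):
    print(term, round(weights[term], 3))
```

Running the sketch prints a positive coefficient for "executed" and "captured" and a negative one for "women", mirroring the behavior Reuters describes: no one programmed the rule, the model inferred it from who was hired in the past.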

The team tried to stop the system from taking such factors into account, but ultimately decided that it was impossible to stop it from finding new ways to discriminate against female candidates. There were apparently also issues with the underlying data that led the system to spit out rather random recommendations.

And so, Amazon reportedly killed the project at the start of 2017.

“This was never used by Amazon recruiters to evaluate candidates,” Amazon said in a statement.

Amazon isn’t the only company to be alert to the problem of algorithmic bias. Earlier this year, Facebook said it was testing a tool called Fairness Flow, for spotting racial, gender or age biases in machine-learning algorithms. And what was the first target for Facebook’s tests of the new tool? Its algorithm for matching job-seekers with companies advertising positions.

This article was updated to include Amazon’s statement.

Read the Source
