AI Incident Database

Report 605

Related Incidents

Incident 37 · 34 Reports
Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings

Amazon's AI hiring tool discriminated against women.
slate.com · 2018

Amazon sign, with dude. David Ryder/Getty Images

Thanks to Amazon, the world has a nifty new cautionary tale about the perils of teaching computers to make human decisions.

According to a Reuters report published Wednesday, the tech giant decided last year to abandon an “experimental hiring tool” that used artificial intelligence to rate job candidates, in part because it discriminated against women. Recruiters reportedly looked at the recommendations the program spat out while searching for talent, “but never relied solely on those rankings.”

The misadventure began in 2014, when a group of Amazon engineers in Scotland set out to mechanize the company’s head-hunting process by creating a program that would scour the Internet for worthwhile job candidates (and presumably save Amazon’s HR staff some soul-crushing hours clicking around LinkedIn). “Everyone wanted this holy grail,” a source told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

It didn’t pan out that way. In 2015, the team realized that its creation was biased in favor of men when it came to hiring technical talent, like software developers. The problem was that they trained their machine learning algorithms to look for prospects by recognizing terms that had popped up on the resumes of past job applicants—and because of the tech world’s well-known gender imbalance, those past hopefuls tended to be men.

“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges,” Reuters reported. The program also decided that basic tech skills, like the ability to write code, which popped up on all sorts of resumes, weren’t all that important, but grew to like candidates who littered their resumes with macho verbs such as “executed” and “captured.”
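The mechanism Reuters describes is easy to reproduce in miniature. The sketch below (a hypothetical toy, not Amazon's actual system) trains a plain logistic regression on synthetic resumes whose historical "hired" labels were skewed against a gender-coded token; the model dutifully learns a negative weight for that token, reproducing the bias baked into its training data. All token names and label rules here are invented for illustration.

```python
# Toy illustration (NOT Amazon's system): a model trained on biased
# historical hiring labels learns to penalize a gender-coded token.
import math
import random

random.seed(0)

VOCAB = ["python", "java", "executed", "captured", "women's"]

def featurize(resume):
    """Binary bag-of-words over the toy vocabulary."""
    return [1.0 if tok in resume else 0.0 for tok in VOCAB]

# Synthetic history: past "hired" labels were biased, so resumes
# containing "women's" were far less likely to be marked as hires.
data = []
for _ in range(2000):
    resume = {tok for tok in VOCAB[:4] if random.random() < 0.5}
    if random.random() < 0.3:
        resume.add("women's")
    # Biased labeling rule: skills help, "women's" sharply hurts.
    score = sum(tok in resume for tok in VOCAB[:4]) - 3 * ("women's" in resume)
    hired = 1 if score + random.gauss(0, 1) > 1.5 else 0
    data.append((featurize(resume), hired))

# Plain logistic regression fit by stochastic gradient descent.
w = [0.0] * len(VOCAB)
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))
        g = p - y          # gradient of the log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

for tok, wi in zip(VOCAB, w):
    print(f"{tok:10s} weight = {wi:+.2f}")
# The learned weight for "women's" comes out strongly negative:
# the model has absorbed the bias present in its training labels.
```

Nothing in the code "decides" to discriminate; the negative weight falls straight out of minimizing prediction error on discriminatory labels, which is exactly why debiasing after the fact is so hard.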

After years of trying to fix the project, Amazon brass reportedly “lost hope” and shuttered the effort in 2017.

All of this is a remarkably clear-cut illustration of why many tech experts are worried that, rather than remove human biases from important decisions, artificial intelligence will simply automate them. An investigation by ProPublica, for instance, found that algorithms judges use in criminal sentencing may dole out harsher penalties to black defendants than white ones. Google Translate famously introduced gender biases into its translations. The issue is that these programs learn to spot patterns and make decisions by analyzing massive data sets, which themselves are often a reflection of social discrimination. Programmers can try to tweak the A.I. to avoid those undesirable results, but they may not think to, or be successful even if they try.

Amazon deserves some credit for realizing its tool had a problem, trying to fix it, and eventually moving on (assuming it didn’t have a serious impact on the company’s recruiting over the last few years). But, at a time when lots of companies are embracing artificial intelligence for things like hiring, what happened at Amazon really highlights that using such technology without unintended consequences is hard. And if a company like Amazon can’t pull it off without problems, it’s difficult to imagine that less sophisticated companies can.

