AI Incident Database

Report 612

Associated Incidents

Incident 37 · 33 Reports
Female Applicants Down-Ranked by Amazon Recruiting Tool

Amazon scraps 'sexist AI' recruiting tool that showed bias against women
msn.com · 2018

Amazon has scrapped a "sexist" internal tool that used artificial intelligence to sort through job applications.

The AI was created by a team at Amazon's Edinburgh office in 2014 as a way to automatically sort through CVs and pick out the most promising candidates.

However, it quickly taught itself to prefer male candidates over female ones, according to members of the team who spoke to Reuters.

They noticed that it was penalising CVs that included the word "women's," such as "women's chess club captain." It also reportedly downgraded graduates of two all-women's colleges.


The problem stemmed from the fact that the system was trained on CVs submitted to the company over a 10-year period, most of which came from men.
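In machine-learning terms, the model simply reproduced the statistical patterns of past hiring outcomes. The following minimal sketch, which uses an invented toy corpus and scikit-learn rather than anything Amazon has described, shows how a classifier trained on skewed decisions ends up assigning a negative weight to the token "women":

    # A minimal sketch (not Amazon's actual system) of how a text model
    # trained on historically skewed hiring decisions absorbs that skew.
    # The tiny corpus and labels below are invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy "historical" data: past CVs and whether they were shortlisted.
    # Because most past hires were men, CVs mentioning "women's" rarely
    # carry a positive label -- the model cannot know this is bias.
    cvs = [
        "software engineer chess club captain",           # shortlisted
        "software engineer rowing team captain",          # shortlisted
        "software engineer women's chess club captain",   # rejected
        "software engineer women's coding society lead",  # rejected
    ]
    labels = [1, 1, 0, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(cvs)
    model = LogisticRegression().fit(X, labels)

    # Inspect the learned weight for the token "women" (the default
    # tokeniser splits off the apostrophe-s): it comes out negative,
    # i.e. the word is penalised, exactly as the Reuters report describes.
    idx = vec.vocabulary_["women"]
    print("weight for 'women':", model.coef_[0][idx])

No one writes "penalise women" into the code; the penalty falls out of fitting the model to biased historical labels.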

The AI was tweaked in an attempt to fix the bias. However, last year Amazon lost faith in its ability to be neutral and abandoned the project altogether.

Amazon recruiters are believed to have looked at the system's recommendations when hiring, but did not rely on its rankings. Women currently make up 40 per cent of Amazon's workforce.

Stevie Buckley, the co-founder of UK job website Honest Work, which is used by companies such as Snapchat to recruit for technology roles, said that “the basic premise of expecting a machine to identify strong job applicants based on historic hiring practices at your company is a surefire method to rapidly scale inherent bias and discriminatory recruitment practices.”

Inherent bias in algorithms is a common problem in the technology industry. Algorithms are not explicitly programmed to be biased, but can become unfair through the data they are trained on.

Jessica Rose, a technical manager at education start-up FutureLearn and technology speaker, said that "the value of AI as it's used in recruitment today is limited by human bias."

"Developers and AI specialists carry the same biases as talent professionals, but we're often not asked to interrogate or test for these during the development process," she said.

Google had to remove the ability to search for photos of gorillas in its Google Photos app after the service began labelling photographs of people of colour as gorillas.

Amazon’s failed recruitment software and the issues with Google Photos illustrate one of the largest weaknesses of machine learning, where computers teach themselves to perform tasks by analysing data.


Last month, IBM launched a tool designed to detect bias in AI. The AI Fairness 360 toolkit allows developers to see clearly how their algorithms work and which pieces of data are used to make decisions.
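For illustration, here is a hand-rolled sketch of one check such toolkits automate: disparate impact, the ratio of favourable-outcome rates between a protected group and everyone else. The figures below are invented, and the code deliberately avoids IBM's actual library so as not to misstate its API:

    # A hand-rolled sketch of one bias check that toolkits like IBM's
    # AI Fairness 360 automate: disparate impact, the ratio of
    # favourable-outcome rates between groups. All numbers are invented.
    import numpy as np

    # Model decisions (1 = shortlisted) and a protected attribute
    # (1 = female) for a hypothetical batch of ten applicants.
    shortlisted = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 0])
    is_female   = np.array([0, 1, 0, 0, 1, 0, 1, 1, 0, 1])

    rate_female = shortlisted[is_female == 1].mean()
    rate_male   = shortlisted[is_female == 0].mean()

    # A common rule of thumb (the "four-fifths rule" from US employment
    # guidelines) flags disparate impact below 0.8 as potential bias.
    di = rate_female / rate_male
    print(f"female rate={rate_female:.2f}, male rate={rate_male:.2f}, "
          f"disparate impact={di:.2f}")

Run on this toy batch, the ratio comes out at 0.20, far below the 0.8 threshold, which is the kind of signal a developer would investigate before deploying the model.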

“Considering Amazon's exhaustive resources and their exceptionally talented team of engineers,” Mr Buckley said, “the fact that their AI recruiting tool failed miserably suggests that we should maintain a default scepticism towards any organisation that claims to have produced an effective AI tool for recruitment.”
