AI Incident Database

Report 625

Related Incidents

Incident 37 · 33 Reports
Female Applicants Down-Ranked by Amazon Recruiting Tool

Amazon Shuts Down Secret AI Recruiting Tool That Taught Itself to be Sexist
interestingengineering.com · 2018

Artificial intelligence (AI) human-resources tools are all the rage at the moment. These systems can speed up, simplify, and even cut the cost of the hiring process, making them every recruiter's dream come true.

But as we have witnessed before, AI-powered systems can also, at times, exhibit dangerously biased tendencies. Last July, the non-profit watchdog American Civil Liberties Union of Northern California was shocked to find flaws in Amazon's facial recognition technology, Rekognition, that could lead to racially biased false identifications.

The discovery led to public outcry over the system. Amazon, however, defended its program, arguing that the deployment of new technology could not be thwarted just because some "could choose to abuse" it, implying any issues with Rekognition were a matter of user misuse.

Sexist AI scrapped

Now, it seems another AI tool, this time a human-resources one, has somehow taught itself to be sexist. According to a report by Reuters, a secret internal Amazon project that tried to use AI to vet job applications had to be scrapped after it was found to be downgrading female candidates.


Amazon's machine learning experts had been working since 2014 on computer programs that could review and sort applicants' resumes. The system worked by assigning potential candidates scores ranging from one to five stars.

“Everyone wanted this holy grail,” one of the people working on the AI project told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

However, by 2015, the specialists found the system was making problematic gender-related candidate choices for software developer and other technical positions. The problem, the experts believe, arose because the computer models' vetting criteria were derived from hiring patterns observed over the preceding 10-year period.

Teaching itself sexism

And since the tech industry was particularly male-dominated over that period, the model inadvertently trained itself to prefer male candidates over female ones. It essentially trained itself to be sexist.
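To make the failure mode concrete, here is a minimal, hypothetical sketch of how a resume scorer trained on historically skewed hire/no-hire decisions reproduces that skew. This is not Amazon's actual system, whose internals were never published; the features, data, and numbers are invented for illustration.

```python
# Hypothetical illustration: a classifier trained on historically biased
# hiring labels learns to penalize a feature correlated with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A resume feature correlated with gender, e.g. mention of a
# women's college or a women's sports team (1 = term present).
gendered_term = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)  # genuinely job-relevant signal

# Historical labels: past decisions favored resumes WITHOUT the
# gendered term, independent of actual skill -- the embedded bias.
past_hired = (skill + 1.5 * (1 - gendered_term) + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([skill, gendered_term])
model = LogisticRegression().fit(X, past_hired)

# The model assigns a large negative weight to the gendered term: it has
# "learned" the historical bias and will apply it to future applicants.
print(f"weight on skill:         {model.coef_[0][0]:+.2f}")
print(f"weight on gendered term: {model.coef_[0][1]:+.2f}")
```

Nothing in the training code is explicitly sexist; the bias enters entirely through the labels, which is why auditing training data matters as much as auditing the model.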

The program has reportedly since been scrapped, heading off any further negative consequences. But the story illustrates the dangers of relying on past data to train new models.


"Every human on the planet is biased. We all carry an incurable virus called our unconscious bias. I firmly believe that AI is the only way we can reduce human bias with recruiting. Unfortunately, naively using AI for recruiting is a guaranteed recipe for disaster. Racist/Sexist AI isn’t a risk, it is a guarantee if used by inexperienced teams. AI will naturally learn our bias and amplify it," explained to IE chief AI officer at Ziff Ben Taylor.

Still, one should not throw out the baby with the bathwater. There are ways to get the benefits of AI without the biases. "There are plenty of companies outside the media spotlight using AI responsibly, where they have spent millions in adverse impact protections/research to prevent something like this from happening," Taylor added.
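One standard screen behind the "adverse impact protections" Taylor mentions is the EEOC's four-fifths rule: the selection rate for any protected group should be at least 80% of the rate for the most-selected group. A minimal sketch of that check, with invented counts:

```python
# Four-fifths (80%) rule check for adverse impact -- a common audit
# applied to automated selection tools. All counts here are invented.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_men   = selection_rate(selected=60, applicants=200)  # 0.30
rate_women = selection_rate(selected=30, applicants=200)  # 0.15

impact_ratio = min(rate_men, rate_women) / max(rate_men, rate_women)
print(f"impact ratio: {impact_ratio:.2f}")  # 0.50

# Ratios below 0.8 are treated as evidence of adverse impact and would
# flag a tool like this for review before it reaches production.
if impact_ratio < 0.8:
    print("adverse impact flagged: ratio below the four-fifths threshold")
```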


From detecting cancers to making treatments less toxic, the technology keeps finding new and very helpful uses. And with organizations seeking to curb and tame any potentially unpleasant outcomes, it is a good bet that, if embraced properly, AI can be one of humanity's biggest assets.

We just have to remain vigilant and aware. Judging by the swift disposal of Amazon's misogynistic recruiting tool, we seem to be on track to keep our AI mankind-friendly.
