AI Incident Database

Report 2133

Associated Incidents

Incident 954: Job Screening Service Halts Facial Analysis of Applicants

Complaint and Request for Investigation, Injunction, and Other Relief
context-cdn.washingtonpost.com · 2019

I. Summary

This complaint concerns a company that purports to evaluate a job applicant’s qualifications based upon their appearance by means of an opaque, proprietary algorithm. HireVue, a firm located in Utah, provides these “assessments” to companies seeking to fill job openings, which in turn make hiring determinations that impact the employment opportunities of specific individuals. The company denies that it is engaged in facial recognition and has failed to show that its technique meets the minimal standards for AI-based decision-making set out in the OECD AI Principles or the recommended standards set out in the Universal Guidelines for AI. The company has engaged in unfair and/or deceptive trade practices, in violation of Section 5 of the FTC Act. For the reasons set out below, the Commission should open an investigation, issue an injunction, and provide such other relief as EPIC has proposed.

VI. HireVue’s Violations of the FTC Act

A. HireVue’s Deceptive Use of Facial Recognition Technology

As described above, HireVue collects facial data in its video interviews of job candidates.

According to the FTC, the term “facial recognition technology” includes “technologies that merely detect basic human facial geometry; technologies that analyze facial geometry to predict demographic characteristics, expression, or emotions; and technologies that measure unique facial biometrics.”

HireVue therefore uses “facial recognition technology” in its video-based assessments of job candidates, as defined by the FTC.

HireVue represents to job candidates that it “does not use facial recognition technology or track facial features for identity recognition purposes.”

HireVue “lacks a ‘reasonable basis’” to support this claim.

HireVue is therefore engaged in a deceptive trade practice in violation of the Federal Trade Commission Act, 15 U.S.C. § 45(a)(1).

The FTC has found deceptive uses of facial recognition to be a violation of the FTC Act.

In April 2018, EPIC and a coalition of consumer organizations filed a complaint highlighting Facebook’s practice of “routinely scan[ning] photos for biometric facial matches without the consent of the image subject”—an unfair and deceptive trade practice and a violation of the Commission’s 2011 Facebook consent order.

In July 2019, the Commission determined that Facebook’s use of facial recognition technology was “[d]eceptive” and “misrepresent[ed] ‘the extent to which a consumer can control the privacy’” of their facial data in violation of the 2011 consent order.

B. HireVue’s Unfair Use of Facial Recognition, Biometric Data, and AI Systems

As described above, HireVue uses facial recognition technology, biometric data, and secret algorithms to purportedly assess the “cognitive ability,” “psychological traits,” “emotional intelligence,” and “social aptitudes” of job candidates.

HireVue’s use of secret algorithms to analyze job candidates’ biometric data violates widely adopted ethical standards for the use of artificial intelligence (“AI”) and is “unfair” within the meaning of the FTC Act.

i. HireVue’s Algorithmic Assessments Violate the OECD Principles on AI

HireVue’s algorithmic assessments of job candidates are not transparent.

HireVue’s algorithmic assessments of job candidates cannot be evaluated or understood by the candidates.

HireVue’s algorithmic assessments of job candidates cannot be meaningfully challenged.

HireVue cannot be held accountable for the proper functioning of its secret algorithmic assessments.

HireVue has therefore violated the OECD Principles on Artificial Intelligence.

ii. HireVue’s Algorithmic Assessments Violate the Universal Guidelines for AI

HireVue does not provide job candidates with access to the training data, factors, logic, or techniques used to generate each algorithmic assessment.

HireVue has not adequately evaluated whether the purpose, objectives, and benefits of its algorithmic assessments outweigh the risks.

HireVue has not ensured the accuracy of its algorithmic assessments.

HireVue has not ensured the reliability of its algorithmic assessments.

HireVue has not ensured the validity of its algorithmic assessments.

HireVue has not established that the assessments are free of unfair bias and impermissible discrimination.

HireVue has therefore violated the Universal Guidelines for Artificial Intelligence.

iii. HireVue’s Algorithmic Assessments Are ‘Unfair’ Under the FTC Act

HireVue’s use of biometric data and secret algorithms is “unfair” because it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”

HireVue’s use of biometric data and secret algorithms causes or is likely to cause substantial injury to a large class of people—namely, job candidates seeking to contract with one of the 700 companies that rely on HireVue’s assessments.

HireVue claims to collect “tens of thousands” of biometric data points through its assessments, including but not limited to a job candidate’s “intonation,” “inflection,” and “emotions.”

HireVue inputs these personal data points into secret “predictive algorithms” that allegedly determine each job candidate’s “employability.” Companies then rely on HireVue’s assessments to determine whether to contract for the services of each job candidate.

Because these algorithms are secret—even to HireVue itself, in some cases—it is impossible for job candidates to know how their personal data is being used or to consent to such uses.

HireVue’s intrusive collection and secret analysis of biometric data thus causes substantial privacy harms to job candidates.

HireVue’s intrusive collection and secret analysis of biometric data also causes substantial financial harms to job candidates. Many job candidates are denied opportunities to contract with companies based on HireVue’s algorithmic assessments, and many of those same candidates are forced to expend significant resources to identify alternate contracting opportunities.

The injuries caused by HireVue’s use of biometric data and secret algorithms cannot be reasonably avoided. HireVue’s video-based and game-based assessments are used by 700 companies, and job candidates are not given an opportunity to opt out of or meaningfully challenge HireVue’s assessments.

The harms caused by HireVue’s use of biometric data and secret algorithms are not outweighed by countervailing benefits to consumers or to competition. HireVue has failed to demonstrate any legitimate purpose for the collection of job candidates’ biometric data or for the use of secret, unproven algorithms to assess the “cognitive ability,” “psychological traits,” “emotional intelligence,” and “social aptitudes” of job candidates.

Other methods that accomplish the goal of evaluating job candidates are readily available and have long been in use.

HireVue is therefore engaged in an unfair trade practice in violation of the Federal Trade Commission Act, 15 U.S.C. § 45(a)(1).
