AI Incident Database

Report 1846

Associated Incidents

Incident 267 · 10 Reports
Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent

ACLU rejects Clearview AI's facial recognition accuracy claims
engadget.com · 2020

Clearview AI's facial recognition isn't just raising privacy issues -- there are also concerns over its accuracy claims. The ACLU has rejected Clearview's assertion that its technology is "100% accurate" based on the civil liberties group's methodology for testing Amazon's Rekognition, telling BuzzFeed News that the findings are "absurd on many levels" and amounted to "manufacturing endorsements." According to the ACLU, there are key differences that make Clearview's sales pitch misleading.

For one, Clearview was searching nearly 3 billion public photos that included the politicians whose faces were part of the test, not arrest photos like the ACLU used. It didn't account for what would happen if someone wasn't in the database. Would it generate false positives, and would there be any bias in those false hits? On top of this, Clearview had the luxury of the clear, sometimes formal photos that often appear online. Its tool is supposed to be used in the real world, where lighting and image quality are frequently suboptimal -- it's not certain how well the facial recognition fares with grainy security camera footage.
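The scale concern above is easy to make concrete: even a very small per-comparison false-match rate multiplies into many false hits against a gallery of billions. The sketch below is illustrative only -- the false-match rate is a hypothetical assumption, not a measured Clearview figure; only the ~3 billion gallery size comes from the article.

```python
# Illustrative arithmetic: why gallery size matters for false positives.
# The false-match rate (fmr) here is a hypothetical assumption; the
# ~3 billion photo gallery size is the figure cited in the article.

def expected_false_matches(gallery_size: int, fmr: float) -> float:
    """Expected number of gallery entries that falsely match one probe face."""
    return gallery_size * fmr

gallery = 3_000_000_000   # ~3 billion scraped photos (per the article)
fmr = 1e-6                # assumed 1-in-a-million false-match rate

# For a probe face that is NOT in the database, we'd still expect
# thousands of spurious candidate matches at this scale.
print(expected_false_matches(gallery, fmr))  # 3000.0
```

This is why testing only with probes known to be in the database (as in the 834-politician demo) says little about real-world behavior, where many searched faces may have no true match at all.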

The Surveillance Technology Oversight Project's Liz O'Sullivan also doubted Clearview's insistence that the accuracy applied to "all demographic groups," noting that 834 politicians wouldn't be representative of every ethnicity. Moreover, many of the people in the independent study panel didn't have direct proficiency with facial recognition, although one was the former head of Samsung's North American AI research.

Not surprisingly, Clearview chief Hoan Ton-That maintained that the results were acceptable. He insisted that Clearview had used the same methods as the ACLU, and that there was a "higher level of difficulty" since it used faces of politicians from California and Texas. He also argued that the test had looked at "every demographic group." Ton-That didn't really address the ACLU's criticisms, though, and Clearview eventually responded to an ACLU complaint by removing the group's name from its site. The company's accuracy hasn't been re-checked, then, and that's concerning when police across the US are relying on the technology to pinpoint suspects.

