Incident 267: Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent

Description: Clearview AI's face-matching algorithm was built using images scraped from social media sites such as Instagram and Facebook without user consent, violating those sites' policies and, allegedly, privacy regulations.

Alleged: Clearview AI developed and deployed an AI system, which harmed social media users, Instagram users, and Facebook users.

Incident Stats

Incident ID
267
Report Count
10
Incident Date
2017-06-15
Editors
Khoa Lam
The Secretive Company That Might End Privacy as We Know It
nytimes.com · 2020

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

Then Mr. Ton-That — an Australian techie and onetime model — did someth…

Scraping the Web Is a Powerful Tool. Clearview AI Abused It
wired.com · 2020

The internet was designed to make information free and easy for anyone to access. But as the amount of personal information online has grown, so too have the risks. Last weekend, a nightmare scenario for many privacy advocates arrived. The …

ACLU Called Clearview AI’s Facial Recognition Accuracy Study “Absurd”
buzzfeednews.com · 2020

“Rather than searching for lawmakers against a database of arrest photos, Clearview apparently searched its own shadily-assembled database of photos,” Snow said. “Clearview claim[ed] that images of the lawmakers were present in the company'…

ACLU rejects Clearview AI's facial recognition accuracy claims
engadget.com · 2020

Clearview AI's facial recognition isn't just raising privacy issues -- there are also concerns over its accuracy claims. The ACLU has rejected Clearview's assertion that its technology is "100% accurate" based on the civil liberty group's m…

Clearview AI uses your online photos to instantly ID you. That's a problem, lawsuit says
latimes.com · 2021

Clearview AI has amassed a database of more than 3 billion photos of individuals by scraping sites such as Facebook, Twitter, Google and Venmo. It’s bigger than any other known facial-recognition database in the U.S., including the FBI’s. T…

Clearview AI sued in California by immigrant rights groups, activists
edition.cnn.com · 2021

(CNN Business) — Clearview AI, the controversial firm behind facial-recognition software used by law enforcement, is being sued in California by two immigrants’ rights groups to stop the company’s surveillance technology from proliferating …

Activists and Rights Groups Sue Clearview AI, Warning 'We Won't Be Safe' Until Facial Recognition Firm Is Gone
commondreams.org · 2021

A group of civil liberties advocates and immigrant rights organizations on Tuesday sued Clearview AI in a Northern California court, alleging that the controversial facial recognition company illegally "scraped," or obtained, photos for its…

Immigration Activists Want Clearview AI and Its Facial Recognition Tech Banned in California
legalreader.com · 2021

Immigration activists have filed a lawsuit against Clearview AI, saying the company’s software is still being used by law enforcement even though several California cities have banned the use of facial recognition technologies.

CNN reports …

Facial recog firm Clearview hit with complaints in France, Austria, Italy, Greece and the UK
theregister.com · 2021

Updated Data rights groups have filed complaints in the UK, France, Austria, Greece and Italy against Clearview AI, claiming its scraped and searchable database of biometric profiles breaches both the EU and UK General Data Protection Regul…

French regulator fines US face recognition firm Clearview AI €20 million
lemonde.fr · 2022

France's privacy watchdog slapped a €20 million fine on US firm Clearview AI on Thursday, October 20, for breaching privacy laws, as pressure mounts on the controversial facial-recognition platform.

The firm collects images of faces from we…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.