Citation record for Incident 267

Suggested citation format

Lam, Khoa. (2017-06-15) Incident Number 267. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 267
Report Count: 4
Incident Date: 2017-06-15
Editors: Khoa Lam

Incident Reports

Until recently, Hoan Ton-That’s greatest hits included an obscure iPhone game and an app that let people put Donald Trump’s distinctive yellow hair on their own photos.

Then Mr. Ton-That — an Australian techie and onetime model — did something momentous: He invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.

While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.

The company eventually started answering my questions, saying that its earlier silence was typical of an early-stage start-up in stealth mode. Mr. Ton-That acknowledged designing a prototype for use with augmented-reality glasses but said the company had no plans to release it. And he said my photo had rung alarm bells because the app “flags possible anomalous search behavior” in order to prevent users from conducting what it deemed “inappropriate searches.”

In addition to Mr. Ton-That, Clearview was founded by Richard Schwartz — who was an aide to Rudolph W. Giuliani when he was mayor of New York — and backed financially by Peter Thiel, a venture capitalist behind Facebook and Palantir.

Another early investor is a small firm called Kirenaga Partners. Its founder, David Scalzo, dismissed concerns about Clearview making the internet searchable by face, saying it’s a valuable crime-solving tool.

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy,” Mr. Scalzo said. “Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

Addicted to A.I.

Mr. Ton-That, 31, grew up a long way from Silicon Valley. In his native Australia, he was raised on tales of his royal ancestors in Vietnam. In 2007, he dropped out of college and moved to San Francisco. The iPhone had just arrived, and his goal was to get in early on what he expected would be a vibrant market for social media apps. But his early ventures never gained real traction.

In 2009, Mr. Ton-That created a site that let people share links to videos with all the contacts in their instant messengers. Mr. Ton-That shut it down after it was branded a “phishing scam.” In 2015, he spun up Trump Hair, which added Mr. Trump’s distinctive coif to people in a photo, and a photo-sharing program. Both fizzled.

Dispirited, Mr. Ton-That moved to New York in 2016. Tall and slender, with long black hair, he considered a modeling career, he said, but after one shoot he returned to trying to figure out the next big thing in tech. He started reading academic papers on artificial intelligence, image recognition and machine learning.

Mr. Schwartz and Mr. Ton-That met in 2016 at a book event at the Manhattan Institute, a conservative think tank. Mr. Schwartz, now 61, had amassed an impressive Rolodex working for Mr. Giuliani in the 1990s and serving as the editorial page editor of The New York Daily News in the early 2000s. The two soon decided to go into the facial recognition business together: Mr. Ton-That would build the app, and Mr. Schwartz would use his contacts to drum up commercial interest.

Police departments have had access to facial recognition tools for almost 20 years, but they have historically been limited to searching government-provided images, such as mug shots and driver’s license photos. In recent years, facial recognition algorithms have improved in accuracy, and companies like Amazon offer products that can create a facial recognition program for any database of images.

Mr. Ton-That wanted to go way beyond that. He began in 2016 by recruiting a couple of engineers. One helped design a program that can automatically collect images of people’s faces from across the internet, such as employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter, Instagram and even Venmo. Representatives of those companies said their policies prohibit such scraping, and Twitter said it explicitly banned use of its data for facial recognition.

Another engineer was hired to perfect a facial recognition algorithm that was derived from academic papers. The result: a system that uses what Mr. Ton-That described as a “state-of-the-art neural net” to convert all the images into mathematical formulas, or vectors, based on facial geometry — like how far apart a person’s eyes are. Clearview created a vast directory that clustered all the photos with similar vectors into “neighborhoods.” When a user uploads a photo of a face into Clearview’s system, it converts the face into a vector and then shows all the scraped photos stored in that vector’s neighborhood — along with the links to the sites from which those images came.
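In effect, the pipeline described above is a nearest-neighbor search over face embeddings. The following is a minimal sketch of that idea in plain NumPy; embed_face() is a hypothetical stand-in for the neural net, the centroid-bucket "neighborhoods" are a simplification of real approximate-nearest-neighbor indexes, and none of these names come from Clearview itself.

import numpy as np

# Illustrative sketch of a "vector neighborhood" face index, assuming
# unit-length embeddings. embed_face() is a placeholder, not a real model.

def embed_face(image: np.ndarray) -> np.ndarray:
    """Map a face image to a unit-length 128-d vector (placeholder logic)."""
    v = np.resize(image.astype(np.float64).ravel(), 128)
    return v / (np.linalg.norm(v) + 1e-12)

class FaceIndex:
    def __init__(self, centroids: np.ndarray):
        # One centroid per "neighborhood"; centroids assumed unit-length.
        self.centroids = centroids
        self.buckets = {i: [] for i in range(len(centroids))}

    def _neighborhood(self, vec: np.ndarray) -> int:
        # With unit vectors, cosine similarity is just a dot product.
        return int(np.argmax(self.centroids @ vec))

    def add(self, image: np.ndarray, source_url: str) -> None:
        vec = embed_face(image)
        self.buckets[self._neighborhood(vec)].append((vec, source_url))

    def query(self, image: np.ndarray, k: int = 10):
        # Search only the query's neighborhood, then rank by similarity.
        vec = embed_face(image)
        hits = self.buckets[self._neighborhood(vec)]
        ranked = sorted(hits, key=lambda h: -float(h[0] @ vec))
        return [(url, float(v @ vec)) for v, url in ranked[:k]]

A production system would swap the single-centroid bucketing for a proper approximate-nearest-neighbor index, but the shape of the lookup matches the description: embed the face, find its neighborhood, rank by similarity, return the source links.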

Mr. Schwartz paid for server costs and basic expenses, but the operation was bare bones; everyone worked from home. “I was living on credit card debt,” Mr. Ton-That said. “Plus, I was a Bitcoin believer, so I had some of those.”

Going Viral With Law Enforcement

By the end of 2017, the company had a formidable facial recognition tool, which it called Smartcheckr. But Mr. Schwartz and Mr. Ton-That weren’t sure whom they were going to sell it to.

Maybe it could be used to vet babysitters or as an add-on feature for surveillance cameras. What about a tool for security guards in the lobbies of buildings or to help hotels greet guests by name? “We thought of every idea,” Mr. Ton-That said.

One of the odder pitches, in late 2017, was to Paul Nehlen — an anti-Semite and self-described “pro-white” Republican running for Congress in Wisconsin — to use “unconventional databases” for “extreme opposition research,” according to a document provided to Mr. Nehlen and later posted online. Mr. Ton-That said the company never actually offered such services.

The company soon changed its name to Clearview AI and began marketing to law enforcement. That was when the company got its first round of funding from outside investors: Mr. Thiel and Kirenaga Partners. Among other things, Mr. Thiel was famous for secretly financing Hulk Hogan’s lawsuit that bankrupted the popular website Gawker. Both Mr. Thiel and Mr. Ton-That had been the subject of negative articles by Gawker.

“In 2017, Peter gave a talented young founder $200,000, which two years later converted to equity in Clearview AI,” said Jeremiah Hall, Mr. Thiel’s spokesman. “That was Peter’s only contribution; he is not involved in the company.”

Even after a second funding round in 2019, Clearview remains tiny, having raised $7 million from investors, according to PitchBook, a website that tracks investments in start-ups. The company declined to confirm the amount.

In February, the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app.

They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. “He did not have a driver’s license and hadn’t been arrested as an adult, so he wasn’t in government databases,” said Chuck Cohen, an Indiana State Police captain at the time.

The man was arrested and charged; Mr. Cohen said he probably wouldn’t have been identified without the ability to search social media for his face. The Indiana State Police became Clearview’s first paying customer, according to the company. (The police declined to comment beyond saying that they tested Clearview’s app.)

Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That. (“I’m thrilled to have the opportunity to help Hoan build Clearview into a mission-driven organization that’s helping law enforcement protect children and enhance the safety of communities across the country,” Mr. Schwartz said through a spokeswoman.)

The company’s main contact for customers was Jessica Medeiros Garrison, who managed Luther Strange’s Republican campaign for Alabama attorney general. Brandon Fricke, an N.F.L. agent engaged to the Fox Nation host Tomi Lahren, said in a financial disclosure report during a congressional campaign in California that he was a “growth consultant” for the company. (Clearview said that it was a brief, unpaid role, and that the company had enlisted Democrats to help market its product as well.)

The company’s most effective sales technique was offering 30-day free trials to officers, who then encouraged their acquisition departments to sign up and praised the tool to officers from other police departments at conferences and online, according to the company and documents provided by police departments in response to public-record requests. Mr. Ton-That finally had his viral hit.

In July, a detective in Clifton, N.J., urged his captain in an email to buy the software because it was “able to identify a suspect in a matter of seconds.” During the department’s free trial, Clearview had identified shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.

Photos “could be covertly taken with telephoto lens and input into the software, without ‘burning’ the surveillance operation,” the detective wrote in the email, provided to The Times by two researchers, Beryl Lipton of MuckRock and Freddy Martinez of Open the Government. They discovered Clearview late last year while looking into how local police departments are using facial recognition.

According to a Clearview sales presentation reviewed by The Times, the app helped identify a range of individuals: a person who was accused of sexually abusing a child whose face appeared in the mirror of someone else’s gym photo; the person behind a string of mailbox thefts in Atlanta; a John Doe found dead on an Alabama sidewalk; and suspects in multiple identity-fraud cases at banks.

In Gainesville, Fla., Detective Sgt. Nick Ferrara heard about Clearview last summer when it advertised on CrimeDex, a listserv for investigators who specialize in financial crimes. He said he had previously relied solely on a state-provided facial recognition tool, FACES, which draws from more than 30 million Florida mug shots and Department of Motor Vehicle photos.

Sergeant Ferrara found Clearview’s app superior, he said. Its nationwide database of images is much larger, and unlike FACES, Clearview’s algorithm doesn’t require photos of people looking straight at the camera.

“With Clearview, you can use photos that aren’t perfect,” Sergeant Ferrara said. “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”

He uploaded his own photo to the system, and it brought up his Venmo page. He ran photos from old, dead-end cases and identified more than 30 suspects. In September, the Gainesville Police Department paid $10,000 for an annual Clearview license.

Federal law enforcement agencies, including the F.B.I. and the Department of Homeland Security, are trying it, as are Canadian law enforcement authorities, according to the company and government officials.

Despite its growing popularity, Clearview avoided public mention until the end of 2019, when Florida prosecutors charged a woman with grand theft after two grills and a vacuum were stolen from an Ace Hardware store in Clermont. She was identified when the police ran a still from a surveillance video through Clearview, which led them to her Facebook page. A tattoo visible in the surveillance video and Facebook photos confirmed her identity, according to an affidavit in the case.

‘We’re All Screwed’

Mr. Ton-That said the tool does not always work. Most of the photos in Clearview’s database are taken at eye level. Much of the material that the police upload is from surveillance cameras mounted on ceilings or high on walls.

“They put surveillance cameras too high,” Mr. Ton-That lamented. “The angle is wrong for good face recognition.”

Despite that, the company said, its tool finds matches up to 75 percent of the time. But it is unclear how often the tool delivers false matches, because it has not been tested by an independent party such as the National Institute of Standards and Technology, a federal agency that rates the performance of facial recognition algorithms.

“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
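Garvie’s scaling point can be made concrete with a back-of-the-envelope calculation: at a fixed per-comparison false match rate, the expected number of look-alike hits grows linearly with gallery size. The rate below is an assumed, illustrative figure, not a measured number for any product.

# Illustrative only: expected false matches per search grow linearly
# with gallery size at a fixed per-comparison false match rate (FMR).
fmr = 1e-6  # assumed FMR for illustration; not a measured figure
for name, n in [("FACES-scale gallery (~30M photos)", 30_000_000),
                ("Clearview-scale gallery (~3B photos)", 3_000_000_000)]:
    print(f"{name}: ~{fmr * n:,.0f} expected false matches per search")

At the same assumed error rate, a 3-billion-image gallery yields a hundred times more expected doppelgänger hits per search than a 30-million-image one.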

But current and former law enforcement officials say the app is effective. “For us, the testing was whether it worked or not,” said Mr. Cohen, the former Indiana State Police captain.

One reason that Clearview is catching on is that its service is unique. That’s because Facebook and other social media sites prohibit people from scraping users’ images — Clearview is violating the sites’ terms of service.

“A lot of people are doing it,” Mr. Ton-That shrugged. “Facebook knows.”

Jay Nancarrow, a Facebook spokesman, said the company was reviewing the situation with Clearview and “will take appropriate action if we find they are violating our rules.”

Mr. Thiel, the Clearview investor, sits on Facebook’s board. Mr. Nancarrow declined to comment on Mr. Thiel’s personal investments.

Some law enforcement officials said they didn’t realize the photos they uploaded were being sent to and stored on Clearview’s servers. Clearview tries to pre-empt concerns with an F.A.Q. document given to would-be clients that says its customer-support employees won’t look at the photos that the police upload.

Clearview also hired Paul D. Clement, a United States solicitor general under President George W. Bush, to assuage concerns about the app’s legality.

In an August memo that Clearview provided to potential customers, including the Atlanta Police Department and the Pinellas County Sheriff’s Office in Florida, Mr. Clement said law enforcement agencies “do not violate the federal Constitution or relevant existing state biometric and privacy laws when using Clearview for its intended purpose.”

Mr. Clement, now a partner at Kirkland & Ellis, wrote that the authorities don’t have to tell defendants that they were identified via Clearview, as long as it isn’t the sole basis for getting a warrant to arrest them. Mr. Clement did not respond to multiple requests for comment.

The memo appeared to be effective; the Atlanta police and Pinellas County Sheriff’s Office soon started using Clearview.

Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”

“It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math,” said Al Gidari, a privacy professor at Stanford Law School. “Absent a very strong federal privacy law, we’re all screwed.”

Mr. Ton-That said his company used only publicly available images. If you change a privacy setting in Facebook so that search engines can’t link to your profile, your Facebook photos won’t be included in the database, he said.

But if your profile has already been scraped, it is too late. The company keeps all the images it has scraped even if they are later deleted or taken down, though Mr. Ton-That said the company was working on a tool that would let people request that images be removed if they had been taken down from the website of origin.

Woodrow Hartzog, a professor of law and computer science at Northeastern University in Boston, sees Clearview as the latest proof that facial recognition should be banned in the United States.

“We’ve relied on industry efforts to self-police and not embrace such a risky technology, but now those dams are breaking because there is so much money on the table,” Mr. Hartzog said. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”

Where Everybody Knows Your Name

During a recent interview at Clearview’s offices in a WeWork location in Manhattan’s Chelsea neighborhood, Mr. Ton-That demonstrated the app on himself. He took a selfie and uploaded it. The app pulled up 23 photos of him. In one, he is shirtless and lighting a cigarette while covered in what looks like blood.

Mr. Ton-That then took my photo with the app. The “software bug” had been fixed, and now my photo returned numerous results, dating back a decade, including photos of myself that I had never seen before. When I used my hand to cover my nose and the bottom of my face, the app still returned seven correct matches for me.

Police officers and Clearview’s investors predict that its app will eventually be available to the public.

Mr. Ton-That said he was reluctant. “There’s always going to be a community of bad people who will misuse it,” he said.

Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.

“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

The Secretive Company That Might End Privacy as We Know It

The internet was designed to make information free and easy for anyone to access. But as the amount of personal information online has grown, so too have the risks. Last weekend, a nightmare scenario for many privacy advocates arrived. The New York Times revealed that Clearview AI, a secretive surveillance company, was selling a facial recognition tool to law enforcement powered by “three billion images” culled from the open web. Cops have long had access to similar technology, but what makes Clearview different is where it obtained its data. The company scraped pictures from millions of public sites including Facebook, YouTube, and Venmo, according to the Times.

To use the tool, cops simply upload an image of a suspect, and Clearview spits back photos of them and links to where they were posted. The company has made it easy to instantly connect a person to their online footprint—the very capability many people have long feared someone would possess. (Clearview’s claims should be taken with a grain of salt; a BuzzFeed News investigation found its marketing materials appear to contain exaggerations and lies. The company did not immediately return a request for comment.)

Like almost any tool, scraping can be used for noble or nefarious purposes. Without it, we wouldn’t have the Internet Archive’s invaluable Wayback Machine, for instance. But it’s also how Stanford researchers a few years ago built a widely condemned “gaydar,” an algorithm they claimed could detect a person’s sexuality by looking at their face. “It’s a fundamental thing that we rely on every day, a lot of people without realizing, because it’s going on behind the scenes,” says Jamie Lee Williams, a staff attorney on the Electronic Frontier Foundation’s civil liberties team. The EFF and other digital rights groups have often argued the benefits of scraping outweigh the harms.

Automated scraping violates the policies of sites like Facebook and Twitter, the latter of which specifically prohibits scraping to build facial recognition databases. Twitter sent a letter to Clearview this week asking it to stop pilfering data from the site “for any reason,” and Facebook is also reportedly examining the matter, according to the Times. But it’s unclear whether they have any legal recourse in the current system.

To fight back against scraping, companies have often used the Computer Fraud and Abuse Act, claiming the practice amounts to accessing a computer without proper authorization. Last year, however, the Ninth Circuit Court of Appeals ruled that automated scraping of publicly accessible data doesn’t violate the CFAA. In that case, LinkedIn fought and lost against a company called HiQ, which scraped public LinkedIn profiles in bulk and combined them with other information into a database for employers. The EFF and other groups heralded the ruling as a victory, because it limited the scope of the CFAA—which they argue has frequently been abused by companies—and helped protect researchers who break terms of service agreements in the name of freedom of information.

The CFAA is one of the few options available to companies that want to stop scrapers, which is part of the problem. “It’s a 1986, pre-internet statute,” says Williams. “If that’s the best we can do to protect our privacy with these very complicated, very modern problems, then I think we’re screwed.”

Civil liberties groups and technology companies both have been calling for a federal law that would establish Americans’ right to privacy in the digital era. Clearview, and companies like it, make the matter that much more urgent. “We need a comprehensive privacy statute that covers biometric data,” says Williams.

Right now, there’s only a patchwork of state regulations that potentially provide those kinds of protections. The California Consumer Privacy Act, which went into effect this month, gives state residents the right to ask companies like Clearview to delete the data they collect about them. Other regulations, like the Illinois Biometric Information Privacy Act, require corporations to obtain consent before collecting biometric data, including faces. A class action lawsuit filed earlier this week accuses Clearview of violating that law. Texas and Washington have similar regulations on the books, but don’t allow for private lawsuits; California’s law also doesn’t allow for a private right of action.

Some experts argue that empowering consumers is not enough. “We just can’t be expected to manage every use of our data online,” says Dylan Gilbert, a privacy lawyer at the civil liberties group Public Knowledge. He argues the solution instead is to make some uses of personal data illegal. For example, some cities, including San Francisco, have banned facial recognition by city agencies altogether.

Another option is to give some power to organizations, rather than only individuals. “Companies and platforms like LinkedIn or Facebook or Twitter should have the right to protect their users’ privacy downstream,” says Tiffany C. Li, a technology lawyer and visiting professor at Boston University School of Law. A federal law could allow online platforms to sue entities like Clearview on behalf of their users to protect their right to privacy. The risk, though, is that corporations will pursue litigation that mostly serves their own interests. A 2018 article in Boston University’s Journal of Science & Technology Law found that in 20 years of scraping cases based on the CFAA, “a tremendous number” concerned claims brought “by direct commercial competitors or companies in closely adjacent markets to each other.”

In the absence of legal recourse, one way companies have blocked people from scraping their sites is by using technical tools. Facebook has been particularly aggressive in this regard. It requires users to sign in to view almost anything on its site, and it uses a lengthy robots.txt file to stop Google from indexing many of its pages. That’s why if you Google your name, all of your Facebook activity likely isn’t in the search results. But not all of the social network’s efforts have been popular. Last year, the company blocked third-party transparency tools used by nonprofits and journalists, because it said it needed to prevent malicious actors from scraping its site.
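Worth noting: robots.txt is purely advisory. A compliant crawler such as Googlebot checks the file before fetching, as in the short sketch below (the profile URL is illustrative); a scraper that simply ignores the file faces no technical barrier from it.

from urllib import robotparser

# How a *compliant* crawler honors robots.txt. Scrapers that ignore
# the file are unaffected, which is why robots.txt alone can't stop them.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.facebook.com/robots.txt")
rp.read()  # fetch and parse the live file
print(rp.can_fetch("Googlebot", "https://www.facebook.com/example-profile"))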

Not all companies have the resources, or priorities, to create those kinds of barriers against any would-be scrapers. Venmo, the payment app owned by PayPal, has repeatedly been criticized for making all transactions public by default. Several researchers and artists have scraped millions of payments from Venmo to demonstrate how it puts people’s privacy at risk. Clearview says it also mined the site for its database. “Scraping Venmo is a violation of our terms of service and we actively work to limit and block activity that violates these policies,” a spokesperson said in a statement. While the app, and others like it, could do more to protect users, catching malicious scraping will always be an evolving cat-and-mouse game, and regulatory action could be more effective to stop it.

“We don’t want to limit access to information and we don’t want to ban web scraping,” Li says. “But we need to think about other ways to prevent some of the privacy harms we saw with Clearview.”

Scraping the Web Is a Powerful Tool. Clearview AI Abused It

“Rather than searching for lawmakers against a database of arrest photos, Clearview apparently searched its own shadily-assembled database of photos,” Snow said. “Clearview claim[ed] that images of the lawmakers were present in the company’s massive repository of face scans. But what happens when police search for a person whose photo isn’t in the database? How often will the system return a false match? Are the rates of error worse for people of color?”

It’s also atypical for an officer investigating a case to have a clear headshot of a suspect, like the ones Clearview presumably used as inputs for the lawmakers. Clearview’s tool is intended to be used in real-world situations, where photo quality, lighting, and other factors can skew the process, and should be tested as such, said Snow.

“If Clearview is so confident about its technology, it should subject its product to rigorous independent testing in real-life settings,” he said. “And it should give the public the right to decide whether the government is permitted to use its product at all.”

Ton-That said that the independent panel’s test was already diligent and thorough. “The Clearview test ran the same photos as the ACLU did, but against a database that was over 100,000 times larger,” he said, noting that in addition to searching for 535 US congressional leaders, his company’s technology was tried on state legislators from Texas and California. “With that higher level of difficulty, Clearview scored 100% following the ACLU standard.”

Facial recognition researchers expressed serious doubts about Clearview’s report. While the ACLU study was effective in demonstrating a facial recognition software’s deficiencies, it is by no means a sufficient methodology for definitively assessing the accuracy of a commercial tool like Clearview AI, Liz O’Sullivan, the technology director at the Surveillance Technology Oversight Project, told BuzzFeed News. O’Sullivan also questioned the panel’s claim that Clearview’s tech is accurate for “all demographic groups,” given the study’s test group of 834 state and federal lawmakers is not representative of all people or ethnicities.

Clearview CEO Ton-That disagreed with this assessment. “The rigors of the test have covered every demographic group that is represented in the general population and have shown Clearview’s accuracy when searching out of billions of photos,” he said.

Facial recognition and privacy researcher Adam Harvey told BuzzFeed News that it is impossible to evaluate the accuracy of the Clearview study without more insight into how it was conducted. “This document does not provide sufficient information to validate their claim,” he said. “It appears that no one in the panel has any prior experience with face recognition.”

Clearview’s report was signed off by a three-person panel, which included Jonathan Lippman, chief judge of the New York Court of Appeals from 2009 to 2015; Dr. Nicholas Cassimatis, an artificial intelligence academic and entrepreneur; and Aaron Renn, an urban analyst and former senior fellow at the Manhattan Institute, a conservative think tank. The panel determined whether the two top-ranked matches from Clearview search results showed the same person in the original search.

“In October 2019, the undersigned Panel conducted an independent accuracy test of Clearview AI, a new image-matching technology that functions as an Internet search engine for faces,” the report read.

None of the panelists appear to have any expertise in facial recognition. Lippman told BuzzFeed News that he “was introduced to Clearview by Richard Schwartz,” one of the company’s cofounders whom he’s known since Schwartz’s time as editorial page editor of the New York Daily News. He said he was not paid for his work on the study.

“I assume I was approached because of my experience as a judge in government and criminal justice, and in looking at and weighing empirical evidence,” Lippman said.

Cassimatis told BuzzFeed News that he had worked in artificial intelligence for 20 years, and pointed to his work as a professor at Rensselaer Polytechnic Institute and as the former “head of Samsung’s North American AI research.” He said he met Ton-That through a mutual friend and was selected “for my expertise in this field.” Cassimatis said he was not paid to work on the study.

Renn did not reply to a request for comment. Previously, Ton-That has said he and Schwartz had met during an event at the Manhattan Institute, where Renn served as a senior fellow until last year.

Clearview’s “ACLU” study isn’t the first time the company has touted the accuracy of its technology without much in the way of supporting materials or peer review. Last summer, the company told the Atlanta Police Department in marketing materials that its technology was 98.6% accurate in a test of 1 million faces — an accuracy rate higher than that of tools created by Google and Chinese tech giant Tencent. However, that claim, which was made using the University of Washington’s MegaFace facial recognition benchmark, was never independently verified by the university or a third party, the company later told BuzzFeed News. Clearview declined a request to make the results of this test available for review.

Since October, however, the company seems to have moved away from marketing the MegaFace number and gone forward with the 100% accuracy rating from the ACLU-based test done by its three-person panel. After the first news stories were published about the company last month, the company added a new section to its once sparse website called “Clearview Facts,” where it said that “an independent panel of experts rated Clearview 100% accurate across all demographic groups according to the ACLU’s facial recognition accuracy methodology.”

That did not sit well with the ACLU, which eventually lodged a complaint. On Jan. 28, Clearview removed the civil rights group’s name from its site, though it still claims that “an independent panel of experts reviewed and certified Clearview for accuracy and reliability.”

That may not be enough for the ACLU and Snow, who said that any proof of accuracy was “beside the point.”

“Clearview's technology gives government the unprecedented power to spy on us wherever we go — tracking our faces at protests, [Alcoholics Anonymous] meetings, church, and more,” he said. “Accurate or not, Clearview's technology in law enforcement hands will end privacy as we know it.”

ACLU Called Clearview AI’s Facial Recognition Accuracy Study “Absurd”

Clearview AI's facial recognition isn't just raising privacy issues -- there are also concerns over its accuracy claims. The ACLU has rejected Clearview's assertion that its technology is "100% accurate" based on the civil liberty group's methodology for testing Amazon's Rekognition, telling BuzzFeed News that the findings are "absurd on many levels" and amounted to "manufacturing endorsements." According to the ACLU, there are key differences that make Clearview's sales pitch misleading.

For one, Clearview was searching nearly 3 billion public photos that included the politicians whose faces were part of the test, not arrest photos like the ACLU used. It didn't account for what would happen if someone wasn't in the database. Would it generate false positives, and would there be any bias in those false hits? On top of this, Clearview had the luxury of the clear, sometimes formal photos that often appear online. Its tool is supposed to be used in the real world, where lighting and image quality are frequently suboptimal -- it's not certain how well the facial recognition fares with grainy security camera footage.
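The gap between the two test designs can be shown in a few lines. In a closed-set test, every probe has an enrolled match, so ranking alone can look perfect; in an open-set search, a probe with no enrolled match still produces a top hit, which is a guaranteed false match unless the system enforces a score threshold. A minimal sketch with random stand-in embeddings:

import numpy as np

# Closed-set vs. open-set: a gallery search always returns *someone*.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))   # stand-in face embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

stranger = rng.normal(size=128)          # a probe that is NOT enrolled
stranger /= np.linalg.norm(stranger)

scores = gallery @ stranger
top = int(np.argmax(scores))
print(f"top hit: gallery id {top}, score {scores[top]:.2f}")
print("accepted as a 'match' unless a rejection threshold is enforced")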

The Surveillance Technology Oversight Project's Liz O'Sullivan also doubted Clearview's insistence that the accuracy applied to "all demographic groups," noting that 834 politicians wouldn't be representative of every ethnicity. Moreover, many of the people in the independent study panel didn't have direct proficiency with facial recognition, although one was the former head of Samsung's North American AI research.

Not surprisingly, Clearview chief Hoan Ton-That maintained that the results were acceptable. He insisted that Clearview used the ACLU's same methods, and that there was a "higher level of difficulty" as it used faces of politicians from California and Texas. He also argued that the test had looked at "every demographic group." Ton-That didn't really address the ACLU's criticisms, though, and Clearview eventually responded to an ACLU complaint by removing the group's name from the site. The company's accuracy hasn't been re-checked, then, and that's concerning when police across the US are relying on the technology to pinpoint suspects.

ACLU rejects Clearview AI's facial recognition accuracy claims