
Report 6742

Associated Incidents

Incident 1305 · 2 Reports
UK Facial Recognition System Reportedly Exhibits Higher False Positive Rates for Black and Asian Subjects

Facial recognition cameras ‘more likely to wrongly flag black and Asian people’
telegraph.co.uk · 2025

Facial recognition technology has been more likely to incorrectly flag black and Asian people as possible suspects, according to tests.

An investigation into how the technology works when used to search the police national database found it was "more likely to incorrectly include some demographic groups in its search results", the Home Office has now revealed.

Labour is currently preparing a nationwide roll-out of live facial recognition cameras, along with an expansion of the technology's use to scan government databases, including passport and immigration records, for crime suspects.

Sarah Jones, the policing minister, has described the technology as the biggest breakthrough since DNA matching.

There have been persistent concerns about the potential for "false positives" among ethnic minorities.

But the Home Office said that since the National Physical Laboratory (NPL) findings were shared, it had tested and purchased a new algorithm that had "no statistically significant" bias.

'Concerning inbuilt bias'

Analysts who examined the police national database's retrospective facial recognition technology tool at a lower setting found that "the false positive identification rate (FPIR) for white subjects (0.04 per cent) is lower than that for Asian subjects (4 per cent) and black subjects (5.5 per cent)".

The testing went on to find that the number of false positives for black women was particularly high. "The FPIR for black male subjects (0.4 per cent) is lower than that for black female subjects (9.9 per cent)," the report said.
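To make the reported rates concrete, the sketch below converts the quoted FPIR figures into expected false matches over a hypothetical batch of searches. It assumes, purely for illustration, that FPIR here means the share of searches of people who are not in the database that return at least one incorrect candidate; the 10,000-search batch size is arbitrary and not taken from the report.

```python
# Illustrative sketch only: assumes FPIR is the share of searches of subjects
# not in the database that return at least one false candidate, and uses an
# arbitrary batch of 10,000 searches for scale.
fpir_percent = {
    "white subjects": 0.04,
    "Asian subjects": 4.0,
    "black subjects": 5.5,
    "black male subjects": 0.4,
    "black female subjects": 9.9,
}

searches = 10_000  # hypothetical batch size, not from the report

for group, rate in fpir_percent.items():
    expected = searches * rate / 100
    print(f"{group}: ~{expected:,.0f} expected false matches per {searches:,} searches")
```

On those assumptions, the same batch of searches would produce roughly 4 false matches for white subjects but around 550 for black subjects, which is the disparity the commissioners describe.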

Police and crime commissioners said publication of the NPL's findings "sheds light on a concerning inbuilt bias" and urged caution over plans for a national expansion.

The Association of Police and Crime Commissioners said the bias "has meant that in some circumstances it is more likely to incorrectly match black and Asian people than their white counterparts".

It added: "The language is technical but behind the detail it seems clear that technology has been deployed into operational policing without adequate safeguards in place."

Officials say the technology is needed to help catch serious offenders. They say there are manual safeguards -- written into police training, operational practice and guidance -- that require all potential matches returned from the police national database to be visually assessed by a trained user and investigating officer.

'We have already taken action'

A Home Office spokesman said: "The Home Office takes the findings of the report seriously and we have already taken action. A new algorithm has been independently tested and procured, which has no statistically significant bias. It will be tested early next year and will be subject to evaluation.

"Given the importance of this issue, we have also asked the police inspectorate, alongside the forensic science regulator, to review law enforcement's use of facial recognition. They will assess the effectiveness of the mitigations, which the National Police Chiefs' Council supports."

The Met Police's annual report showed that, out of 3.1 million images, there were 10 false alerts in which the "match" proved to be the wrong person. Eight of those wrongly flagged were black or from other ethnic minorities, potentially reflecting lingering racial bias in the technology.

However, in four of those 10 cases the alerts were discarded by officers before the individual was approached. In the other six cases, the person was stopped by police and spoken to for under five minutes before no further action was taken. The Met points out that this amounts to an error rate of just 0.0003 per cent, or roughly one in 300,000.
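As a rough arithmetic check (illustrative only, and assuming the 10 false alerts are counted against the full 3.1 million images cited in the annual report), the quoted rate works out as follows:

```python
# Rough check of the quoted error rate, assuming the 10 false alerts are
# measured against the full 3.1 million images cited in the annual report.
images_scanned = 3_100_000
false_alerts = 10

error_rate_percent = false_alerts / images_scanned * 100
one_in_n = images_scanned / false_alerts

print(f"Error rate: {error_rate_percent:.4f} per cent")       # ~0.0003 per cent
print(f"Roughly one false alert per {one_in_n:,.0f} images")  # ~1 in 310,000
```

That gives about 0.0003 per cent, or one false alert per 310,000 images, consistent with the Met's "one in 300,000" figure.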
