AI Incident Database

Report 3401

Associated Incidents

Incident 611 · 13 Reports
UK Government AI Allegedly Targets Disproportionate Numbers of Certain Nationals for Fraud Review

AI employed for crucial decisions in over 8 Whitehall departments
interestingengineering.com · 2023

The use of artificial intelligence (AI) in public services has caused uproar before. In the Netherlands, for instance, tax authorities used an algorithmic system to detect fraud but made many erroneous accusations, pushing thousands of families into poverty; the authority was ultimately fined €3.7m.

A recent investigation by The Guardian revealed that government officials and civil servants in at least eight Whitehall departments, as well as some police forces in the United Kingdom, have used AI to make significant decisions about benefit allocations.

They used AI and complex algorithms to make decisions on welfare, immigration, and criminal justice matters. The tools are used to determine benefits, approve marriage licenses, identify potential fraud, and flag fake marriages, among other tasks.

AI racially discriminated

The Guardian found that particular tools, such as an algorithm used by the Department for Work and Pensions, had inaccurately cut benefits for many claimants. Furthermore, the Metropolitan Police's facial recognition software exhibited racial bias, performing better on white faces than on black ones under certain conditions.

The Home Office's algorithm designed to detect fake marriages has disproportionately targeted individuals from specific nationalities.

AI systems typically learn from large sets of existing data, and their creators may not fully grasp how they process that information. If the data a system learns from is biased, experts cautioned, its decisions are likely to be biased as well.

Shameem Ahmad, chief executive of the Public Law Project, acknowledged that AI has tremendous potential for social good.

"For instance, we can make things more efficient. But we cannot ignore the serious risks," Ahmad added. "Without urgent action, we could sleep-walk into a situation where opaque automated systems are regularly, possibly unlawfully, used in life-altering ways, and where people will not be able to seek redress when those processes go wrong."

Tech identifying fake marriages

The Home Office said that it employed AI in e-gates at airports for passport scanning, aiding passport applications, and in their "sham marriage triage tool" to identify potential fake marriages for further scrutiny.

However, The Guardian investigation found that the tool disproportionately flags up people from Albania, Greece, Romania, and Bulgaria.

Also, the Department of Work and Pensions (DWP) operates an "integrated risk and intelligence service" that employs an algorithm to identify fraud and errors in benefits claims. 

According to Labour MP Kate Osamor, this algorithm may have led to many Bulgarians having their benefits wrongly suspended after being falsely flagged for potential fraud in recent years.

The DWP emphasized that the algorithm doesn't consider nationality in its calculations. A spokesperson further stated: 

"We are cracking down on those who try to exploit the system and shamelessly steal from those most in need as we continue our drive to save the taxpayer £1.3bn next year."


2024 - AI Incident Database