AI Incident Database

Report 3971

Associated Incidents

Incident 738 · 5 Reports
Department for Work and Pensions (DWP) Algorithm Wrongly Flags 200,000 for Housing Benefit Fraud

DWP wrongly suspects hundreds of thousands of benefits claimants of fraud
walesonline.co.uk · 2024

More than 200,000 people have been wrongly investigated for housing benefit fraud and error. Over the last three years, two-thirds of claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system were in fact legitimate.

The flaw was revealed by official figures released under freedom of information laws and obtained by Big Brother Watch, a civil liberties and privacy campaign group. They show how thousands of UK households have had their housing benefit claims unnecessarily investigated each month because a faulty algorithmic judgment wrongly identified their claims as high risk, The Guardian reported.

As a result, around £4.4m was spent on officials carrying out checks, which did not save any money. Turn2us, a charity that supports people who rely on benefits, said the figures showed it was time for the government to "work closely with actual users so that automation works for people rather than against them".

According to The Guardian, the risk that a claim could be wrong or fraudulent is determined by the claimant's personal characteristics, including age, gender, number of children and tenancy agreement. Once a potentially fraudulent claim is flagged by the automated tool (which does not use artificial intelligence), council staff must review and validate whether the details are correct, which involves seeking evidence from claimants.
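The kind of non-AI tool described above is typically a fixed set of scoring rules over claimant attributes. The sketch below is purely illustrative: every field name, rule weight, and threshold is an invented assumption, not the DWP's actual criteria. It only shows the general shape of a rules-based scorer whose flagged claims then go to staff for manual evidence checks.

```python
# Illustrative sketch of a rules-based (non-AI) risk scorer for benefit
# claims. All rules, weights, and the threshold are invented assumptions
# for exposition; they are NOT the DWP's real criteria.
from dataclasses import dataclass


@dataclass
class Claim:
    age: int
    gender: str          # listed among the characteristics in the article;
                         # no rule is invented for it here
    num_children: int
    tenancy_type: str    # e.g. "private" or "social" (hypothetical values)


def risk_score(claim: Claim) -> int:
    """Sum hypothetical rule weights; a higher score means 'riskier'."""
    score = 0
    if claim.tenancy_type == "private":
        score += 2
    if claim.num_children >= 3:
        score += 1
    if claim.age < 25:
        score += 1
    return score


def flag_for_review(claim: Claim, threshold: int = 3) -> bool:
    # Claims at or above the threshold would be sent to council staff
    # for manual review and evidence-gathering from the claimant.
    return risk_score(claim) >= threshold
```

A high false-positive rate like the two-thirds figure reported here arises when such fixed rules correlate poorly with actual fraud, so most flagged claims survive the manual check.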

Susannah Copson, a legal and policy officer at Big Brother Watch, said: "This is yet another example of DWP focusing on the prospect of algorithm-led fraud detection that seriously underperforms in practice. In reality, DWP's overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat."

She warned of "a real danger that DWP repeats this pattern of bold claims and poor performance with future data-grabbing tools". The DWP told The Guardian it was unable to comment during the pre-election period. Labour, which could be in charge of the system in less than two weeks' time, was also approached for comment.
