AI Incident Database

Issue 3971

Associated Incidents

Incident 738 · 5 Reports
Department for Work and Pensions (DWP) Algorithm Wrongly Flags 200,000 for Housing Benefit Fraud

DWP wrongly suspects hundreds of thousands of benefits claimants of fraud
walesonline.co.uk · 2024

More than 200,000 people have been wrongly investigated for housing benefit fraud and error. Over the last three years two-thirds of claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system were actually legitimate.
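The headline figures imply a rough scale for the system's output. A minimal sanity check of that arithmetic, assuming the article's two numbers (around 200,000 wrong investigations and a two-thirds false-positive share; the exact totals are not published in this report):

```python
# Assumptions from the article: ~200,000 claims wrongly investigated over
# three years, and two-thirds of flagged claims turned out to be legitimate.
wrongly_investigated = 200_000
false_positive_share = 2 / 3  # share of flagged claims that were legitimate

# Implied total number of claims flagged as high risk over the period
total_flagged = wrongly_investigated / false_positive_share
correctly_flagged = total_flagged - wrongly_investigated

print(f"Implied total flagged:     {total_flagged:,.0f}")
print(f"Implied correctly flagged: {correctly_flagged:,.0f}")
```

Under those assumptions, roughly 300,000 claims would have been flagged in total, with only about 100,000 of them correctly.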

The system flaw was revealed by official figures released under freedom of information laws, obtained by Big Brother Watch, a civil liberties and privacy campaign group. They show how thousands of UK households have had their housing benefit claims unnecessarily investigated each month because a faulty algorithmic judgment wrongly identified their claims as high risk, The Guardian reported.

As a result, around £4.4m was spent on officials carrying out checks that did not save any money. Turn2us, a charity that supports people who rely on benefits, said the figures showed it was time for the government to "work closely with actual users so that automation works for people rather than against them".

According to The Guardian, the risk that a claim could be wrong or fraudulent is determined by the claimant's personal characteristics including age, gender, number of children and tenancy agreement. Once a potentially fraudulent claim is flagged by the automated tool - which does not use artificial intelligence - council staff must review and validate whether the details are correct, which involves seeking evidence from claimants.
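The report does not describe the tool's internals, only that it is rule-based rather than AI and keys on claimant characteristics. As a purely hypothetical sketch of that kind of system (every class, threshold, and weight below is invented for illustration, not DWP's actual logic):

```python
from dataclasses import dataclass

@dataclass
class HousingBenefitClaim:
    # Characteristics the article says feed the risk judgment
    age: int
    gender: str
    num_children: int
    tenancy_type: str  # e.g. "private", "social", "informal"

def risk_score(claim: HousingBenefitClaim) -> int:
    """Hypothetical rule-based score; all rules here are invented."""
    score = 0
    if claim.age < 25:
        score += 1
    if claim.num_children >= 3:
        score += 1
    if claim.tenancy_type == "informal":
        score += 2
    return score

def flag_high_risk(claim: HousingBenefitClaim, threshold: int = 2) -> bool:
    # Flagged claims would then go to council staff for manual review
    # and evidence-gathering, as the article describes.
    return risk_score(claim) >= threshold

claim = HousingBenefitClaim(age=22, gender="F", num_children=3,
                            tenancy_type="private")
print(flag_high_risk(claim))  # flags on age + children under these invented rules
```

Fixed rules like these produce exactly the pattern reported when the chosen characteristics correlate poorly with actual fraud: the system keeps flagging legitimate claims, and every flag costs staff time to clear.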

Susannah Copson, a legal and policy officer at Big Brother Watch, said: "This is yet another example of DWP focusing on the prospect of algorithm-led fraud detection that seriously underperforms in practice. In reality, DWP's overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat."

She warned of "a real danger that DWP repeats this pattern of bold claims and poor performance with future data-grabbing tools". The DWP told The Guardian it was unable to comment during the pre-election period. Labour, which could be in charge of the system in less than two weeks' time, was also approached for comment.

Read the Source
