AI Incident Database

Report 3971

Associated Incidents

Incident 738 · 5 Reports
Department for Work and Pensions (DWP) Algorithm Wrongly Flags 200,000 for Housing Benefit Fraud

DWP wrongly suspects hundreds of thousands of benefits claimants of fraud
walesonline.co.uk · 2024

More than 200,000 people have been wrongly investigated for housing benefit fraud and error. Over the last three years, two-thirds of the claims flagged as potentially high risk by a Department for Work and Pensions (DWP) automated system were actually legitimate.

The flaw was revealed by official figures released under freedom of information laws, obtained by Big Brother Watch, a civil liberties and privacy campaign group. They show how thousands of UK households have had their housing benefit claims unnecessarily investigated each month because a faulty algorithmic judgment wrongly identified their claims as high risk, The Guardian reported.

As a result, around £4.4m was spent on officials carrying out checks, which did not save any money. Turn2us, a charity that supports people who rely on benefits, said the figures showed it was time for the government to "work closely with actual users so that automation works for people rather than against them".

According to The Guardian, the risk that a claim could be wrong or fraudulent is determined by the claimant's personal characteristics including age, gender, number of children and tenancy agreement. Once a potentially fraudulent claim is flagged by the automated tool - which does not use artificial intelligence - council staff must review and validate whether the details are correct, which involves seeking evidence from claimants.
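The pipeline the article describes is a rule-based scorer (explicitly not AI): claims are scored on characteristics such as age, gender, number of children and tenancy, high scorers are flagged, and council staff then verify each flag manually. As an illustration only — the DWP's actual rules, weights and thresholds are not public, so every rule below is hypothetical — a minimal sketch of such a pipeline, and of how its false-positive rate is measured, might look like:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    age: int
    num_children: int
    tenancy: str          # e.g. "private" or "social"
    is_fraudulent: bool   # ground truth, known only after manual investigation

# Hypothetical scoring rules -- the real DWP criteria are not public.
def risk_score(c: Claim) -> int:
    score = 0
    if c.age < 25:
        score += 2
    if c.num_children >= 3:
        score += 1
    if c.tenancy == "private":
        score += 2
    return score

HIGH_RISK_THRESHOLD = 3  # hypothetical cut-off

def flag_high_risk(claims: list[Claim]) -> list[Claim]:
    """Claims at or above the threshold are sent for manual review."""
    return [c for c in claims if risk_score(c) >= HIGH_RISK_THRESHOLD]

# Toy data: of three flagged claims, two turn out to be legitimate --
# mirroring the two-thirds false-positive rate the figures revealed.
claims = [
    Claim(age=22, num_children=0, tenancy="private", is_fraudulent=False),
    Claim(age=40, num_children=3, tenancy="private", is_fraudulent=False),
    Claim(age=23, num_children=4, tenancy="private", is_fraudulent=True),
    Claim(age=50, num_children=1, tenancy="social", is_fraudulent=False),
]

flagged = flag_high_risk(claims)
false_positives = [c for c in flagged if not c.is_fraudulent]
print(f"flagged: {len(flagged)}, false positives: {len(false_positives)}")
# prints "flagged: 3, false positives: 2"
```

The sketch shows why such a system can be costly without saving money: every flag, right or wrong, triggers a manual check, so a high false-positive rate converts directly into wasted investigation effort.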

Susannah Copson, a legal and policy officer at Big Brother Watch, said: "This is yet another example of DWP focusing on the prospect of algorithm-led fraud detection that seriously underperforms in practice. In reality, DWP's overreliance on new technologies puts the rights of people who are often already disadvantaged, marginalised and vulnerable in the backseat."

She warned of "a real danger that DWP repeats this pattern of bold claims and poor performance with future data-grabbing tools". The DWP told The Guardian it was unable to comment during the pre-election period. Labour, which could be in charge of the system in less than two weeks' time, was also approached for comment.

Read the Source

2024 - AI Incident Database