AI Incident Database

Report 3410

Associated Incidents

Incident 611 · 13 Reports
UK Government AI Allegedly Targets Disproportionate Numbers of Certain Nationals for Fraud Review

DWP Widening Use of Anti-fraud Tech Amid Concerns From Campaign Groups
digit.fyi · 2023

In a recent National Audit Office (NAO) report concerning the Department for Work and Pensions (DWP) financial accounts, it has come to light that the DWP is expanding its use of machine learning for identifying potential benefits fraud.

Since 2021, the DWP has used a machine learning model to flag potentially fraudulent claims for Universal Credit (UC) advances. The model was created by training an algorithm on fraud referrals and historic claimant data, and it makes predictions about which new benefits claims could be fraudulent or contain errors.

When a claim scores above a certain threshold, it gets referred to a caseworker for review. The caseworker then performs a manual review of the claim.
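The flagging step described above can be sketched in general terms. Everything in this sketch is an illustrative assumption: the field names, the scoring function, and the threshold value are not drawn from the DWP's actual system, whose details are not public.

```python
# Hypothetical sketch of threshold-based flagging: a model assigns each
# claim a risk score, and claims scoring above the threshold are referred
# to a caseworker for manual review. All names and values are illustrative.

RISK_THRESHOLD = 0.8  # assumed cut-off; the real threshold is not disclosed

def flag_for_review(claims, score_fn, threshold=RISK_THRESHOLD):
    """Return the subset of claims whose risk score exceeds the threshold."""
    return [claim for claim in claims if score_fn(claim) > threshold]

# Toy data: score_fn stands in for the trained model's prediction.
claims = [
    {"id": 1, "risk": 0.95},
    {"id": 2, "risk": 0.30},
]
referred = flag_for_review(claims, score_fn=lambda c: c["risk"])
```

Here only the first claim would be referred for manual review; the rest proceed without a caseworker check, which is why the report's concern about bias in the scores themselves matters.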

Responding in 2021, Big Brother Watch, the civil liberties and privacy campaigning organisation, said that “Leaving a computer to decide whose benefits application needs to be reviewed is an invasion of privacy and opens the door for unfairness and discrimination in the welfare system.”

The NAO report published last Thursday highlighted that the DWP is set to invest around £70 million between the 2022-23 and 2024-25 financial years into “advanced analytics” in a move to deepen its anti-fraud technological capabilities.

Further, it underscored that since last year, similar machine learning models have been designed and piloted to prevent fraud in four “key” risk areas of Universal Credit — people living together, self-employment, capital, and housing.

Despite the DWP expecting “advanced analytics” to help it generate savings of £1.6 billion by 2031, the report states that there is an “inherent risk” that the algorithms which flag benefits claims for review could be biased “due to unforeseen bias in the input data or the design of the model itself.”

While the report says that the DWP has “tight governance and control” of its machine learning and has put safeguards in place, the DWP’s ability to test for unfair impacts across protected characteristics is “currently limited.” This is due to claimants not always answering the optional demographics-focused questions when making a benefits claim.

Alison Garnham, Chief Executive of the Child Poverty Action Group, told the BBC that “Expanding the technology while ignoring calls for transparency and rigorous monitoring of and protections against bias will risk serious harm to vulnerable families.”

Despite what it calls the “challenge in balancing transparency over how it uses machine learning to provide public confidence in the benefit system with protecting its capabilities by not tipping off fraudsters about how it tackles fraud,” the report suggests that the DWP “should be able to provide assurance that it is not unfairly treating any group of customers.”
