AI Incident Database

Report 2092

Related Incidents

Incident 33 · 58 Reports
UK Visa Streamline Algorithm Allegedly Discriminated Based on Nationality

Home Office says it will abandon its racist visa algorithm - after we sued them
foxglove.org.uk · 2020

Home Office lawyers wrote to us yesterday to respond to the legal challenge we have been working on with the Joint Council for the Welfare of Immigrants (JCWI).

We were asking the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications.

Before the case could be heard, the Home Office caved in. They’ve agreed that from this Friday, August 7, they will get rid of the ‘streaming algorithm.’ 

Home Secretary Priti Patel has pledged a full review of the system, including for issues of ‘unconscious bias’ and discrimination.

This marks the end of a computer system which had been used for years to process every visa application to the UK. It’s great news, because the algorithm entrenched racism and bias into the visa system.

The Home Office kept a secret list of suspect nationalities automatically given a ‘red’ traffic-light risk score – people of these nationalities were likely to be denied a visa. It had got so bad that academic and nonprofit organisations told us they no longer even tried to have colleagues from certain countries visit the UK to work with them.

We also discovered that the algorithm suffered from “feedback loop” problems known to plague many such automated systems – where past bias and discrimination, fed into a computer program, reinforce future bias and discrimination. Researchers documented this issue with predictive policing systems in the US and we realised the same problem had crept in here.
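The feedback-loop dynamic described above can be made concrete with a small simulation. This is a hypothetical sketch, not a reconstruction of the Home Office system: the function name, the refusal policy, and all numbers are invented purely to illustrate how scoring applicants by their group's historical refusal rate, and then refusing more applicants from high-scoring groups, drives the rate upward on its own.

```python
def simulate_feedback(initial_refusal_rate, rounds=5, applications=100):
    """Illustrative feedback loop (hypothetical, not the real system).

    Each round, the 'risk score' for a group is simply its historical
    refusal rate. A biased policy then refuses applicants in proportion
    to that score (with a 20% margin), and those refusals feed back into
    the history, raising the score for the next round.
    """
    refusals = int(initial_refusal_rate * applications)
    total = applications
    history = []
    for _ in range(rounds):
        rate = refusals / total                      # score = past refusal rate
        refused_now = int(applications * min(1.0, rate * 1.2))
        refusals += refused_now                      # refusals feed the history
        total += applications
        history.append(round(refusals / total, 3))
    return history
```

Starting from a 50% historical refusal rate, the simulated rate climbs round after round with no change in the applicants themselves; the bias in the training history is enough to compound it, which is the same mechanism researchers documented in US predictive policing systems.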

It’s also great news because this was the first successful judicial review of a UK government algorithmic decision-making system.

More and more government departments are talking up the potential for using machine learning and artificial intelligence to aid decisions. Make no mistake: this is where government is heading, from your local council right on up to Number 10.

But at the moment there’s an alarming lack of transparency about where these tools are being used and an even more alarming lack of safeguards to prevent biased and unfair software ruining people’s lives.

There’s been some discussion around correcting for biased algorithms but nowhere near enough debate about giving the public a say in whether they want government by algorithm in the first place. At Foxglove, we believe in democracy – not opaque and unaccountable technocracy.

Foxglove exists to challenge such abuses of technology. It’s a safe bet that this won’t be the last time we’ll need to challenge a government algorithm in the courts.

Read the Source
