AI Incident Database

Report 2092

Associated Incidents

Incident 335 · 8 Reports
UK Visa Streamline Algorithm Allegedly Discriminated Based on Nationality

Home Office says it will abandon its racist visa algorithm - after we sued them
foxglove.org.uk · 2020

Home Office lawyers wrote to us yesterday to respond to the legal challenge we've been working on with the Joint Council for the Welfare of Immigrants (JCWI).

We were asking the Court to declare the streaming algorithm unlawful, and to order a halt to its use to assess visa applications.

Before the case could be heard, the Home Office caved in. They’ve agreed that from this Friday, August 7, they will get rid of the ‘streaming algorithm.’ 

Home Secretary Priti Patel has pledged a full review of the system, including for issues of ‘unconscious bias’ and discrimination.

This marks the end of a computer system which had been used for years to process every visa application to the UK. It’s great news, because the algorithm entrenched racism and bias into the visa system.

The Home Office kept a secret list of suspect nationalities automatically given a ‘red’ traffic-light risk score – people of these nationalities were likely to be denied a visa. It had got so bad that academic and nonprofit organisations told us they no longer even tried to have colleagues from certain countries visit the UK to work with them.

We also discovered that the algorithm suffered from “feedback loop” problems known to plague many such automated systems – where past bias and discrimination, fed into a computer program, reinforce future bias and discrimination. Researchers documented this issue with predictive policing systems in the US and we realised the same problem had crept in here.
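The feedback loop described above can be illustrated with a toy simulation (a minimal sketch; the data, threshold, and scoring rule are entirely hypothetical and are not the Home Office's actual system): a group's historical denial rate is fed back in as its "risk", so past denials raise future risk scores, which in turn produce more denials.

```python
# Toy simulation of a risk-scoring feedback loop (hypothetical data).
# Past bias, fed back into the scoring rule, reinforces future bias.

def risk_score(denial_history):
    """Risk = fraction of past applications from this group that were denied."""
    return sum(denial_history) / len(denial_history)

def deny(risk, threshold=0.5):
    """Deny (True) when the learned risk exceeds the threshold."""
    return risk > threshold

# Seed with a biased past: group B was denied more often than group A.
# (1 = denied, 0 = approved; groups and figures are invented.)
history = {"A": [0, 0, 1, 0], "B": [1, 1, 0, 1]}

for _ in range(10):  # ten new applications per group
    for group, past in history.items():
        past.append(1 if deny(risk_score(past)) else 0)

# Group B's denial rate climbs toward 100%; group A's falls toward 0%.
print(risk_score(history["A"]), risk_score(history["B"]))
```

Even though no new evidence about either group ever enters the loop, the initial disparity compounds: each denial raises the group's risk score, which guarantees the next denial.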

It’s also great news because this was the first successful judicial review of a UK government algorithmic decision-making system.

More and more government departments are talking up the potential for using machine learning and artificial intelligence to aid decisions. Make no mistake: this is where government is heading, from your local council right on up to Number 10.

But at the moment there’s an alarming lack of transparency about where these tools are being used and an even more alarming lack of safeguards to prevent biased and unfair software ruining people’s lives.

There’s been some discussion around correcting for biased algorithms but nowhere near enough debate about giving the public a say in whether they want government by algorithm in the first place. At Foxglove, we believe in democracy – not opaque and unaccountable technocracy.

Foxglove exists to challenge such abuses of technology. It’s a safe bet that this won’t be the last time we’ll need to challenge a government algorithm in the courts.

