Report 762 · Incident 434

Racist AI behaviour is not a new problem
natbuckley.co.uk · 2016

Professor Margaret Boden, an AI and cognitive science researcher, took the time to speak to me in 2010 about computers, AI, morality and the future. One of the stories she told me comes back to me every now and then, most recently when Microsoft failed to anticipate the consequences of releasing its chatbot, Tay, into the world, only to watch it turn racist in less than 24 hours.

In the early ’80s, St George’s Hospital Medical School in London decided to automate part of its admissions workflow. The existing process was time-consuming and expensive. Before applicants were invited for interview, their initial suitability was assessed on their grades, the classes they had taken and other criteria. It was that screening stage that was to be automated.

A statistical system for weighting those criteria was devised. The program would select the highest-ranking candidates to invite for interview. After the initial design was complete, the program was trained to adjust the weighting it gave to each criterion using previous years’ admissions datasets: it was taught to produce results consistent with the selections human staff had made.
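The mechanism is easy to reproduce in miniature. The sketch below is not St George’s actual program (which is not public); it is a hypothetical illustration of the same idea: fit the weights of a simple scoring model to historical accept/reject decisions, where the historical selectors penalised a proxy feature (here, a surname flag). All data and feature names are invented for the example.

```python
import random
from math import exp

random.seed(0)

# Hypothetical historical decisions: feature 0 is an exam score (0..1),
# feature 1 flags a non-European surname. Past selectors demanded a much
# higher score from flagged applicants.
def make_dataset(n=500):
    data = []
    for _ in range(n):
        score = random.random()
        flag = 1.0 if random.random() < 0.3 else 0.0
        threshold = 0.5 + 0.3 * flag  # the biased human rule
        label = 1 if score > threshold else 0
        data.append(((score, flag), label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Plain stochastic-gradient logistic regression: learn weights that
# reproduce the historical shortlisting decisions.
def train(data, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

w, b = train(make_dataset())
# The model inherits the bias: the surname-flag weight comes out negative,
# so two applicants with identical scores get different rankings.
print(w[1] < 0)
```

Nothing in the training step mentions race; the discrimination enters entirely through the labels, which is exactly how a system "consistent with the way human staff would select" reproduces their prejudices.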

The program was used from 1982 until 1986, when two members of staff complained to the Commission for Racial Equality. The program had learnt to discriminate against non-white and female applicants. Applicants whose postcodes betrayed a working-class background were also given lower priority in the selection process.

The story making the news: The Age, 26th February 1988.

The quote from the spokesperson for the Commission for Racial Equality at the time was damning:

St George’s computer program merely replicated the discrimination that was already being practised by the selectors, and there is no reason to believe that the selectors at St George’s were any more discriminatory than selectors elsewhere.

— The Age, 26th February 1988, (emphasis mine)

That was in the ’80s. Plus ça change.
