AI Incident Database

Report 608

Associated Incidents

Incident 37 · 34 Reports
Amazon’s Experimental Hiring Tool Allegedly Displayed Gender Bias in Candidate Rankings

Amazon Accidentally Created A 'Sexist' Recruitment Tool, Then Shut It Down
aplus.com · 2018

Machine learning technology is becoming increasingly common across various industries, from policing to recruiting. But reports have shown that many of these systems have long-standing problems regarding discrimination. To avoid amplifying bias, companies need to actively teach their technology to be inclusive.

There are several ways corporations can improve their machine learning tools. Quartz suggests assessing the wider impacts of new AI systems before deployment, as well as establishing internal codes of conduct and incentive models that reinforce non-discriminatory practices. The publication also argues that inclusivity and diversity should be priorities early on, from assembling the design teams through shipping the final product.

It's also important for companies to be transparent about the impact of their technology and to continually evaluate it, from refining algorithms to auditing and reporting on their behavior. By taking these proactive steps, forward-thinking companies have the potential to create revolutionary AI systems without putting human rights at risk.
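The kind of ongoing evaluation described above can start with something as simple as comparing selection rates across demographic groups. A minimal sketch, using made-up audit data: the function names are illustrative, and the 0.8 cutoff mentioned in the comment is the common "four-fifths" guideline from US employment-selection practice, not anything specific to Amazon's tool.

```python
# Hypothetical audit sketch: compare a hiring model's selection rates by group.
# All data below is invented for illustration.

def selection_rates(decisions):
    """Fraction of candidates selected in each group (1 = selected, 0 = rejected)."""
    return {group: sum(picks) / len(picks) for group, picks in decisions.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 commonly trigger review under the "four-fifths rule".
    """
    return min(rates.values()) / max(rates.values())

# Made-up sample of interview recommendations for two groups
decisions = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 1],
    "women": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates, round(ratio, 2))  # ratio of 0.5 here would warrant investigation
```

A real audit would go further (confidence intervals, intersectional groups, outcome quality, not just selection rates), but routinely computing and reporting even this simple ratio is one concrete form of the transparency the article calls for.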
