AI Incident Database

Report 172

Associated Incidents

Incident 19 (27 Reports)
Sexist and Racist Google Adsense Advertisements

Can computers be racist? Big data, inequality, and discrimination
fordfoundation.org · 2018

It seems like everyone is talking about the power of big data and how it is helping companies, governments, and organizations make better and more efficient decisions. But rarely do they mention that big data can actually perpetuate and exacerbate existing systems of racism, discrimination, and inequality.

Big data is supposed to make life better. Companies like Netflix use it to recommend movies you might like to watch based on what you’ve previously streamed. There are also broader public applications, such as predicting (and thus more quickly responding to) outbreaks of disease based on online search patterns of symptoms.

The problem with big data is that its application and use are not impartial or unbiased. Harvard professor Latanya Sweeney, who also directs the university’s Data Privacy Lab, conducted a cross-country study of 120,000 Internet search ads and found repeated instances of racial bias. Specifically, her study looked at Google AdWords buys made by companies that provide criminal background checks. At the time, the results of the study showed that when a search was performed on a name that was “racially associated” with the black community, the results were much more likely to be accompanied by an ad suggesting that the person had a criminal record, regardless of whether or not they did. This is just one of many research studies showing similar bias.
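The kind of disparity Sweeney measured can be tested statistically: compare how often a suggestive ad appears alongside one group of names versus another, and ask whether the gap could plausibly be chance. The sketch below is purely illustrative, assuming a standard two-proportion z-test with hypothetical counts; it is not Sweeney’s data, code, or exact method.

```python
# Illustrative sketch (hypothetical counts, not Sweeney's data): a
# two-proportion z-test of whether "arrest"-themed ads accompany searches
# on one group of names more often than another.
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)            # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical impressions: 600 of 1,000 searches on group-A names showed
# an arrest-suggestive ad, versus 300 of 1,000 for group-B names.
z = two_proportion_z(hits_a=600, n_a=1000, hits_b=300, n_b=1000)
print(round(z, 2))
```

A large absolute z value (here well above the conventional 1.96 threshold) indicates the disparity in ad delivery is very unlikely to arise by chance alone.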

If an employer searched the name of a prospective hire, only to be confronted with ads suggesting that the person had a prior arrest, you can imagine how that could affect the applicant’s career prospects.


2024 - AI Incident Database
