AI Incident Database

Report 3179

Associated Incidents

Incident 529 · 3 Reports
Stable Diffusion Exhibited Biases for Prompts Featuring Professions

Tweet: @Leonardonclt
twitter.com · 2023

🚨 Generative AI has a serious problem with bias 🚨 Over months of reporting, @dinabass and I looked at thousands of images from @StableDiffusion and found that text-to-image AI takes gender and racial stereotypes to extremes worse than in the real world.

We asked Stable Diffusion, perhaps the biggest open-source platform for AI-generated images, to create thousands of images of workers for 14 jobs and 3 categories related to crime and analyzed the results.
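
The thread does not include the generation code. As a rough sketch of how such a batch could be produced with the open-source diffusers library, see below; the model checkpoint, prompt phrasing, and per-keyword sample count are illustrative assumptions, and only the keywords named in the thread are included.

```python
# Sketch only: not the authors' pipeline. Model ID, prompt wording, and sample
# counts are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline

# Keywords named in the thread; the full list of 14 jobs and 3 crime-related
# categories is not given there.
job_keywords = ["judge", "fast-food worker"]
crime_keywords = ["inmate", "terrorist"]

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

images_per_keyword = 8  # the investigation generated thousands of images in total
for keyword in job_keywords + crime_keywords:
    prompt = f"a color photo of a {keyword}"  # assumed prompt phrasing
    out = pipe(prompt, num_images_per_prompt=images_per_keyword)
    for i, img in enumerate(out.images):
        img.save(f"{keyword.replace(' ', '_')}_{i:03d}.png")
```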

What we found was a pattern of racial and gender bias. Women and people with darker skin tones were underrepresented across images of high-paying jobs, and overrepresented for low-paying ones.

But the artificial intelligence model doesn't just replicate stereotypes or disparities that exist in the real world — it amplifies them to alarming lengths.

For example, while 34% of US judges are women, only 3% of the images generated for the keyword "judge" were perceived as women. For fast-food workers, the model generated people with darker skin 70% of the time, even though 70% of fast-food workers in the US are White.
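
As a back-of-the-envelope illustration of that amplification, one can compare a group's share of generated images with its real-world share using only the figures quoted above; how gender and skin tone were judged for the generated images is not described in the thread.

```python
# Rough arithmetic using only the percentages quoted in the thread.
def representation_ratio(generated_share: float, real_world_share: float) -> float:
    """Over- or under-representation relative to reality (1.0 = parity)."""
    return generated_share / real_world_share

# Women among "judge" images: 3% generated vs. 34% of US judges.
print(representation_ratio(0.03, 0.34))  # ~0.09, i.e. roughly 11x under-represented

# Darker skin tones among "fast-food worker" images: 70% generated. The thread
# says 70% of US fast-food workers are White; treating the remaining ~30% as a
# rough proxy for darker skin tones (an assumption not made in the thread):
print(representation_ratio(0.70, 0.30))  # ~2.3, i.e. over-represented
```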

We also investigated bias related to who commits crimes and who doesn't. Things got a lot worse.

For every image of a lighter-skinned person generated with the keyword "inmate," the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.

For the keyword "terrorist", Stable Diffusion generated almost exclusively subjects with dark facial hair often wearing religious head coverings.

Our results echo the work of experts in the field of algorithmic bias, such as @SashaMTL, @Abebab, @timnitGebru, and @jovialjoy, who have been warning us that the biggest threats from AI are not human extinction but the potential for widening inequalities.

Stability AI, the company behind Stable Diffusion, is working on an initiative to develop open-source models trained on datasets specific to different countries and cultures in order to mitigate the problem. But given the pace of AI adoption, will these improved models come out soon enough?

AI systems such as facial recognition are already being used by thousands of US police departments, and bias within those tools has led to wrongful arrests. Experts warn that the use of generative AI within policing could exacerbate the issue.

The popularity of generative AI like Stable Diffusion also means that AI-generated images potentially depicting stereotypes about race and gender are posted online every day. And those images are getting increasingly difficult to distinguish from real photographs.

This was a huge effort across @business departments @BBGVisualData @technology @BBGEquality, with edits from @ChloeWhiteaker, Jillian Ward, and help from @itskelseybutler @rachaeldottle @kyleykim @DeniseDSLu @mariepastora @pogkas @raeedahwahid @brittharr @_jsdiamond @DavidIngold

