AI Incident Database

Report 3179

Associated Incidents

Incident 529 · 3 Reports
Stable Diffusion Exhibited Biases for Prompts Featuring Professions

Tweet: @Leonardonclt
twitter.com · 2023

🚨 Generative AI has a serious problem with bias 🚨 Over months of reporting, @dinabass and I looked at thousands of images from @StableDiffusion and found that text-to-image AI takes gender and racial stereotypes to extremes worse than in the real world.

We asked Stable Diffusion, perhaps the biggest open-source platform for AI-generated images, to create thousands of images of workers for 14 jobs and 3 categories related to crime and analyzed the results.
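
The generation step described above can be reproduced in outline with the open-source diffusers library. The sketch below is illustrative only: the model checkpoint, prompt wording, and sample count are assumptions, not the investigation's exact setup.

```python
# Hypothetical sketch: batch-generating images per occupation prompt with
# Stable Diffusion via the diffusers library, saving them for later annotation.
import os
import torch
from diffusers import StableDiffusionPipeline

PROMPTS = ["a photo of a judge", "a photo of a fast-food worker"]  # assumed wording
IMAGES_PER_PROMPT = 50  # the investigation generated thousands per keyword

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in PROMPTS:
    out_dir = prompt.replace(" ", "_")
    os.makedirs(out_dir, exist_ok=True)
    for i in range(IMAGES_PER_PROMPT):
        image = pipe(prompt).images[0]        # one sample per call
        image.save(f"{out_dir}/{i:04d}.png")  # stored for later labeling and analysis
```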

What we found was a pattern of racial and gender bias. Women and people with darker skin tones were underrepresented across images of high-paying jobs, and overrepresented for low-paying ones.

But the artificial intelligence model doesn't just replicate stereotypes and disparities that exist in the real world; it amplifies them to an alarming degree.

For example, while 34% of US judges are women, only 3% of the images generated for the keyword "judge" were perceived as women. For fast-food workers, the model generated people with darker skin 70% of the time, even though 70% of fast-food workers in the US are White.
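
A minimal sketch of how such a comparison can be tallied, assuming per-image perceived-gender labels are already available (the thread does not say how images were annotated); the 34% baseline is the figure quoted above, and the label values are invented for illustration.

```python
# Compare the share of a group among generated images against a real-world baseline.
from collections import Counter

def representation_gap(labels, group, baseline_share):
    """Return the group's share among generated images and its gap vs. the baseline."""
    counts = Counter(labels)
    generated_share = counts[group] / len(labels)
    return generated_share, generated_share - baseline_share

# Illustrative labels for images from the keyword "judge": ~3% perceived as women.
judge_labels = ["man"] * 97 + ["woman"] * 3
share, gap = representation_gap(judge_labels, "woman", baseline_share=0.34)
print(f"judge: {share:.0%} perceived women vs 34% of US judges (gap {gap:+.0%})")
```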

We also investigated bias related to who commits crimes and who doesn't. Things got a lot worse.

For every image of a lighter-skinned person generated with the keyword "inmate," the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.

For the keyword "terrorist", Stable Diffusion generated almost exclusively subjects with dark facial hair often wearing religious head coverings.

Our results echo the work of experts in the field of algorithmic bias, such as @SashaMTL, @Abebab, @timnitGebru, and @jovialjoy, who have been warning us that the biggest threats from AI are not human extinction but the potential for widening inequalities.

Stable Diffusion is working on an initiative to develop open-source models that will be trained on datasets specific to different countries and cultures in order to mitigate the problem. But given the pace of AI adoption, will these improved models come out soon enough?

AI systems such as facial recognition are already being used by thousands of US police departments. Bias within those tools has led to wrongful arrests. Experts warn that the use of generative AI within policing could exacerbate the issue.

The popularity of generative AI like Stable Diffusion also means that AI-generated images potentially depicting stereotypes about race and gender are posted online every day. And those images are getting increasingly difficult to distinguish from real photographs.

This was a huge effort across @business departments @BBGVisualData @technology @BBGEquality, with edits from @ChloeWhiteaker, Jillian Ward, and help from @itskelseybutler @rachaeldottle @kyleykim @DeniseDSLu @mariepastora @pogkas @raeedahwahid @brittharr @_jsdiamond @DavidIngold

Read Source
