🚨 Generative AI has a serious problem with bias 🚨 Over months of reporting, @dinabass and I looked at thousands of images from @StableDiffusion and found that text-to-image AI takes gender and racial stereotypes to extremes worse than those in the real world.
We asked Stable Diffusion, perhaps the biggest open-source platform for AI-generated images, to create thousands of images of workers for 14 jobs and 3 categories related to crime and analyzed the results.
What we found was a pattern of racial and gender bias. Women and people with darker skin tones were underrepresented across images of high-paying jobs, and overrepresented for low-paying ones.
But the artificial intelligence model doesn't just replicate stereotypes or disparities that exist in the real world; it amplifies them to an alarming degree.
For example, while 34% of US judges are women, only 3% of the images generated for the keyword "judge" were perceived as women. For fast-food workers, the model generated people with darker skin 70% of the time, even though 70% of fast-food workers in the US are White.
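A disparity like the one above can be summarized with a simple ratio of generated-image share to real-world share. This is a minimal sketch, not the methodology the reporters used; the function name and structure are my own, but the "judge" figures plugged in below are the ones cited in the thread.

```python
def representation_gap(real_share: float, generated_share: float) -> float:
    """Ratio of a group's share in generated images to its real-world share.

    A value below 1.0 means the group is underrepresented in the
    generated images relative to reality; above 1.0, overrepresented.
    """
    if real_share <= 0:
        raise ValueError("real-world share must be positive")
    return generated_share / real_share

# The thread's "judge" figures: 34% of US judges are women,
# but only 3% of generated images were perceived as women.
gap = representation_gap(real_share=0.34, generated_share=0.03)
print(f"Women appear {gap:.2f}x as often in generated images as in reality")
```

A ratio near 0.09 here means the model shows women roughly one-tenth as often as the real workforce would suggest, which is the amplification effect the thread describes.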
We also investigated bias around perceptions of who commits crimes and who doesn't. The results were even more skewed.
For every image of a lighter-skinned person generated with the keyword "inmate," the model produced five images of darker-skinned people — even though less than half of US prison inmates are people of color.
For the keyword "terrorist," Stable Diffusion almost exclusively generated subjects with dark facial hair, often wearing religious head coverings.
Our results echo the work of experts in the field of algorithmic bias, such as @SashaMTL, @Abebab, @timnitGebru, and @jovialjoy, who have been warning us that the biggest threats from AI are not human extinction but the potential for widening inequalities.
Stability AI, the company behind Stable Diffusion, is working on an initiative to develop open-source models trained on datasets specific to different countries and cultures in order to mitigate the problem. But given the pace of AI adoption, will these improved models arrive soon enough?
AI systems like facial recognition are already in use by thousands of US police departments. Bias in those tools has led to wrongful arrests, and experts warn that the use of generative AI in policing could exacerbate the problem.
The popularity of generative AI tools like Stable Diffusion also means that AI-generated images depicting stereotypes about race and gender are potentially posted online every day. And those images are getting increasingly difficult to distinguish from real photographs.
This was a huge effort across @business departments @BBGVisualData @technology @BBGEquality, with edits from @ChloeWhiteaker, Jillian Ward, and help from @itskelseybutler @rachaeldottle @kyleykim @DeniseDSLu @mariepastora @pogkas @raeedahwahid @brittharr @_jsdiamond @DavidIngold