Description: A WIRED investigation found that OpenAI’s video generation model, Sora, exhibits representational bias across race, gender, body type, and disability. In tests using 250 prompts, Sora was more likely to depict CEOs and professors as men, flight attendants and childcare workers as women, and showed limited or stereotypical portrayals of disabled individuals and people with larger bodies.
Entities
Alleged: OpenAI and Sora developed and deployed an AI system, which harmed marginalized groups, women, people with disabilities, people with larger bodies, LGBTQ+ people, people of color, and the general public.
Alleged implicated AI system: Sora
Incident Stats
Incident ID
1000
Report Count
1
Incident Date
2025-03-23
Editors
Daniel Atherton
Incident Reports
Reports Timeline

Despite recent leaps forward in image quality, the biases in videos generated by AI tools like OpenAI's Sora remain as conspicuous as ever. A WIRED investigation, which included a review of hundreds of AI-generated videos, has found th…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents

Wikipedia Vandalism Prevention Bot Loop
· 6 reports
Gender Biases in Google Translate
· 10 reports

Gender Biases of Google Image Search
· 11 reports