Incident 579: Harmful Stereotyping of Non-Cisgendered People via Text-to-Image Systems

Description: Text-to-image systems such as DALL-E are allegedly generating biased and often insulting representations of non-cisgender identities. The systems tend to generate stereotypical and sexualized images when prompted with gender identity terms like "trans," "nonbinary," or "queer," highlighting systemic issues of bias.


Alleged: OpenAI developed and deployed an AI system, DALL-E, which harmed non-cisgender individuals and the LGBTQ+ community.

Incident Stats

Incident ID: 579
Editor: Daniel Atherton

Incident Reports

Don’t ask DALL-E to Draw Trans People · 2023

One thing that I love about attending Queer in AI events (or queer community gatherings in general) is that I can assume that everyone around me is queer, too. I shift into a more comfortable, less guarded state. And conversely, it feels go…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
