Incident 423: Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women

Description: Lensa AI's "Magic Avatars" reportedly generated sexually explicit and sexualized features disproportionately for women, and particularly Asian women, even though users had not submitted any sexual content.

Alleged: Lensa AI, Stability AI, Runway, LAION, EleutherAI, and CompVis LMU developed an AI system deployed by Lensa AI, which harmed women using Lensa AI and Asian women using Lensa AI.

Incident Stats

Incident ID: 423
Report Count: 5
Incident Date: 2022-11-22
Editors: Khoa Lam

‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos
wired.com · 2022

This weekend, the photo-editing app Lensa flooded social media with celestial, iridescent, and anime-inspired “magic avatars.” As is typical in our milkshake-duck internet news cycle, arguments as to why using the app was problematic prolif…

How Is Everyone Making Those A.I. Selfies?
nytimes.com · 2022

Have you noticed that many of your friends are suddenly fairy princesses or space travelers? Is your Instagram feed overrun with Renaissance-style paintings of people who were definitely born in the ’90s? If so, you are entitled to an expla…

Feeling Uncomfortable About All These AI-Generated Images? This Might Be Why.
theskimm.com · 2022

In December, our feeds started getting flooded with AI-generated images of our friends. Thanks to the Lensa AI app. It quickly became the top photo app in the Apple app store — one analysis found that more than 20 million people have alread…

The viral AI avatar app Lensa undressed me—without my consent
technologyreview.com · 2022

Stability.AI, the company that developed Stable Diffusion, launched a new version of the AI model in late November. A spokesperson says that the original model was released with a safety filter, which Lensa does not appear to have used, as …

Lensa’s viral AI art creations were bound to hypersexualize users
polygon.com · 2022

This year, it feels like artificial intelligence-generated art has been everywhere.

In the summer, many of us entered goofy prompts into DALL-E Mini (now called Craiyon), yielding a series of nine comedically janky AI-generated images. But …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.