Incident 465: Generative Models Trained on Dataset Containing Private Medical Photos

Description: Text-to-image models trained on the LAION-5B dataset, such as Stable Diffusion and Imagen, were able to regurgitate private medical-record photos that had been included in the training data without consent or any recourse for removal.
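
One way to probe for this kind of memorization is to generate samples from a model and compare them against a known source image with a perceptual hash. The sketch below is illustrative only and is not the method used in the report: the model ID, prompt, reference file path, and distance threshold are all assumptions, and it relies on the diffusers and imagehash packages.

    # Minimal sketch: probe a text-to-image model for near-duplicate
    # memorization by hashing generated samples and comparing them to a
    # known reference image. All specifics here are hypothetical.
    import torch
    import imagehash
    from PIL import Image
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"
    ).to("cuda")

    # Hypothetical reference image and caption; stand-ins for a real probe.
    reference = imagehash.phash(Image.open("reference_photo.png"))
    prompt = "a clinical photograph of a patient"

    for seed in range(8):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        # Hamming distance between perceptual hashes; a small distance
        # suggests the output is a near-duplicate of the reference image.
        if imagehash.phash(image) - reference < 8:
            image.save(f"possible_memorization_{seed}.png")

A low hash distance is only a heuristic signal; flagged outputs would still need visual inspection before concluding that a training image was reproduced.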

Alleged: Stability AI, Google, and LAION developed an AI system deployed by Stability AI and Google, which harmed people with medical photos online.

Incident Stats

Incident ID: 465
Report Count: 1
Incident Date: 2022-03-03
Editors: Khoa Lam

Incident Reports

Artist finds private medical record photos in popular AI training data set
arstechnica.com · 2022

Late last week, a California-based AI artist who goes by the name Lapine discovered private medical record photos taken by her doctor in 2013 referenced in the LAION-5B image set, which is a scrape of publicly available images on the web. A…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
