Incidents Harmed By

Incident 624 (16 Reports)
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain them. Additionally, LAION must now scrub the dataset of the imagery.


Incidents involved as Developer

Incident 624 (16 Reports)
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain them. Additionally, LAION must now scrub the dataset of the imagery.


Incident 421 (12 Reports)
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training

2022-11-20

The text-to-image model Stable Diffusion reportedly used artists' original works without permission as AI training data.


Incident 423 (5 Reports)
Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women

2022-11-22

Lensa AI's "Magic Avatars" reportedly generated sexually explicit and sexualized features disproportionately for women, particularly Asian women, even when users submitted no sexual content.


Incident 451 (5 Reports)
Stable Diffusion's Training Data Contained Copyrighted Images

2022-10-16

Stability AI reportedly scraped copyrighted images owned by Getty Images for use as training data for the Stable Diffusion model.

