Affected by Incidents

Incident 624 · 16 Reports
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain those models. Additionally, LAION must now scrub the dataset of the imagery.


Incidents involved as Developer

Incident 624 · 16 Reports
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain those models. Additionally, LAION must now scrub the dataset of the imagery.


Incident 421 · 12 Reports
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training

2022-11-20

The text-to-image model Stable Diffusion was reportedly trained on artists' original works without their permission.


Incident 423 · 5 Reports
Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women

2022-11-22

Lensa AI's "Magic Avatars" reportedly generated sexually explicit and sexualized features disproportionately for women, and for Asian women in particular, despite users not submitting any sexual content.


Incident 451 · 5 Reports
Stable Diffusion's Training Data Contained Copyrighted Images

2022-10-16

Stability AI reportedly scraped copyrighted images owned by Getty Images for use as training data for the Stable Diffusion model.


Associated Entities