Organization

LAION

Incidents Harmed By

Incident 624: 16 Reports
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain those models. Additionally, LAION must now scrub the dataset of the imagery.

Incidents involved as Developer

Incident 624: 16 Reports
Child Sexual Abuse Material Taints Image Generators

2023-12-20

The LAION-5B dataset (a commonly used dataset with more than 5 billion image-description pairs) was found by researchers to contain child sexual abuse material (CSAM), which increases the likelihood that downstream models will produce CSAM imagery. The discovery taints models built with the LAION dataset, requiring many organizations to retrain those models. Additionally, LAION must now scrub the dataset of the imagery.

Incident 421: 12 Reports
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training

2022-11-20

The text-to-image model Stable Diffusion reportedly used artists' original works without permission as AI training data.

Incident 423: 5 Reports
Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women

2022-11-22

Lensa AI's "Magic Avatars" were reportedly generating sexually explicit and sexualized features disproportionately for women and Asian women despite not submitting any sexual content.

Incident 451: 5 Reports
Stable Diffusion's Training Data Contained Copyrighted Images

2022-10-16

Stability AI reportedly scraped copyrighted images owned by Getty Images for use as training data for the Stable Diffusion model.

Related Organizations