Report 3552

Associated Incidents

Incident 624 · 18 Reports
Child Sexual Abuse Material Taints Image Generators

Large AI training data set removed after study finds child abuse material
cointelegraph.com · 2023

A widely used artificial intelligence data set for training Stable Diffusion, Imagen, and other AI image generator models has been removed by its creator after a study found it contained thousands of instances of suspected child sexual abuse material.

LAION, short for Large-scale Artificial Intelligence Open Network, is a German nonprofit organization that makes open-source artificial intelligence models and data sets used to train several popular text-to-image models.

A Dec. 20 report from researchers at the Stanford Internet Observatory, part of Stanford's Cyber Policy Center, said they identified 3,226 instances of suspected CSAM, or child sexual abuse material, in the LAION-5B data set, "much of which was confirmed as CSAM by third parties," according to David Thiel, the center's big data architect and chief technologist.

Thiel noted that while the presence of CSAM doesn't necessarily mean it will "drastically" influence the output of models trained on the data set, it could still have some effect.

"While the amount of CSAM present does not necessarily indicate that the presence of CSAM drastically influences the output of the model above and beyond the model's ability to combine the concepts of sexual activity and children, it likely does still exert influence," said Thiel.

"The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims," he added.

The LAION-5B data set was released in March 2022 and includes 5.85 billion image-text pairs, according to LAION.
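At that scale, manual review is impractical: LAION distributes its data as URL-and-caption metadata rather than the images themselves, and screening of this kind is typically done by matching image hashes against blocklists of known material. As a minimal, hypothetical sketch of that approach (the record fields and blocklist values below are illustrative, not the actual LAION-5B schema or a real hash list):

```python
import hashlib

# Hypothetical LAION-style metadata rows: an image URL, its caption, and a
# hash of the image bytes. Field names are illustrative only; LAION-5B's
# real schema and hash choices may differ.
records = [
    {"url": "https://example.com/a.jpg", "caption": "a red bicycle",
     "md5": hashlib.md5(b"image-bytes-a").hexdigest()},
    {"url": "https://example.com/b.jpg", "caption": "a mountain lake",
     "md5": hashlib.md5(b"image-bytes-b").hexdigest()},
]

# A blocklist of hashes of known abusive images, as distributed by
# clearinghouses. Placeholder value here, not a real entry.
blocklist = {hashlib.md5(b"image-bytes-b").hexdigest()}

def filter_records(rows, blocked_hashes):
    """Drop any row whose image hash appears on the blocklist."""
    return [r for r in rows if r["md5"] not in blocked_hashes]

clean = filter_records(records, blocklist)
print(f"kept {len(clean)} of {len(records)} records")  # kept 1 of 2 records
```

Exact-hash matching like this only catches byte-identical copies; real screening pipelines also rely on perceptual-hash systems designed to survive resizing and re-encoding.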

In a statement, LAION said it has removed both the LAION-5B and LAION-400M data sets out of "an abundance of caution" and "to ensure they are safe before republishing them."
