AI Incident Database

Report 3563

Associated Incidents

Incident 624 · 18 Reports
Child Sexual Abuse Material Taints Image Generators

Researchers found child abuse material in the largest AI image generation dataset
engadget.com · 2023

Researchers from the Stanford Internet Observatory say that a dataset used to train AI image generation tools contains at least 1,008 validated instances of child sexual abuse material. The Stanford researchers note that the presence of CSAM in the dataset could allow AI models that were trained on the data to generate new and even realistic instances of CSAM.

LAION, the non-profit that created the dataset, told 404 Media that it "has a zero tolerance policy for illegal content and in an abundance of caution, we are temporarily taking down the LAION datasets to ensure they are safe before republishing them." The organization added that, before publishing its datasets in the first place, it created filters to detect and remove illegal content from them. However, *404 Media* points out that LAION leaders have been aware since at least 2021 that there was a possibility of their systems picking up CSAM as they vacuumed up billions of images from the internet.

According to previous reports, the LAION-5B dataset in question contains "millions of images of pornography, violence, child nudity, racist memes, hate symbols, copyrighted art and works scraped from private company websites." Overall, it includes more than 5 billion images and associated descriptive captions (the dataset itself doesn't include any images but rather links to scraped images and alt text). LAION founder Christoph Schuhmann said earlier this year that while he was not aware of any CSAM in the dataset, he hadn't examined the data in great depth.
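Because the dataset distributes links and alt text rather than image bytes, any cleanup has to operate on that metadata. A minimal sketch of what such metadata-level filtering might look like, using made-up URLs and an assumed blocklist (none of these names come from LAION's actual tooling):

```python
# Hypothetical LAION-style entries: each record is a source URL plus an
# alt-text caption -- the dataset itself contains no image pixels.
records = [
    {"url": "https://example.com/ok.jpg", "caption": "a landscape"},
    {"url": "https://example.com/flagged.jpg", "caption": "..."},
]

# Assumed blocklist of URLs previously identified as illegal content.
BLOCKED_URLS = {"https://example.com/flagged.jpg"}

def filter_records(records, blocked):
    """Drop any entry whose source URL appears on the blocklist."""
    return [r for r in records if r["url"] not in blocked]

clean = filter_records(records, BLOCKED_URLS)
```

A URL blocklist like this only removes known-bad links; it cannot catch the same image re-hosted at a new address, which is why content-based detection (discussed below in the article) matters.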

It's illegal for most institutions in the US to view CSAM for verification purposes. As such, the Stanford researchers used several techniques to look for potential CSAM. According to their paper, they employed "perceptual hash-based detection, cryptographic hash-based detection, and nearest-neighbors analysis leveraging the image embeddings in the dataset itself." They found 3,226 entries that contained suspected CSAM. Many of those images were confirmed as CSAM through third-party tools and organizations such as PhotoDNA and the Canadian Centre for Child Protection.
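The two hash-based techniques the paper names work differently: cryptographic hashes catch only byte-identical copies of known material, while perceptual hashes (the idea behind tools like PhotoDNA) match near-duplicates that survive resizing or re-encoding. A minimal sketch of both ideas, with a hypothetical blocklist digest and a made-up distance threshold (not the researchers' actual pipeline):

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-bad files, standing
# in for the hash lists maintained by clearinghouse organizations.
KNOWN_BAD_SHA256 = {
    # SHA-256 of the empty byte string, used here purely as a placeholder.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_exact_match(data: bytes) -> bool:
    """Cryptographic matching: any single-byte change breaks the match."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two fingerprint integers."""
    return bin(h1 ^ h2).count("1")

def perceptual_match(h1: int, h2: int, threshold: int = 8) -> bool:
    """Perceptual matching: treat fingerprints within a small Hamming
    distance as near-duplicates (threshold of 8 bits is an assumption)."""
    return hamming_distance(h1, h2) <= threshold
```

The third technique, nearest-neighbors analysis over the dataset's own image embeddings, generalizes the perceptual idea: instead of comparing bit fingerprints, it searches for entries whose embedding vectors sit close to known-bad examples.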

Stability AI founder Emad Mostaque trained Stable Diffusion using a subset of LAION-5B data. The first research version of Google's Imagen text-to-image model was trained on LAION-400M, but that was never released; Google says that none of the following iterations of Imagen use any LAION datasets. A Stability AI spokesperson told *Bloomberg* that it prohibits the use of its text-to-image systems for illegal purposes, such as creating or editing CSAM. "This report focuses on the LAION-5B dataset as a whole," the spokesperson said. "Stability AI models were trained on a filtered subset of that dataset. In addition, we fine-tuned these models to mitigate residual behaviors."

Stable Diffusion 2 (a more recent version of Stability AI's image generation tool) was trained on data that substantially filtered out 'unsafe' materials from the dataset. That, *Bloomberg* notes, makes it more difficult for users to generate explicit images. However, it's claimed that Stable Diffusion 1.5, which is still available on the internet, does not have the same protections. "Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible," the Stanford paper's authors wrote.

Correction, 4:30PM ET: This story originally stated that Google's Imagen tool used a subset of LAION-5B data. The story has been updated to note that Imagen used LAION-400M in its first research version, but hasn't used any LAION data since then. We apologize for the error.

