AI Incident Database

Report 3555

Associated Incidents

Incident 624 · 18 Reports
Child Sexual Abuse Material Taints Image Generators

Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
404media.co · 2023

This piece is published with support from The Capitol Forum.

The LAION-5B machine learning dataset used by Stable Diffusion and other major AI products has been removed by the organization that created it after a Stanford study found that it contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated.

LAION told 404 Media on Tuesday that out of "an abundance of caution," it was taking down its datasets, including LAION-5B and another called LAION-400M temporarily "to ensure they are safe before republishing them."

According to a new study by the Stanford Internet Observatory shared with 404 Media ahead of publication, the researchers found the suspected instances of CSAM through a combination of perceptual and cryptographic hash-based detection and analysis of the images themselves.

"We find that having possession of a LAION‐5B dataset populated even in late 2023 implies the possession of thousands of illegal images---not including all of the intimate imagery published and gathered non‐consensually, the legality of which is more variable by jurisdiction," the paper says. "While the amount of CSAM present does not necessarily indicate that the presence of CSAM drastically influences the output of the model above and beyond the model's ability to combine the concepts of sexual activity and children, it likely does still exert influence. The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims."

The finding highlights the danger of largely indiscriminate scraping of the internet for the purposes of generative artificial intelligence. 

Large-scale Artificial Intelligence Open Network, or LAION, is a non-profit organization that creates open-source tools for machine learning. LAION-5B is one of its biggest and most popular products. It is made up of more than five billion links to images scraped from the open web, including user-generated social media platforms, and is used to train the most popular AI generation models currently on the market. Stable Diffusion, for example, uses LAION-5B, and Stability AI funded its development. 

"If you have downloaded that full dataset for whatever purpose, for training a model for research purposes, then yes, you absolutely have CSAM, unless you took some extraordinary measures to stop it," David Thiel, lead author of the study and Chief Technologist at the Stanford Internet Observatory told 404 Media. 

Public chats from LAION leadership in the organization's official Discord server show that they were aware of the possibility of CSAM being scraped into their datasets as far back as 2021.

"I guess distributing a link to an image such as child porn can be deemed illegal," LAION lead engineer Richard Vencu wrote in response to a researcher asking how LAION handles potential illegal data that might be included in the dataset. "We tried to eliminate such things but there's no guarantee all of them are out." 

[Screenshot via the LAION Discord]

Most institutions in the US, including Thiel's team, aren't legally allowed to view CSAM in order to verify it themselves. To do CSAM research, experts often rely on perceptual hashing, which extracts a unique digital signature, or fingerprint, from an image or video. PhotoDNA is a technology that creates unique hashes for images of child exploitation in order to find those images elsewhere on the web and get them removed or pursue abusers or proliferators.
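PhotoDNA's algorithm is proprietary, but the general idea of perceptual hashing can be sketched with a much simpler scheme, a "difference hash." Everything below (the grids, the function names) is an illustration of the concept, not PhotoDNA itself:

```python
# A minimal sketch of perceptual hashing (a "difference hash"), not PhotoDNA,
# whose algorithm is proprietary. It works on a raw grayscale pixel grid so it
# needs no imaging library.

def dhash(pixels, hash_size=8):
    """pixels: 2D list of grayscale values, hash_size rows x (hash_size + 1) cols.
    Each bit records whether a pixel is brighter than its right-hand neighbor,
    so the fingerprint survives resizing, recompression, and small edits."""
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            bits = (bits << 1) | (pixels[row][col] > pixels[row][col + 1])
    return bits

def hamming(h1, h2):
    """Number of differing bits; a small distance means 'perceptually similar'."""
    return bin(h1 ^ h2).count("1")

# Two near-identical 8x9 grids differing in a single pixel stay close in hash space.
grid_a = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
grid_b = [row[:] for row in grid_a]
grid_b[0][0] += 5
print(hamming(dhash(grid_a), dhash(grid_b)))  # prints 1
```

Unlike a cryptographic hash, where one changed pixel produces a completely different digest, a perceptual hash changes only a little when the image changes a little, which is what makes it usable for finding re-encoded copies of known material.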

"With the goal of quantifying the degree to which CSAM is present in the training dataset as well as eliminating it from both LAION‐5B and derivative datasets, we use various complementary techniques to identify potential CSAM in the dataset: perceptual hash‐based detection, cryptographic hash‐based detection, and nearest‐neighbors analysis leveraging the image embeddings in the dataset itself," the paper says. Through this process, they identified at least 2,000 dataset entries of suspected CSAM, and confirmed those entries with third parties.

To do their research, Thiel said he focused on URLs identified by LAION's safety classifier as "not safe for work" and sent those URLs to PhotoDNA. Hash matches indicate definite, known CSAM; those were sent to the Project Arachnid Shield API and validated by the Canadian Centre for Child Protection, which is able to view, verify, and report those images to the authorities. Once those images were verified, the researchers could also find "nearest neighbor" matches within the dataset, where related images of victims were clustered together.
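The nearest-neighbor step can be sketched as a similarity search over the dataset's own image embeddings: a confirmed entry's embedding is compared against every other entry, and highly similar entries cluster together. The URLs and toy three-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions:

```python
# Hypothetical sketch of nearest-neighbor search over image embeddings.
# All URLs and vectors are invented; this shows the technique, not the
# Stanford team's actual tooling.
import math

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest_neighbors(query_vec, dataset, k=3, threshold=0.9):
    """Return the k dataset entries most similar to the query embedding,
    keeping only those above a similarity threshold."""
    scored = [(cosine(query_vec, emb), url) for url, emb in dataset]
    scored.sort(reverse=True)
    return [(score, url) for score, url in scored[:k] if score >= threshold]

dataset = [
    ("https://example.com/a.jpg", [0.9, 0.1, 0.0]),
    ("https://example.com/b.jpg", [0.89, 0.12, 0.01]),
    ("https://example.com/c.jpg", [0.0, 0.0, 1.0]),
]
hits = nearest_neighbors([0.9, 0.1, 0.0], dataset, k=2)
print([url for _, url in hits])  # a.jpg and b.jpg cluster together; c.jpg does not
```

The key property the researchers relied on is that a confirmed match pulls in its neighbors: one externally validated entry can surface a cluster of related entries without anyone needing to view them.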

LAION could have used a method similar to this before releasing the world's largest AI training dataset, Thiel said, but it didn't. "[LAION] did initially use CLIP to try and filter some things out, but it does not appear that they did that in consultation with any child safety experts originally. It was good that they tried. But the mechanisms they used were just not super impressive," Thiel said. "They made an attempt that was not nearly enough, and it is not how I would have done it if I were trying to design a safe system."

A spokesperson for LAION told 404 Media in a statement about the Stanford paper: 

"LAION is a non-profit organization that provides datasets, tools and models for the advancement of machine learning research. We are committed to open public education and the environmentally safe use of resources through the reuse of existing datasets and models.  LAION datasets (more than 5.85 billion entries) are sourced from the freely available Common Crawl web index and offer only links to content on the public web, with no images. We developed and published our own rigorous filters to detect and remove illegal content from LAION datasets before releasing them. We collaborate with universities, researchers and NGOs to improve these filters and are currently working with the Internet Watch Foundation (IWF) to identify and remove content suspected of violating laws. We invite Stanford researchers to join LAION to improve our datasets and to develop efficient filters for detecting harmful content.  LAION has a zero tolerance policy for illegal content and in an abundance of caution, we are temporarily taking down the LAION datasets to ensure they are safe before republishing them."

This study follows a June paper by Stanford that examined the landscape of visual generative models that could be used to create CSAM. Thiel told me he continued to pursue the topic after a tip from AI researcher Alex Champandard, who found a URL of an image in LAION-5B on Hugging Face that was captioned with a phrase in Spanish that appeared to describe child exploitation material. LAION-5B is available for download from Hugging Face as an open-source tool.

Champandard told me he noticed a report to Hugging Face on LAION-5B in August 2022, flagging "an example that describes something related to pedophilia." One of the engineers who worked on LAION-5B responded in March 2023, saying the link was dead but they'd removed it anyway because the caption was inappropriate. 

"It took 7 months for that report to get dealt with by Hugging Face or LAION --- which I found to be highly questionable," Champandard said.

Following Champandard's tweets, Hugging Face's chief ethics scientist Margaret Mitchell wrote on Mastodon: "I just wanted to pop in to say that there has been a lot of time and energy spent on trying to find CSAM, and none has been found. Some people at HF are being attacked as if pedophiles but it's just...inappropriate cruelty." 

I asked Hugging Face whether, in light of this study and before LAION removed the datasets themselves, it would take action against datasets that were found to have links to CSAM. A spokesperson for the company said, "Yes."

"Datasets cannot be seen by Hugging Face staff (nor anyone accessing the Hub) until they are uploaded, and the uploader can decide to make the content public. Once shared, the platform runs content scanning to identify potential issues. Users are responsible for uploading and maintaining content, and staff addresses issues following the Hugging Face platform's content guidelines, which we continue to adapt. The platform relies on a combination of technical content analysis to validate that the guidelines are indeed followed, community moderation, and reporting features to allow users to raise concerns. We monitor reports and take actions when infringing content is flagged," the Hugging Face spokesperson said. "Critical to this discussion is noting that the LAION-5B dataset contains URLs to external content, not images, which poses additional challenges. We are working with civil society and industry partners to develop good practices to handle these kinds of cross-platform questions."

The Stanford paper says that the material detected during their process is "inherently a significant undercount due to the incompleteness of industry hash sets, attrition of live hosted content, lack of access to the original LAION reference image sets, and the limited accuracy of 'unsafe' content classifiers."

Several major generative AI products, including Stable Diffusion, use LAION-5B, while others have used LAION's products in different stages of development. "LAION datasets have also been used to train other models, such as Google's Imagen, which was trained on a combination of internal datasets and LAION‐400M," the Stanford paper states. "Notably, during an audit of the LAION‐400M, Imagen's developers found 'a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes' and deemed it unfit for public use."

Following publication of the paper, a Google spokesperson told 404 Media: "Imagen has never used LAION-5B. More specifically, LAION-400M was used to train the first Imagen research model only, which was never released. None of the following iterations of the model use any version of LAION datasets." LAION also removed LAION-400M, out of an "abundance of caution" concerning the paper's findings.

A spokesperson for Stable Diffusion told 404 Media following publication of the paper: "Stable Diffusion 1.5 was released by RunwayML, not Stability AI. This report focuses on the LAION-5b dataset as a whole. Stability AI models were trained on a filtered subset of that dataset. In addition, we subsequently fine-tuned these models to mitigate residual behaviors. We are committed to preventing the misuse of AI and prohibit the use of our image models and services for unlawful activity, including attempts to edit or create CSAM. Stability AI only hosts versions of Stable Diffusion that include filters on its API. These filters remove unsafe content from reaching the models. By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content. Additionally, we have implemented filters to intercept unsafe prompts or unsafe outputs when users interact with models on our platform. We have also invested in content labelling features to help identify images generated on our platform. These layers of mitigation make it harder for bad actors to misuse AI."

HOW DID THIS HAPPEN?

Child abuse material likely got into LAION because the organization compiled the dataset using tools that scrape the web, and CSAM isn't relegated to the realm of the "dark web," but proliferates on the open web and on many mainstream platforms. In 2022, Facebook made more than 21 million reports of CSAM to the National Center for Missing and Exploited Children (NCMEC) tipline, while Instagram made 5 million reports, and Twitter made 98,050. 

In the US, electronic service providers (ESPs) are required by law to report "apparent child pornography" to NCMEC's CyberTipline when they become aware of it, but "there are no legal requirements for proactive efforts to detect this content or what information an ESP must include in a CyberTipline report," according to NCMEC. A dataset, however, is different from a website, even if it is composed of data from a huge number of websites.

"Because it's the internet, there are going to be datasets that have child porn. Twitter's got it. You know, Facebook has it. It's all sitting there. They don't do a good job of policing for it, even though they claim that they do. And that's now going to be used to train these models," Marcus Rogers, Assistant Dean for Cybersecurity Initiatives at Purdue University, told 404 Media. Organizations building datasets, however, may be intentionally ignoring the possibility that CSAM could pollute their models, he said. "Companies just don't want to know. Some of it is just, even if they wanted to know they literally have lost control of everything."

"I think the reason that they probably ignore it is because they don't have a solution," Bryce Westlake, an associate professor in the Department of Justice Studies and a faculty member of the department's Forensic Science program, told 404 Media. "So they don't want to bring attention to it. Because if they bring attention to it, then something's going to have to be done about that." The interventions dataset creators could make would be labor-intensive, he said, and even with those efforts in place they might not rid the sets of all of it. "It's impossible for them to get rid of all of it. The only answer that society will accept is that you have 0% in there, and it's impossible to do. They're in a no win situation, so they think it's better that people just don't know."

HOW CSAM IN DATASETS AFFECTS REAL PEOPLE 

In a dataset of five billion entries, 3,226 might seem like a drop in an ocean of data. But there are several ways CSAM scraped into LAION's datasets could make things worse for real-life victims.

Dan Sexton, chief technology officer at the UK-based Internet Watch Foundation, told me that the goal for internet safety groups is to prevent more people from viewing or spreading abusive content and to get it offline entirely. We spoke months before Stanford's paper came out, when we didn't know for sure that child abuse material was being scraped into large datasets. "[Victims] knowing that their content is in a dataset that's allowing a machine to create other images---which have learned from their abuse---that's not something I think anyone would have expected to happen, but it's clearly not a welcome development. For any child that's been abused and their imagery circulated, excluding it anywhere on the internet, including datasets, is massive," he said. 

"There's no reason that images of children being sexually abused should ever be in those datasets"

Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection (C3P), told me that he imagines past victims of child sexual abuse would be "absolutely disgusted, no doubt, but probably not necessarily surprised" to learn their images are linked in a dataset like LAION-5B. "They've known for so long that they've had to deal with their images or images and videos circulating on the internet. Some reasonable technical things that could be done for the last well-over a decade, they just haven't been done right," he said. 

"I don't think anyone wants to create a tool that creates images of children being sexually abused, even if it's accidental," Sexton said. "AI is all about having good data, and if you put bad data in, you're going to get bad data out. Of course, this is bad data. You don't want to generate or scrape images of child sexual abuse." 

Until now, it had been theorized that AI models capable of creating child sexual abuse imagery were combining concepts of explicit adult material with non-explicit images of children to create AI-generated CSAM. According to Stanford's report, real abuse imagery is also helping train models.

Artificially generated CSAM is on the rise, and it has the potential to jam up hotlines and divert resources from reporting agencies that work with law enforcement to find perpetrators and get it taken offline. The Internet Watch Foundation recently released a report saying that AI CSAM is "visually indistinguishable from real CSAM," even to trained analysts. Earlier this month, a 404 Media investigation found people using popular image generation platform Civitai were creating what "could be considered child pornography." And in May, the National Center for Missing and Exploited Children, a victim advocacy organization which runs a hotline for reporting CSAM, said it was preparing for a "flood" of artificially generated content.

Richardson told me that actual CSAM training models could mean more realistic abusive deepfakes of victims. "You could have an offender download Stable Diffusion, create a LoRA [Low-Rank Adaptation, a more narrowly-tuned deep learning model] for a specific victim, and start generating new imagery on this victim," he said. Even if the victim's abuse was long in the past and now they're an adult, "now they're having new material created of them based on the existing CSAM that was out there," he said. "So that's hugely problematic." 

"There's no reason that images of children being sexually abused should ever be in those datasets, both to be sure the models themselves don't create undesirable results, but also for those victims to make sure their imagery is not continually and still being used for harmful purposes," Sexton said.

OctoML, the engine that powers the a16z-funded Civitai, thought the images could qualify as "child pornography," but ultimately decided to keep working with the company anyway, internal Slack chats and other material show.

"Given what it's used to train, you can't argue that it's just like having a copy of the internet, so you're going to have some stuff on there that's bad or somehow illegal," Thiel said. "You're operationalizing it by training the models on those things. And given that you have images that will repeat over and over in that dataset that makes the model more likely to not just represent the material, but you'd have the potential for resemblance to occur of actual people that were fed into the data set." 

WHO'S RESPONSIBLE?

Legally, there is no precedent yet for who's responsible when a scraping tool gathers illegal imagery. As Vencu noted in his Discord message in 2021, LAION disseminates links, not actual copies of images. "Since we are not distributing or deriving other images from originals, I do not think the image licensing apply," he said in Discord in response to the question of whether illegal material was in the dataset.

Copyright infringement has been a major concern for artists and content creators whose imagery is being used to train AI models. In April, a German stock photographer asked LAION to exclude his photos from its datasets, and LAION responded by invoicing him for $979, claiming he filed an unjustified copyright claim. Earlier this year, a group of artists filed a class-action lawsuit against Stability AI, DeviantArt, and Midjourney for their use of image generator Stable Diffusion, which uses LAION's datasets. And Getty Images recently sued Stability AI, claiming that the company copied more than 12 million images without permission.

"We have issues with those services, how they were built, what they were built upon, how they respect creator rights or not, and how they actually feed into deepfakes and other things like that," Getty Images CEO Craig Peters told the Associated Press.

Spreading CSAM is a federal crime, and the US laws about it are extremely strict. It is of course illegal to possess or transmit such files, but "undeveloped film, undeveloped videotape, and electronically stored data that can be converted into a visual image of child pornography" are also illegal under federal law. It's not clear where URLs linking to child exploitation images land under current laws, or at what point anyone using these datasets could potentially be in legal jeopardy.

Because anti-CSAM laws are understandably strict, researchers have had to figure out new ways of studying the material's spread without breaking the law themselves. Westlake told me he relies on outsourcing some research to colleagues in Canada, such as the C3P, where CSAM laws carve out exceptions for research purposes, to verify or clean data. Stanford similarly sent its methodology to C3P for verification. The Internet Watch Foundation has a memorandum of understanding granted to it by the Crown Prosecution Service, the principal public criminal prosecuting agency in the UK, to download, view, and hold content for its duties, which enables it to proactively search for abusive content and report it to authorities. In the US, viewing, searching for, or possessing child exploitation material, even if accidental, is a federal crime.

"Places should no longer host those datasets for download."

Rogers and his colleague Kathryn Seigfried-Spellar at Purdue's forensics department have a unique situation: They're deputized, and have law enforcement status granted to them by local law enforcement to do their work. They have a physical space in a secure law enforcement facility, with surveillance cameras, key fobs, a secured network, and 12-factor identification where they must go if they want to do work like cleaning datasets or viewing CSAM for research or investigative purposes. 

Even so, they're incredibly careful about what they collect with scraping tools. Seigfried-Spellar told me she's studying knuckles and hands because they often appear in abuse imagery and are as identifiable as faces. She could scrape images from NSFW Reddit forums where people post images of themselves masturbating, but she doesn't, because of the risk of catching underage imagery in the net.

"Even though you have to be over the age of 18 to use Reddit, I am never going to go and scrape that data and use it, or analyze it for my research, because I can't verify that somebody really is over the age of 18 that posted that," she said. "There have been conversations about that as well: 'there's pictures on the internet, why can't I just scrape and use that for my algorithm training?' But it's because I need to know the age of the sources." 

WHAT TO DO NOW

Because LAION-5B is open-source, lots of copies are floating around publicly, including on Hugging Face. Removing the dataset from Hugging Face, pulling the links to abusive imagery out of it, and then reuploading it, for example, would essentially create a roadmap for anyone determined to view those files: they could simply compare the differences between the two versions.
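The "roadmap" problem is mechanical: anyone holding the original copy can diff it against a cleaned re-upload, and the removed entries fall straight out. A minimal sketch with hypothetical URL lists:

```python
# Sketch of why re-uploading a "cleaned" dataset is risky: a plain set
# difference between the old and new URL lists reconstructs exactly the
# entries that were removed. The URLs here are placeholders.
def removed_entries(old_urls, new_urls):
    """Entries present in the original release but absent from the re-upload."""
    return sorted(set(old_urls) - set(new_urls))

old = ["url-1", "url-2", "url-3", "url-4"]
new = ["url-1", "url-3"]
print(removed_entries(old, new))  # prints ['url-2', 'url-4']
```

This is why simply scrubbing and republishing doesn't work: the act of removal itself flags exactly the material that was meant to disappear.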

Thiel told me that he went into this study thinking the goal might be to get abusive material out of datasets, but now he believes it's too late.

"Now I'm more of the opinion that [the LAION datasets] kind of just need to be scratched," he said. "Places should no longer host those datasets for download. Maybe there's an argument for keeping copies of it for research capacity, and then you can go through and take some steps to clean it." 

There is a precedent for this, especially when it comes to children's data. The Federal Trade Commission has a term for model deletion as damage control: algorithm disgorgement. As an enforcement strategy, the FTC has used algorithm disgorgement in five cases involving tech companies that built models on improperly obtained data, including a settlement with Amazon in May over charges that Alexa voice recordings violated children's privacy, and a joint FTC and Department of Justice settlement with a children's weight-loss app that allegedly failed to properly verify parental consent. Both cases invoked the Children's Online Privacy Protection Act (COPPA).

Child safety and AI are quickly becoming the next major battleground of the internet. In April, Democratic senator Dick Durbin introduced the "STOP CSAM Act," which would make it a crime for providers to "knowingly host or store" CSAM or "knowingly promote or facilitate" the sexual exploitation of children, create a new federal crime for online services that "knowingly promote or facilitate" child exploitation crimes, and amend Section 230---the law that shields platforms from liability for their users' actions---to allow for civil lawsuits by victims of child exploitation crimes against online service providers. Privacy advocates including the Electronic Frontier Foundation and the Center for Democracy and Technology oppose the act, warning that it could undermine end-to-end encryption services. The inclusion of "apparent" CSAM widens the net too much, they say, and the terms "promote" and "facilitate" are overly broad. It could also have a chilling effect on free speech overall: "First Amendment-protected content involving sexuality, sexual orientation, or gender identity will likely be targets of frivolous takedown notices," EFF attorneys and surveillance experts wrote in a blog post.

In September, attorneys general from 50 states called on federal lawmakers to study how AI-driven exploitation can endanger children. "We are engaged in a race against time to protect the children of our country from the dangers of AI," the prosecutors wrote. "Indeed, the proverbial walls of the city have already been breached. Now is the time to act."

Thiel said he hadn't communicated with LAION before the study was released. "We're not intending this as some kind of gotcha for any of the parties involved. But obviously a lot of very important mistakes were made in various parts of this whole pipeline," he said. "And it's really just not how model training in the future should work at all." 

All of this is a problem that's not going away, even---or especially---if it's ignored. "They all have massive problems associated with massive data theft, non consensual, intimate images, Child Sexual Abuse material, you name it, it's in there. I'm kind of perplexed at how it's gone on this long," Richardson said. "It's not that the technology is necessarily bad... it's not that AI is bad. It's the fact that a bunch of things were blindly stolen, and now we're trying to put all these Band-aids to fix something that really never should have happened in the first place."


Update 12/20, 8:19 a.m. EST: This headline was edited to remove the word "suspected" because 1,008 entries were externally validated.

Update 12/20, 11:20 a.m. EST: This story was corrected to reflect Common Crawl's inability to crawl Twitter, Instagram and Facebook.

Update 12/20, 1:32 p.m. EST with comment from Google about its usage of LAION's products. This story has been corrected to reflect that Google trained Imagen on a subset (and earlier version) of LAION-5B called LAION-400M. Its current products do not use LAION datasets. This article was also updated with comment from Stability AI.
