AI Incident Database

Report 5000

Associated Incidents

Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Court docs allege Meta trained AI model using LibGen
theregister.com · 2025

Meta allegedly downloaded material from an online source that’s been sued for breaching copyright, because it wanted the material to train its AI models, according to a new court filing.

The accusation was made in a document [PDF] filed in the case of Richard Kadrey et al vs Meta Platforms, in which novelist Kadrey (and others including comedian Sarah Silverman) allege stolen versions of their work were used to train AI models. Several similar suits are in motion, targeting different AI players.

The document claims that Meta decided to download documents from Library Genesis – aka “LibGen” – to train its models. LibGen is the subject of a lawsuit brought by textbook publishers who believe it happily hosts and distributes stolen works, and even accepts donations to fund its operations.

The filing from plaintiffs in the Kadrey case claims that documents produced by Meta during the discovery process – the pre-trial activity of gathering relevant documents – describe internal debate about accessing LibGen, a little squeamishness about using BitTorrent in the office to do so, and eventual escalation to “MZ” who approved use of the contentious resource. The filing states that evidence about use of LibGen is new and was made available by Meta late in the discovery process.

Another filing [PDF] claims that a Meta document describes how it removed copyright notifications from material downloaded from LibGen, and suggests the company did so because it realized including such text could mean a model’s output would reveal it was trained on copyrighted material.

A third document [PDF], this one filed by Meta, argues that the plaintiffs have unjustifiably claimed that use of LibGen is new material and contends that it was on the record for months.

The nub of the matter appears to be an attempt by the plaintiffs to use the info about Meta’s use of LibGen to add an action under California’s Comprehensive Computer Data Access and Fraud Act. That law makes it a crime to access a computer or network without permission with the intent to defraud or commit other crimes. Meta doesn’t think the extra action is justified.

Meta’s filing includes a statement that the company “rejects the notion that it has ‘distributed’ LibGen”, seemingly to address the plaintiffs’ argument that merely using BitTorrent meant it spread stolen content to others. But if there’s a denial that LibGen was accessed, we can’t find it.

Meta tried to have the filings we’ve linked to above sealed on grounds of commercial sensitivity. The judge in the case rejected that, arguing that Meta just wants to avoid publicity.

US District Court Judge Vince Chhabria also noted that in one of the documents Meta wants to seal, an employee wrote the following:

Sorry if we undermined you, Zuck.

The allegation of using LibGen is very on-brand for Meta, given its business model is built on free content contributed by users. Why should pesky authors be treated any differently? ®

Read the Source
