Description: The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.
Entities
Alleged: OpenAI, Microsoft, Microsoft Bing Chat, GPT-4, and ChatGPT developed and deployed an AI system, which harmed Writers, The New York Times, publishers, Media organizations, Journalists, Journalism, Journalistic integrity, and Epistemic integrity.
Incident Stats
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI
2.1. Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Privacy & Security
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.
The New York Times sued OpenAI and Microsoft on Wednesday over the tech companies' use of its copyrighted articles to train their artificial intelligence technology, joining a growing wave of opposition to the tech industry's using creative…
Variants
A "variant" is an AI incident similar to a known case: it shares the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.