AI Incident Database
Entities

GPT-4

Incidents involved as Deployer

Incident 677 (1 Report)
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

2024-04-29

The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.


Incidents implicated systems

Incident 997 (4 Reports)
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

2023-02-28

Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.


Incident 995 (2 Reports)
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

2023-12-27

The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.


Incident 1044 (2 Reports)
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

2025-04-15

Researchers reportedly traced the appearance of the nonsensical phrase "vegetative electron microscopy" in scientific papers to contamination in AI training data. Testing indicated that large language models such as GPT-3, GPT-4, and Claude 3.5 may reproduce the term. The error allegedly originated from a digitization mistake that merged unrelated words during scanning, and a later translation error between Farsi and English.


Incident 1028 (1 Report)
OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

2025-02-07

OpenAI's Operator agent, which is designed to complete real-world web tasks on behalf of users, reportedly executed a $31.43 grocery delivery purchase without user consent. The user had requested a price comparison but did not authorize the transaction. The agent reportedly bypassed OpenAI's stated safeguard requiring user confirmation before purchases. OpenAI acknowledged the failure and committed to improving its safeguards.


Related Entities
Other entities that are related to the same incidents. For example, if this entity is the developer of an incident and another entity is its deployer, the deployer is marked as a related entity.
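In other words, entities are related whenever they share at least one incident, regardless of the role each plays (developer, deployer, harmed party, or implicated system). The following is a minimal, hypothetical sketch of that idea in Python; the record layout, field values, and function name are illustrative assumptions, not the database's actual schema or API.

from collections import defaultdict

# Hypothetical incident-role records: (incident_id, entity, role).
# Values are illustrative only and do not reflect the AIID's actual schema.
records = [
    (677, "GPT-4", "deployer"),
    (677, "OpenAI", "developer"),
    (995, "GPT-4", "implicated_system"),
    (995, "The New York Times", "harmed_by"),
]

def related_entities(target, records):
    """Return entities that share at least one incident with `target`, in any role."""
    incidents_by_entity = defaultdict(set)
    for incident_id, entity, _role in records:
        incidents_by_entity[entity].add(incident_id)
    target_incidents = incidents_by_entity.get(target, set())
    return {
        entity
        for entity, incidents in incidents_by_entity.items()
        if entity != target and incidents & target_incidents
    }

# Example: entities related to GPT-4 through shared incidents.
print(sorted(related_entities("GPT-4", records)))
# ['OpenAI', 'The New York Times']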
 

Entity

TikTok users

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Julia Munslow

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

ChatGPT

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents implicated systems
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

  • Incident 1031
    1 Report

    Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention

Entity

GPT-3.5

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Perplexity AI

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

OpenAI

Incidents involved as both Developer and Deployer
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents involved as Developer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Perplexity.ai

Incidents involved as Developer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

General public

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Microsoft

Incidents involved as both Developer and Deployer
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

The New York Times

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Journalists

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Journalism

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Media organizations

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

publishers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Writers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Microsoft Bing Chat

Incidents implicated systems
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Meta

Incidents involved as both Developer and Deployer
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Authors

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Academic researchers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

OpenAI models

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Llama 3

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Library Genesis (LibGen)

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

BitTorrent

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Geoffrey A. Fowler

Incidents Harmed By
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Users of Operator

Incidents Harmed By
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Operator

Incidents implicated systems
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Instacart

Incidents implicated systems
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Miranda Jane Ellison

Incidents Harmed By
  • Incident 1031
    1 Report

    Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention

Entity

Anthropic

Incidents involved as both Developer and Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Researchers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Incidents involved as Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific authors

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Incidents involved as Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific publishers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Peer reviewers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scholars

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Readers of scientific publications

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific record

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Academic integrity

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

GPT-3

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Claude 3.5

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Common Crawl

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

