AI Incident Database
Entities

GPT-4

Incidents involved as Deployer

Incident 677 (1 Report)
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

2024-04-29

The "Dan" ("Do Anything Now") AI boyfriend is a trend on TikTok in which users appear to regularly manipulate ChatGPT to adopt boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations, and although they exhibited more resistance to breaches, some prompts were reported to break its guidelines.

Incidents implicated systems

Incident 997 (4 Reports)
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

2023-02-28

Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.

Incident 1106 (4 Reports)
Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

2025-06-13

Multiple reports from March to June 2025 describe cases in which chatbots allegedly reinforced delusional beliefs, conspiracies, and dangerous behavior. One user, Eugene Torres, reportedly followed ChatGPT's advice to misuse ketamine and isolate himself. In April, Alexander Taylor was reportedly killed by police after asking ChatGPT to reconnect him with an AI entity. Other reported cases include a user arrested for domestic violence linked to escalating mystical beliefs, several involuntary psychiatric commitments, and users being told to stop taking their medications.

Incident 995 (2 Reports)
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

2023-12-27

The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.

Incident 1044 (2 Reports)
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

2025-04-15

Researchers reportedly traced the appearance of the nonsensical phrase "vegetative electron microscopy" in scientific papers to contamination in AI training data. Testing indicated that large language models such as GPT-3, GPT-4, and Claude 3.5 may reproduce the term. The error allegedly originated from a digitization mistake that merged unrelated words during scanning, and a later translation error between Farsi and English.

Related Entities

Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity: TikTok users
Incidents involved as Deployer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: Julia Munslow
Incidents involved as Deployer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: ChatGPT
Incidents involved as Deployer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents implicated systems
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: GPT-3.5
Incidents involved as Deployer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: Perplexity AI
Affected by incidents
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents involved as Deployer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: OpenAI
Incidents involved as Developer and Deployer
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
Affected by incidents
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios
Incidents involved as Developer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: Perplexity.ai
Incidents involved as Developer
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: General public
Affected by incidents
  • Incident 677 (1 Report): ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity: Microsoft
Incidents involved as Developer and Deployer
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: The New York Times
Affected by incidents
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Journalists
Affected by incidents
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Journalism
Affected by incidents
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Media organizations
Affected by incidents
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: publishers
Affected by incidents
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Writers
Affected by incidents
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Microsoft Bing Chat
Incidents implicated systems
  • Incident 995 (2 Reports): The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity: Meta
Incidents involved as Developer and Deployer
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: Authors
Affected by incidents
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: Academic researchers
Affected by incidents
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: OpenAI models
Incidents implicated systems
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: Llama 3
Incidents implicated systems
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: Library Genesis (LibGen)
Incidents implicated systems
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: BitTorrent
Incidents implicated systems
  • Incident 997 (4 Reports): Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity: Geoffrey A. Fowler
Affected by incidents
  • Incident 1028 (1 Report): OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity: Users of Operator
Affected by incidents
  • Incident 1028 (1 Report): OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity: Operator
Incidents implicated systems
  • Incident 1028 (1 Report): OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity: Instacart
Incidents implicated systems
  • Incident 1028 (1 Report): OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity: Miranda Jane Ellison
Affected by incidents
  • Incident 1031 (1 Report): Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention

Entity: Anthropic
Incidents involved as Developer and Deployer
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Researchers
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
Incidents involved as Deployer
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Scientific authors
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
Incidents involved as Deployer
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Scientific publishers
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Peer reviewers
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Scholars
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Readers of scientific publications
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Scientific record
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Academic integrity
Affected by incidents
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: GPT-3
Incidents implicated systems
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Claude 3.5
Incidents implicated systems
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Common Crawl
Incidents implicated systems
  • Incident 1044 (2 Reports): Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity: Unnamed Copilot users
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Unnamed ChatGPT users
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: OpenAI users
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Eugene Torres
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: ChatGPT users
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Andrew (surname withheld)
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Allyson (surname withheld)
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Alexander Taylor
Affected by incidents
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Unnamed therapy chatbots
Incidents implicated systems
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: GPT-4o
Incidents implicated systems
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm

Entity: Copilot
Incidents implicated systems
  • Incident 1106 (4 Reports): Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
