AI Incident Database
Entities

GPT-4

Incidents involved as Deployer

Incident 677 · 1 Report
ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

2024-04-29

The "Dan" ("Do Anything Now") AI boyfriend is a TikTok trend in which users appear to regularly manipulate ChatGPT into adopting boyfriend personas, breaching content policies. ChatGPT 3.5 is reported to regularly produce explicitly sexual content, directly violating its intended safety protocols. GPT-4 and Perplexity AI were subjected to similar manipulations; although they exhibited more resistance to breaches, some prompts were reported to break their guidelines.


Incidents implicated systems

Incident 997 · 4 Reports
Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

2023-02-28

Court records reveal that Meta employees allegedly discussed pirating books to train LLaMA 3, citing cost and speed concerns with licensing. Internal messages suggest Meta accessed LibGen, a repository of over 7.5 million pirated books, with apparent approval from Mark Zuckerberg. Employees allegedly took steps to obscure the dataset’s origins. OpenAI has also been implicated in using LibGen.


Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

2023-12-27

The New York Times alleges that OpenAI and Microsoft used millions of its articles without permission to train AI models, including ChatGPT. The lawsuit claims the companies scraped and reproduced copyrighted content without compensation, in turn undermining the Times’s business and competing with its journalism. Some AI outputs allegedly regurgitate Times articles verbatim. The lawsuit seeks damages and demands the destruction of AI models trained on its content.


Incident 1044 · 2 Reports
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

2025-04-15

Researchers reportedly traced the appearance of the nonsensical phrase "vegetative electron microscopy" in scientific papers to contamination in AI training data. Testing indicated that large language models such as GPT-3, GPT-4, and Claude 3.5 may reproduce the term. The error allegedly originated from a digitization mistake that merged unrelated words during scanning, and a later translation error between Farsi and English.
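Once a fingerprint phrase like this is known, contaminated text is straightforward to screen for. Below is a minimal, hypothetical Python sketch (the phrase list, function name, and matching rule are assumptions for illustration, not any publisher's or the researchers' actual workflow) that flags manuscript text containing such a phrase:

```python
# Hypothetical sketch: screen manuscript text for known "fingerprint" phrases
# (such as "vegetative electron microscopy") that have been linked to AI
# training-data contamination. The phrase list and matching rule are
# assumptions, not part of any real editorial workflow.
import re

FINGERPRINT_PHRASES = [
    "vegetative electron microscopy",  # phrase reported in this incident
]

def flag_contamination(text: str) -> list[str]:
    """Return the fingerprint phrases found in the given manuscript text."""
    found = []
    lowered = text.lower()
    for phrase in FINGERPRINT_PHRASES:
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            found.append(phrase)
    return found

if __name__ == "__main__":
    sample = "The samples were imaged using vegetative electron microscopy."
    print(flag_contamination(sample))  # ['vegetative electron microscopy']
```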


Incident 1028 · 1 Report
OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

2025-02-07

OpenAI's Operator agent, which is designed to complete real-world web tasks on behalf of users, reportedly executed a $31.43 grocery delivery purchase without user consent. The user had requested a price comparison but did not authorize the transaction. The agent reportedly bypassed OpenAI's stated safeguard requiring user confirmation before purchases. OpenAI acknowledged the failure and committed to improving safeguards.
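The safeguard that reportedly failed is a confirmation gate: the agent is supposed to obtain explicit user approval before completing any purchase. Below is a minimal, hypothetical Python sketch of such a gate (the names, data types, and flow are assumptions for illustration, not OpenAI's Operator implementation):

```python
# Hypothetical sketch of a purchase confirmation gate for an agent acting on a
# user's behalf. This is NOT OpenAI's Operator implementation; the names and
# logic are assumptions illustrating the safeguard described in the report:
# no transaction should execute without explicit user confirmation.
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    merchant: str
    amount_usd: float

def confirm_with_user(request: PurchaseRequest) -> bool:
    """Block until the user explicitly approves the purchase."""
    answer = input(f"Approve ${request.amount_usd:.2f} at {request.merchant}? [y/N] ")
    return answer.strip().lower() == "y"

def execute_purchase(request: PurchaseRequest) -> None:
    # Placeholder for the real checkout action.
    print(f"Purchased ${request.amount_usd:.2f} at {request.merchant}")

def agent_checkout(request: PurchaseRequest) -> None:
    # The gate: the agent must never call execute_purchase without consent.
    if not confirm_with_user(request):
        print("Purchase cancelled: user did not confirm.")
        return
    execute_purchase(request)

if __name__ == "__main__":
    agent_checkout(PurchaseRequest(merchant="Instacart", amount_usd=31.43))
```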


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity

TikTok users

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Julia Munslow

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

ChatGPT

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents implicated systems
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

  • Incident 1031
    1 Report

    Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention

Entity

GPT-3.5

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Perplexity AI

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents involved as Deployer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

OpenAI

Incidents involved as Developer and Deployer
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Incidents involved as Developer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Perplexity.ai

Incidents involved as Developer
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

General public

Incidents Harmed By
  • Incident 677
    1 Report

    ChatGPT and Perplexity Reportedly Manipulated into Breaking Content Policies in AI Boyfriend Scenarios

Entity

Microsoft

Incidents involved as Developer and Deployer
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

The New York Times

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Journalists

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Journalism

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Media organizations

Incidents Harmed By
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

publishers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Writers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Microsoft Bing Chat

Incidents implicated systems
  • Incident 995
    2 Reports

    The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content

Entity

Meta

Incidents involved as Developer and Deployer
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Authors

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Academic researchers

Incidents Harmed By
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

OpenAI models

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Llama 3

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Library Genesis (LibGen)

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

BitTorrent

Incidents implicated systems
  • Incident 997
    4 Reports

    Meta and OpenAI Accused of Using LibGen’s Pirated Books to Train AI Models

Entity

Geoffrey A. Fowler

Incidents Harmed By
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Users of Operator

Incidents Harmed By
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Operator

Incidents implicated systems
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Instacart

Incidents implicated systems
  • Incident 1028
    1 Report

    OpenAI's Operator Agent Reportedly Executed Unauthorized $31.43 Transaction Despite Safety Protocol

Entity

Miranda Jane Ellison

Incidents Harmed By
  • Incident 1031
    1 Report

    Transgender User Alleges ChatGPT Allowed Suicide Letter Without Crisis Intervention

Entity

Anthropic

Incidents involved as Developer and Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Researchers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Incidents involved as Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific authors

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Incidents involved as Deployer
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific publishers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Peer reviewers

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scholars

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Readers of scientific publications

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Scientific record

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Academic integrity

Incidents Harmed By
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

GPT-3

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Claude 3.5

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Entity

Common Crawl

Incidents implicated systems
  • Incident 1044
    2 Reports

    Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination


2024 - AI Incident Database