Incident 1044: Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination

Description: Researchers reportedly traced the appearance of the nonsensical phrase "vegetative electron microscopy" in scientific papers to contamination in AI training data. Testing indicated that large language models such as GPT-3, GPT-4, and Claude 3.5 may reproduce the term. The error allegedly originated from a digitization mistake that merged unrelated words during scanning and was later reinforced by a translation error between Farsi and English.
Editor Notes: Timeline notes: The phrase "vegetative electron microscopy" reportedly originated from a digitization error in 1950s scientific texts and was later reinforced by a translation mistake in papers published in 2017 and 2019. In 2025, researchers allegedly found that several large language models were reproducing the term, possibly due to training data contamination from Common Crawl.
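
The reports do not spell out the exact protocol used to test the models; the sketch below, assuming the OpenAI Python SDK and an available chat model, shows one plausible way to probe whether a model completes "vegetative electron" with "microscopy". The prompt wording, the model name, and the check_contamination helper are illustrative assumptions, not details taken from the incident reports.

# Hypothetical contamination probe (Python, OpenAI SDK >= 1.0).
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Continue this sentence from a materials-science methods section: "
    "'The samples were examined using vegetative electron'"
)

def check_contamination(model: str) -> bool:
    """Return True if the model completes the prompt with 'microscopy'."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=20,
        temperature=0,
    )
    completion = (response.choices[0].message.content or "").lower()
    return "microscopy" in completion

if __name__ == "__main__":
    for model in ["gpt-4o-mini"]:  # substitute the models under test
        print(model, "reproduces the phrase:", check_contamination(model))

A deterministic completion (temperature=0) makes the probe repeatable, but a single prompt is only suggestive; the claim in the reports concerns behavior across several models and prompts.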

Entities

Alleged: OpenAI and Anthropic developed an AI system deployed by OpenAI, Anthropic, Researchers, and Scientific authors, which harmed Researchers, Scientific authors, Scientific publishers, Peer reviewers, Scholars, Readers of scientific publications, Scientific record, and Academic integrity.
Alleged implicated AI systems: GPT-3, GPT-4, Claude 3.5, and Common Crawl

Incident Stats

Incident ID: 1044
Report Count: 2
Incident Date: 2025-04-15
Editors: Daniel Atherton

Incident Reports

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data
theconversation.com · 2025

Earlier this year, scientists discovered a peculiar term appearing in published papers: "vegetative electron microscopy".

This phrase, which sounds technical but is actually nonsense, has become a "digital fossil" -- an error preserved and …

As a nonsense phrase of shady provenance makes the rounds, Elsevier defends its use
retractionwatch.com · 2025

The phrase was so strange it would have stood out even to a non-scientist. Yet "vegetative electron microscopy" had already made it past reviewers and editors at several journals when a Russian chemist and scientific sleuth noticed the odd …
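
Neither report describes an automated screening step; purely to illustrate how such a "digital fossil" could be caught before reaching reviewers, here is a hypothetical Python check that scans manuscript text for known nonsense phrases. The FOSSIL_PHRASES list and the flag_digital_fossils function are invented for this sketch and are not part of any journal's workflow.

# Hypothetical pre-review screen for known nonsense "digital fossil" phrases.
import re

FOSSIL_PHRASES = [
    "vegetative electron microscopy",
]

def flag_digital_fossils(text: str) -> list[tuple[str, int]]:
    """Return (phrase, occurrence count) pairs for phrases found in text."""
    hits = []
    for phrase in FOSSIL_PHRASES:
        count = len(re.findall(re.escape(phrase), text, flags=re.IGNORECASE))
        if count:
            hits.append((phrase, count))
    return hits

if __name__ == "__main__":
    sample = "Morphology was assessed by vegetative electron microscopy."
    print(flag_digital_fossils(sample))  # [('vegetative electron microscopy', 1)]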

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

Gender Biases in Google Translate
Apr 2017 · 10 reports

OpenAI's GPT-3 Associated Muslims with Violence
Aug 2020 · 3 reports
