Description: Hidden prompts were reportedly discovered in at least 17 academic preprints on arXiv, instructing AI tools to deliver only positive peer reviews. The lead authors are reportedly affiliated with 14 institutions in eight countries, including Waseda University, KAIST, Peking University, and the University of Washington. The concealed instructions, some allegedly embedded in white text or tiny fonts, were reportedly intended to influence any reviewers who rely on AI tools.
Entities
Alleged: Unnamed large language model developers developed an AI system deployed by Unnamed peer reviewers and Unnamed conference paper reviewers, which harmed Peer review process, Academic integrity, Academic conferences, and Research community.
Alleged implicated AI system: Unnamed large language models
Incident Stats
Incident ID
1135
Report Count
1
Incident Date
2025-07-01
Editors
Daniel Atherton
Incident Reports
Reports Timeline
TOKYO -- Research papers from 14 academic institutions in eight countries -- including Japan, South Korea and China -- contained hidden prompts directing artificial intelligence tools to give them good reviews, Nikkei has found.
Nikkei look…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents