Description: NTB allegedly published a news story about Telenor's annual security report that contained purportedly AI-generated errors, including five reportedly fabricated quotes and a nonexistent "security director." The article was reportedly created with an external AI tool and was withdrawn the same morning. Telenor alerted NTB to the inaccuracies, and NTB apologized, citing improper AI use and initiating an internal review of its newsroom practices.
Entities
Alleged: Unspecified large language model developer developed an AI system deployed by NTB, which harmed Truth, Telenor, Journalistic integrity, General public of Norway, General public, and Epistemic integrity.
Alleged implicated AI system: Unspecified large language model
Incident Stats
Incident ID: 1265
Report Count: 1
Incident Date: 2025-10-28
Editors: Daniel Atherton
Incident Reports
Reports Timeline
Early in the morning of October 15th, something went seriously wrong for NTB when the news agency published a news story about Telenor's annual security report, which had been sent out that morning.
Shortly afterwards, the entire report was…
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.