Incident 457: Article-Writing AI by CNET Allegedly Committed Plagiarism

Description: CNET's use of generative AI to write articles allegedly ran into plagiarism issues, with the system reproducing verbatim phrases from other published sources or making only minor changes to existing texts, such as altering capitalization, swapping words for synonyms, and tweaking syntax.


Alleged: An unknown developer created an AI system deployed by CNET, which harmed plagiarized entities and CNET readers.

Editors: Khoa Lam, Daniel Atherton
CNET pauses publishing AI-written stories after disclosure controversy · 2023

CNET will pause publication of stories generated using artificial intelligence “for now,” the site’s leadership told employees on a staff call Friday.

The call, which lasted under an hour, was held a week after CNET came under fire for its …

CNET Pauses AI-Written Articles to Let Backlash Die Down · 2023

CNET told staff it would halt the publication of articles generated via artificial intelligence, in a Friday call, according to a report from The Verge. Or, at least, the company said it would pause the AI-article practice “for now,” as it …

CNET's AI Journalist Appears to Have Committed Extensive Plagiarism · 2023

The site initially addressed widespread backlash to the bot-written articles by assuring readers that a human editor was carefully fact-checking them all prior to publication.

Afterward, though, Futurism found that a substantial number of e…

News Site Admits AI Journalist Plagiarized and Made Stuff Up, Announces Plans to Continue Publishing Its Work Anyway · 2023

This morning, *CNET* editor-in-chief Connie Guglielmo broke the site's lengthy silence on its decision to publish dozens of AI-generated articles about personal finance topics on its site.

It appears to be the first time that anyone in the…

CNET's AI Plagiarism Debacle · 2023

Over the past two weeks, CNET has become the poster child of artificial intelligence (AI) gone wrong.

First, the website was caught publishing articles produced by an AI under the byline “CNET Money Staff” in an article written by Frank Lan…

CNET Cops to Error Prone AI Writer, Doubles Down on Using It · 2023

After getting caught using an algorithm to write dozens of articles, the tech publication CNET has apologized (sorta) but wants everybody to know that it definitely has no intention of calling it quits on AI journalism.

Yes, roughly two wee…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents


Biased Sentiment Analysis · 7 reports

Amazon Censors Gay Books · 24 reports