AI Incident Database

Report 4977

Associated Incidents

Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation

Russian Propaganda Has Now Infected Western AI Chatbots — New Study
forbes.com · 2025

A NewsGuard audit published last week has uncovered a troubling trend: leading Western AI chatbots are unknowingly spreading Russian propaganda. A Moscow-based disinformation network known as "Pravda" (the Russian word for "truth") has been flooding search results and web crawlers with pro-Kremlin falsehoods, causing AI systems to regurgitate misleading narratives. The Pravda network, which published 3.6 million articles in 2024 alone, is leveraging artificial intelligence to amplify Moscow's influence at an unprecedented scale.

The audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources. In an email exchange, NewsGuard analyst Isis Blachez wrote that the study does not "name names" of the AI systems most susceptible to the falsehood flow but acknowledged that the threat is widespread.

"Our research suggests Russian influence operations are shifting methods towards the manipulation of AI models rather than directly targeting human readers, and are doing so to reach a much larger scale," Blachez explained.

Russia "has adopted new strategies so that anyone using AI models is vulnerable to encountering Russian propaganda. What makes this particularly concerning is that this manipulation is subtle and not easily visible --- it's an underlying problem that operates behind the scenes, affecting AI responses without users being aware of it," she added.

A New AI Threat: LLM Grooming

One of the most alarming practices uncovered is what NewsGuard refers to as "LLM grooming": the deliberate poisoning of the datasets that AI models (such as ChatGPT, Claude, Gemini, Grok 3, and Perplexity) train on by flooding those datasets with disinformation.


Blachez noted that this propaganda pile-on is designed to bias AI outputs to align with pro-Russian perspectives. Pravda's approach is methodical, relying on a sprawling network of 150 websites publishing in dozens of languages across 49 countries.

"LLM grooming refers to the intentional manipulation of the data from which AI models train from, ultimately altering their outputs in the goal of advancing a perspective, in this case Russian propaganda. The Pravda network does this by publishing falsehoods en masse, on many web site domains, to saturate search results, as well as leveraging search engine optimization strategies, to contaminate web crawlers that aggregate training data," she wrote.

She added that the scale and sophistication of this effort make it difficult for AI companies to simply block Pravda sites without risking the continued infiltration of false narratives through other channels.
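The saturation tactic Blachez describes can be illustrated with a toy sketch. If a crawler deduplicates pages only by URL, the same narrative republished across many domains enters a training corpus many times over; a content-level check collapses those copies. The sketch below uses a naive normalized-text hash as an assumed stand-in (real pipelines use fuzzier near-duplicate detection such as MinHash), and all domains and page bodies are hypothetical:

```python
import hashlib

def normalize(text: str) -> str:
    """Crude normalization: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def dedupe_by_content(pages):
    """Keep one page per distinct body, no matter how many domains
    republish it. pages: list of (url, body) tuples."""
    seen, kept = set(), []
    for url, body in pages:
        digest = hashlib.sha256(normalize(body).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append((url, body))
    return kept

# The same falsehood mirrored across three hypothetical domains:
pages = [
    ("https://news-a.example/item1", "Claim X is true."),
    ("https://news-b.example/item9", "Claim X  is TRUE."),
    ("https://news-c.example/post3", "Claim X is true."),
    ("https://other.example/story",  "An unrelated report."),
]
print(len(dedupe_by_content(pages)))  # 2 distinct bodies survive
```

Exact hashing only catches verbatim or near-verbatim mirrors; lightly paraphrased copies slip through, which is part of why grooming at Pravda's volume is so hard to filter out of training data.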

The report cites the example of a false claim that Ukrainian President Volodymyr Zelenskyy banned U.S. President Donald Trump's Truth Social app in Ukraine. Seven of the 10 chatbots repeated this falsehood, with some directly citing Pravda articles as sources. This type of misinformation laundering exploits a fundamental weakness in AI: its inability to distinguish between reliable and unreliable sources without substantial guardrails.

What This Means For AI Users and Companies

The implications of this disinformation deluge are considerable. As AI tools become more integrated into everyday life, the risk that foreign operators can shape and reframe public perception grows with them. Blachez warns that AI companies must respond to this threat to prevent their platforms from becoming overrun with propaganda pablum.

"Without proper safeguards in place, AI companies are at risk of serving the Kremlin's disinformation goals," she said.

Users also have a role to play in defending against misinformation. Blachez advises individuals to cross-check information generated by AI, particularly on sensitive or news-related topics. Tools like NewsGuard's Misinformation Fingerprints, which catalog provably false claims, can help users identify and avoid unreliable sources.
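The cross-checking Blachez recommends can be sketched as matching a generated answer against a catalog of known-false claims, in the spirit of (but not an implementation of) NewsGuard's Misinformation Fingerprints. The catalog entry, keyword-overlap rule, and threshold below are all illustrative assumptions:

```python
# Hypothetical catalog of known-false claims; not NewsGuard's data.
FALSE_CLAIM_CATALOG = [
    {"id": "ua-001",
     "keywords": {"zelenskyy", "banned", "truth", "social"},
     "summary": "Zelenskyy banned Truth Social in Ukraine (false)."},
]

def flag_known_falsehoods(answer: str, min_overlap: int = 3):
    """Return catalog entries whose keywords overlap the answer.
    A crude bag-of-words rule; real matchers are far more robust."""
    tokens = set(answer.lower().replace(".", "").split())
    return [entry for entry in FALSE_CLAIM_CATALOG
            if len(entry["keywords"] & tokens) >= min_overlap]

answer = "Reports say Zelenskyy banned Truth Social across Ukraine."
print([h["id"] for h in flag_known_falsehoods(answer)])  # ['ua-001']
```

Even this crude rule shows the idea: flag answers that echo cataloged falsehoods so a user knows to verify against primary sources before trusting the output.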

The Growing Threat Of AI Infiltration

Blocking Pravda domains alone won't fix the issue because the network is constantly expanding, with new domains and subdomains emerging regularly. According to Blachez, the sheer volume of content (more than 20,000 articles every 48 hours) makes it nearly impossible for AI models to avoid ingesting false narratives.

"Simply filtering out Pravda sites will not solve the issue at stake, given that the network is constantly growing," she said.

The NewsGuard report highlights the urgency for AI companies to adopt more robust verification and content-sourcing practices. As Blachez pointed out, the risk extends beyond political misinformation. The techniques the Pravda network is pioneering could be adapted by other malicious actors to manipulate AI models on a range of issues, from financial markets to health information.

The threat is clear. If AI companies fail to act, the tools they designed to democratize information could become channels for disinformation at a global scale.
