AI Incident Database

Report 4889

Associated Incidents

Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation

Russian Disinformation Infiltrates AI Systems - https://eutoday.net
eutoday.net · 2025

A Moscow-based propaganda network named "Pravda" has systematically infiltrated artificial intelligence (AI) systems worldwide, embedding Russian disinformation into leading generative AI tools, an audit by NewsGuard has revealed.

The audit found that AI chatbots repeated false claims from the pro-Kremlin Pravda network 33% of the time, exposing a significant weakness in AI models’ ability to filter out misleading content.

AI as a New Battleground for Disinformation

Unlike traditional disinformation campaigns that target human audiences, the Pravda network appears to be optimised to manipulate AI chatbots. By flooding search engines with fabricated narratives and false claims, the network ensures that AI models trained on publicly available data absorb and reproduce Kremlin-backed disinformation.

NewsGuard’s investigation examined ten leading AI chatbots, including OpenAI’s ChatGPT-4o, Google’s Gemini, Microsoft’s Copilot, Meta AI, and xAI’s Grok. When tested on 15 specific false narratives, these AI systems not only repeated the misinformation but, in some cases, directly cited Pravda network sources as legitimate news outlets.

The scale of this operation is extensive. The Pravda network produced 3.6 million articles in 2024 alone and operates across 150 domains in multiple languages, targeting 49 countries. While its websites receive minimal direct human traffic, its primary goal appears to be influencing AI models rather than building an organic readership.

How Pravda’s Disinformation is Integrated into AI Models

Rather than creating original content, the Pravda network aggregates and amplifies Kremlin narratives by republishing stories from Russian state media, pro-Kremlin influencers, and government agencies. AI models trained on vast internet datasets then absorb these narratives and can go on to present them as fact.

One of the key figures promoting this strategy is John Mark Dougan, a U.S. fugitive who has become a pro-Kremlin propagandist based in Moscow. In a 2025 conference attended by Russian officials, Dougan stated, “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.”

His remarks align with the findings of the NewsGuard audit, which confirm that AI models increasingly reproduce Russian-influenced narratives as a result of systematic “LLM grooming”: the practice of saturating a model’s training data with specific content in order to shape its outputs.

The Global Reach of Pravda’s AI Manipulation

The Pravda network extends far beyond Russia, operating in numerous languages and regions. Approximately 40 of its websites publish in Russian, targeting Ukraine through domains such as News-Kiev.ru and Donetsk-News.ru. Around 70 sites target Europe, publishing in English, French, Czech, Irish, and Finnish, while others cater to audiences in Africa, the Middle East, and Asia.

An analysis by Viginum, a French government agency monitoring foreign disinformation, traced the network’s operations to TigerWeb, a Crimea-based IT firm linked to Yevgeny Shevchenko, a web developer with ties to the Russian-backed Crimean administration. Viginum confirmed that the network meets the criteria for “foreign digital interference.”

AI Systems’ Susceptibility to Misinformation

NewsGuard’s study found that leading AI models repeated Russian disinformation 33.55% of the time, provided a non-response 18.22% of the time, and successfully debunked the misinformation in 48.22% of cases. The failure of these models to filter out Kremlin-backed narratives poses significant political and social risks.
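The reported rates can be sanity-checked with simple arithmetic. A minimal sketch, assuming a total of 450 responses (10 chatbots × 15 narratives × 3 prompt framings — the three-framing count is an assumption, not stated in the article), shows the three categories cover nearly all responses and implies the underlying raw counts:

```python
# Reported outcome rates from the NewsGuard audit (percent of responses).
reported = {"repeated_disinfo": 33.55, "non_response": 18.22, "debunked": 48.22}

# Sanity check: the three categories should cover (almost) all responses;
# the small shortfall is rounding in the published figures.
coverage = sum(reported.values())
print(f"coverage: {coverage:.2f}%")  # 99.99%

# Under the assumed total of 450 responses, the implied raw counts are:
total = 450
counts = {k: round(v / 100 * total) for k, v in reported.items()}
print(counts)  # implied counts sum back to 450
```

The implied counts (151 repeated, 82 non-responses, 217 debunked) are only an estimate under the assumed total; NewsGuard may have aggregated differently.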

In one instance, when prompted with the false claim that Ukrainian President Volodymyr Zelensky banned Donald Trump’s Truth Social in Ukraine, six AI models repeated the fabricated story, often citing Pravda-affiliated sources. Another false claim—that fighters from Ukraine’s Azov Battalion burned an effigy of Trump—was similarly propagated by four of the tested chatbots.

The Challenges of Filtering AI Disinformation

The Pravda network’s strategy of continually launching new domains makes it difficult for AI companies to block disinformation at its source. Moreover, since the network primarily republishes Kremlin-aligned narratives rather than creating original content, removing its sites does not eliminate the spread of false claims.
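The domain-churn problem can be illustrated with a short sketch: a static blocklist filters known network domains out of a corpus, but the same republished story on a freshly registered domain passes straight through. The blocklist entries below are the two domains named in this article; the fresh domain is hypothetical.

```python
from urllib.parse import urlparse

# Illustrative static blocklist of known network domains. A real filter
# would need constant updating as new domains launch.
BLOCKLIST = {"news-kiev.ru", "donetsk-news.ru"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is on the static blocklist."""
    host = urlparse(url).hostname or ""  # hostname is already lowercased
    return host in BLOCKLIST

corpus = [
    "https://News-Kiev.ru/story-1",               # known domain: filtered
    "https://pravda-new-domain.example/story-1",  # hypothetical fresh
                                                  # domain: slips through
]
kept = [u for u in corpus if not is_blocked(u)]
print(kept)  # only the freshly registered domain survives the filter
```

This is why the article notes that removing known sites does not stop the spread: the content, not the domain, is the persistent unit.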

Russian President Vladimir Putin has acknowledged the importance of AI in information warfare. At a 2023 AI conference in Moscow, he criticised Western generative AI models for their “biased” approach and pledged increased investment in Russian AI development.

Addressing AI’s Vulnerability to Disinformation

Experts warn that if AI companies do not take proactive steps to detect and counteract the influence of disinformation networks, AI-generated responses will continue to be exploited as tools of foreign propaganda. Potential solutions include enhanced filtering mechanisms, improved transparency in AI training data, and partnerships with independent fact-checkers to verify AI-generated content.

As generative AI becomes an increasingly dominant source of information, its susceptibility to disinformation presents a critical challenge for policymakers, AI developers, and the public. Without intervention, AI may become an unwitting amplifier of state-backed propaganda, reshaping global discourse in ways that serve authoritarian interests.
