Skip to Content
logologo
AI Incident Database
Open TwitterOpen RSS FeedOpen FacebookOpen LinkedInOpen GitHub
Open Menu
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse
Discover
Submit
  • Welcome to the AIID
  • Discover Incidents
  • Spatial View
  • Table View
  • List view
  • Entities
  • Taxonomies
  • Submit Incident Reports
  • Submission Leaderboard
  • Blog
  • AI News Digest
  • Risk Checklists
  • Random Incident
  • Sign Up
Collapse

Report 4980

Associated Incidents

Incident 96827 Report
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation

Loading...
Russian Disinformation 'Infects' Popular AI Chatbots
pcmag.com · 2025

A Russia-based disinformation network has successfully "infected" many of the world’s most popular AI chatbots with pro-Kremlin misinformation, according to a new report by NewsGuard.

Rather than targeting readers with propaganda directly, the network reportedly publishes millions of articles in different languages, pushing its narratives across the web, hoping they will be incorporated as training data used by large language models (LLMs) like OpenAI’s ChatGPT or xAI’s Grok. NewsGuard dubbed this practice "AI grooming."

The pro-Kremlin network, known as Pravda (Russian for truth), began shortly after the Russian invasion of Ukraine in 2022 and has gradually increased in scale to roughly 150 websites.

NewsGuard audited 10 of the most popular AI chatbots: OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s Le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. NewsGuard queried the chatbots about 15 pro-Russia narratives that have been advanced by a network of Pravda's websites since the start of the war.

For example, NewsGuard claims that four out of the 10 chatbots that were evaluated regurgitated claims that members of the Ukrainian Azov Battalion burned effigies of President Trump, citing articles from the disinformation network as their sources.

Other false claims the Pravda network spread that NewsGuard used in this analysis included French police saying that an official from Ukraine President Zelensky’s Defense Ministry stole $46 million and that Zelensky personally spent 14.2 million euros of Western military funding to buy a famous German countryside retreat frequented by Adolf Hitler.

The disinformation network managed to effectively influence many of these mainstream chatbots with barely any organic reach. Pravda-en.com, an English-language site within the network, only averaged 955 monthly unique visitors.

However, the operation focused on saturating search results with a huge volume of content. The report by the American Sunlight Project (ASP) found that, on average, the network publishes 20,273 articles every 48 hours, or roughly 3.6 million a year.

But the impact of Russian disinformation varied widely depending on which chatbot researchers looked at. One chatbot cited information 55% of the time after being presented with the false narratives, while another did so just over 6% of the time. (NewsGuard didn’t reveal which particular chatbot was behind each result.)

The highest levels of Russian leadership have already openly discussed the importance of controlling the narratives of AI models and search engines.

Russian President Vladimir Putin said in a 2023 conference that AI "created in line with Western standards and patterns could be xenophobic" and that "Western search engines and generative models often work in a very selective, biased manner."

Online Russian disinformation is nothing new, but AI is being used in increasingly creative ways for propaganda. OpenAI has highlighted Chinese linked accounts using ChatGPT to produce propaganda articles from scratch for publication in mainstream Latin American newspapers.

Meanwhile, many legitimate publications have blocked LLMs from scraping their stories, arguing that they’re making money from stolen content. Some, like The New York Times, have filed suit. Others have secured deals with AI companies to use their information.

Read the Source

Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • e1b50cd