Description: On June 25, 2025, the American Security Project published a report outlining how several major U.S. LLMs, including ChatGPT, Microsoft Copilot, Google Gemini, and Grok, sometimes generated responses aligned with Chinese Communist Party propaganda or censorship when prompted in English and Simplified Chinese on sensitive topics. The study found similar patterns in the Chinese-developed DeepSeek-R1 model.
Entities
Alleged: xAI, OpenAI, Microsoft, Google, DeepSeek, Grok, Gemini, Copilot, and ChatGPT developed and deployed an AI system, which harmed pro-democracy activists, policymakers, Grok users, the general public, Gemini users, epistemic integrity, democracy, DeepSeek users, Copilot users, civil society groups, ChatGPT users, and national security and intelligence stakeholders.
Incident Stats
Risk Subdomain
A further 23 subdomains provide an accessible and understandable classification of hazards and harms associated with AI
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Reports Timeline
AIID editor's note: See the original source for the full PDF report.
The Chinese Communist Party’s aggressive censorship laws and disinformation campaigns have resulted in a proliferation of propaganda and censorship across the global AI da…
Leading AI chatbots are reproducing Chinese Communist Party (CCP) propaganda and censorship when questioned on sensitive topics.
According to the American Security Project (ASP), the CCP's extensive censorship and disinformation efforts hav…
Variants
A "variant" is an AI incident similar to a known case: it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.