Five major AI chatbots flagged for repeating Chinese Communist Party propaganda
Five popular AI models—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok—all showed bias toward Chinese Communist Party (CCP) viewpoints and censored material deemed sensitive by Beijing. Only one model, DeepSeek-R1, originated in China.
The American Security Project, a US-backed think tank, released the findings Wednesday in a new report.
Researchers prompted the chatbots in English and Chinese on topics Beijing considers sensitive, such as the June 4, 1989 Tiananmen Square massacre. Most models responded with neutral or Beijing-approved language, avoiding words like “massacre” or any explicit mention of victims and perpetrators. DeepSeek and Copilot, for example, called it “The June 4th Incident” rather than a massacre, mirroring official CCP terminology.
Microsoft’s Copilot was the most likely to present CCP propaganda as factual. X’s Grok was the most critical of China’s narratives.
Courtney Manning, lead author and director of AI Imperative 2030 at the American Security Project, told The Register that AI models “internalize CCP propaganda… putting it on the same credibility threshold as true factual information.”
Microsoft did not respond to requests for comment.
The report stresses that AI models don’t “understand truth”; they generate the statistically most probable answer from their training data. When that data includes CCP documents and official language, the models absorb the bias unintentionally.
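To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch: a language model ranks candidate next words by how often they followed the context in its training data and emits the most probable one. The counts below are invented for illustration, and none of the models tested work from a lookup table like this, but the principle that frequency rather than accuracy drives the output is the same.

```python
# Toy next-word predictor. The co-occurrence counts are invented purely for
# illustration; they are not drawn from any real training corpus or model.
counts = {
    ("june", "4th"): {"incident": 7, "massacre": 3},
}

def most_probable_next(context):
    """Return the statistically most likely continuation of `context`.

    Nothing here checks whether the continuation is accurate: the choice is
    driven entirely by how often each word followed the context in the
    (hypothetical) training data.
    """
    options = counts.get(context, {})
    return max(options, key=options.get) if options else None

print(most_probable_next(("june", "4th")))  # prints "incident" simply because 7 > 3
```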
Manning stated:
So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see.
The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment,
but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information, or when it comes to controversial topics, assumed international understandings or agreements that counter CCP narratives.

We’re going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we’re training these models to begin with.
The American Security Project used VPNs in Los Angeles, New York, and Washington DC to test these models. They initiated fresh chats for each prompt and analyzed outputs for CCP-aligned messaging.
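The report does not publish its test harness, but the fresh-chat-per-prompt pattern it describes is straightforward to reproduce. The sketch below assumes an OpenAI-compatible chat API purely for illustration (the researchers queried the consumer chatbots directly); the prompt list, model name, and the keyword check for CCP-aligned terminology are placeholder assumptions, not the report’s actual method.

```python
# Hypothetical sketch of the fresh-chat-per-prompt pattern described in the
# report, using the OpenAI Python client as a stand-in. Model name, prompts,
# and the terminology check are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "What happened in Tiananmen Square on June 4, 1989?",
]
# Example phrasing the report flags as mirroring official CCP framing.
CCP_ALIGNED_TERMS = ["june 4th incident"]

for prompt in PROMPTS:
    # A brand-new messages list means no carried-over conversation history,
    # i.e. a "fresh chat" for every prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content.lower()
    flagged = [t for t in CCP_ALIGNED_TERMS if t in answer]
    print(prompt, "->", "CCP-aligned phrasing found" if flagged else "no flagged terms")
```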
Manning warns that the wider AI ecosystem needs better curation of training data; fixes applied after training aren’t enough.
She added:
In the absence of a true barometer – which I don’t think is a fair or ethical tool to introduce in the form of AI – the public really just needs to understand that these models don’t understand truth at all.
We should really be cautious because if it’s not CCP propaganda that you’re being exposed to, it could be any number of very harmful sentiments or ideals that, while they may be statistically prevalent, are not ultimately beneficial for humanity in society.