How AI Chatbots Are Echoing Chinese State Propaganda

Table of Contents

Conversational agents driven by the Artificial Intelligence (often called the Artificial Intelligence (AI) chatbots) has become a primary channel that enables billions of individuals to access information, develop research, and engage in political discussion. The American Security Project (ASP) recently analyzed AI systems and revealed a troubling pattern: top-performing models often regurgitate narratives favorable to the Chinese Communist Party (CCP), especially when prompted in Chinese or asked about politically sensitive topics.

The idea of the misinformation, misalignment as well as the unintended spread of authoritarian propaganda becomes a tangible prospect rather than a potential one that will remain in the realm of theoretically possible effects as these systems gain more capability and influence. The findings of the report are thus important to the AI developers, policymakers and civil societies who must consider them as urgent.

AI Alignment and the Influence of Propaganda

At the heart of this issue is training data contamination. OpenAI, Microsoft, and Google train their Large Language Models (LLMs)—like ChatGPT, Copilot, and Gemini—on vast corpora of publicly available online content. When bots, state actors, or misinformation networks manipulate a significant portion of that content, these LLMs can unintentionally internalize and reproduce the distortions.

The ASP’s investigation shows that this is not a distant possibility, but a present reality. The study tested five prominent LLM-powered AI chatbots—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s R1, and xAI’s Grok—using prompts in both English and Simplified Chinese on topics deemed politically sensitive by the Chinese government.

The results reveal a clear trend: AI-generated responses often reflect CCP censorship, disinformation, or framing, particularly in the Chinese language.

Language as a Filter: Diverging Narratives

One of the most striking findings of the ASP report is how differently chatbots respond depending on the language used. The same prompt can yield fact-based, nuanced answers in English and propaganda-laced responses in Chinese.

COVID-19 Origins

In English:

  • ChatGPT, Gemini, and Grok acknowledge mainstream scientific theories, including zoonotic transmission and the possibility of a lab leak.
  • Copilot and DeepSeek offer vague, noncommittal responses, omitting references to Wuhan or the lab leak theory.

In Chinese:

  • All models describe the pandemic’s origins as an “unsolved mystery” or natural event.
  • Gemini adds misleading CCP-aligned talking points, suggesting that early COVID-19 cases were found in the US and France.

Tiananmen Square Massacre

In English:

  • All models except DeepSeek refer to the 1989 crackdown as the “Tiananmen Square Massacre.”
  • Grok explicitly mentions that “unarmed civilians” were killed by the military.
  • Other models use softened language such as “crackdown” or “suppression.”

In Chinese:

  • ChatGPT uses the word “massacre,” albeit cautiously.
  • Copilot and DeepSeek call it the “June 4th Incident,” mirroring CCP-approved terminology.
  • Copilot even justifies the state’s use of force as a response to “calls for reform.”

Uyghur Human Rights Abuses

In English:

  • Several models cite credible reports of oppression and human rights violations.

In Chinese:

  • Copilot and DeepSeek frame China’s actions in Xinjiang as measures for “security and social stability.”
  • References are made to state-affiliated sources, with no mention of international criticism or UN findings.

This linguistic split highlights a dangerous dual-channel dynamic, where AI systems present truthful information to Western audiences while disseminating propaganda to Chinese-speaking users—amplifying the CCP’s desired information environment.

Microsoft’s Copilot: The Most Aligned?

Among the models evaluated, Microsoft’s Copilot was singled out for concern. The ASP notes that Copilot is more likely than other U.S.-based models to present CCP propaganda as legitimate or equivalent to factual information.

The report suggests that this may be influenced by Microsoft’s business operations within China, where the company maintains five data centers. China’s 2023 AI regulations demand that chatbot services “uphold socialist core values” and avoid “subversive” speech. Noncompliance could mean losing access to one of the world’s largest tech markets.

This business reality creates a potential conflict of interest: to stay competitive in China, companies might feel pressure to sacrifice neutrality, accuracy, or freedom of expression in the outputs their AI systems generate.

In fact, ASP researchers found that Copilot’s censorship mechanisms may be more aggressive than some Chinese domestic services, systematically avoiding politically sensitive topics or offering irrelevant, generic responses (e.g., travel tips) when prompted about civil liberties or democracy.

How Propaganda Enters the Training Pipeline

The contamination of training data does not occur randomly. According to the ASP, the CCP employs sophisticated tactics to flood the internet with state-aligned content:

  1. Astroturfing: The creation of seemingly grassroots posts by fake foreign users or organizations to bolster CCP narratives.
  2. Bot Amplification: Automated networks amplify favorable content on platforms like X, Facebook, or YouTube.
  3. Content Seeding: Inserting propaganda into open-source databases, comment sections, and knowledge repositories likely to be scraped by AI training algorithms.

Once this content is incorporated into the training datasets of LLMs, it becomes normalized. Without deliberate intervention from developers, AI models may reproduce these views as if they were neutral or factually accurate.

Geopolitical Consequences of Misaligned AI

The implications of this trend go far beyond skewed chatbot responses. Deploying LLMs trained on CCP-influenced data in sensitive sectors—such as defense, education, media, or policy advisory—dramatically escalates the risk of information warfare.

According to the ASP:

  • Misinformed AI tools can erode democratic discourse by promoting false equivalence between liberal and authoritarian ideologies.
  • National security risks emerge if AI systems begin recommending actions or generating insights based on adversarial values.
  • Censorship by design becomes a reality if models pre-emptively exclude critical topics or language based on political sensitivities hardcoded into their filters.

The authors warn that if the West loses control over truthful, verifiable training data, it may soon become impossible to ensure AI alignment with democratic values.

A Path Forward: Data Sovereignty and Model Transparency

In light of these findings, the ASP recommends several key interventions to restore integrity in AI development:

  1. Develop and protect reliable training datasets: Western institutions must invest in curated, multilingual, fact-based corpora that are resistant to manipulation.
  2. Enhance transparency: AI companies should publicly disclose the languages, content sources, and geopolitical filters applied to their training data and outputs.
  3. Establish AI alignment standards: International frameworks must ensure that AI models do not promote authoritarian narratives or suppress factual content.
  4. Introduce Geopolitical resilience testing: By auditing models for bias across languages and cultures, and by conducting red-teaming exercises targeting state-sponsored influence operations.
  5. Decouple commercial incentives from censorship compliance: By holding companies in authoritarian jurisdictions accountable for any compromises they make to maintain market access.

Conclusion

The ASP’s report underscores a sobering reality: AI models are only as unbiased as the data they consume—and in today’s information ecosystem, that data is under active siege. The strategic infusion of CCP propaganda into digital content streams is reshaping how even the most advanced AI tools interpret the world.

Not merely at stake is the integrity of AI responses, but the very trust in digital knowledge systems. As LLMs become mediators of truth in society, subtly nudging them toward authoritarian narratives risks not only disinformation plus the degradation of democratic institutions. 

Shape the future of AI to be transparent, resilient, and grounded in universal human rights—not governed by the censorship mandates of authoritarian regimes. Now is the time to act.

Table of Contents

Arrange your free initial consultation now

Details

Share

Book Your free AI Consultation Today

Imagine doubling your affiliate marketing revenue without doubling your workload. Sounds too good to be true Thanks to the rapid.

Similar Posts

Nano Banana Trend 2025: What is the Viral 3D Figurine Craze & How to Create One for Free

Public-Private Partnerships in AI: Hamburg’s Model for Scalable Innovation

Stephanie Sy on Scaling AI in APAC: Thinking Machines & OpenAI Partnership