In a previously undisclosed initiative, American government officials have been evaluating Chinese artificial intelligence (AI) systems for signs of ideological conformity with the Chinese Communist Party (CCP), according to an internal memo obtained by Reuters. The effort, involving both the State and Commerce Departments, aims to analyze how closely Chinese-developed large language models (LLMs) adhere to state propaganda.
Testing for Political Alignment
The memo reveals that U.S. agencies have developed a methodology to test these AI tools by posing a standardized set of politically sensitive questions in both Chinese and English. Officials then assess how the AI systems respond—whether they answer directly, deflect, or echo Beijing’s official narratives.
Among the models tested are Alibaba’s Qwen 3 and DeepSeek’s R1, both of which reportedly show a high degree of alignment with CCP-approved viewpoints. Responses often include phrases extolling China’s “social stability” or avoiding topics like the Tiananmen Square crackdown, Uyghur repression, and territorial disputes in the South China Sea.
One particularly telling example cited in the memo noted how DeepSeek’s model reflexively praised Beijing’s commitment to “social harmony” when asked about controversial events, while avoiding direct engagement with criticism.
Growing Concerns Over Ideological Censorship in AI
The findings underscore growing U.S. concerns that China is engineering its AI tools to serve as extensions of state ideology. Each successive version of the tested Chinese models reportedly exhibits increased censorship, suggesting that developers are placing greater emphasis on ensuring political compliance.
A U.S. State Department official quoted in the memo suggested that results of the evaluations might be made public in the future, to highlight the potential global risks of ideologically biased AI technologies developed by geopolitical rivals.
Chinese Response and Broader Implications
While China’s government has made no secret of its desire to align AI with its “core socialist values,” the Chinese Embassy did not directly address the memo when contacted. Instead, it reiterated Beijing’s ongoing efforts to craft an AI governance framework that balances development and national security.
The memo’s revelations come at a time of escalating global scrutiny over how AI systems may be manipulated to shape public perception. The ideological slant of AI is not a uniquely Chinese issue: U.S. tech companies have also faced criticism over bias in their own models.
For example, Elon Musk’s xAI chatbot, Grok, recently came under fire for generating antisemitic content and extremist rhetoric. Musk claimed the company was “actively working” to remove offensive material. Shortly after, Linda Yaccarino, CEO of Musk’s social media platform X, announced her resignation without explanation.
Global Stakes in the AI Information War
As the influence of AI systems expands into education, media, and political discourse, experts warn that ideological manipulation through AI could become a powerful tool of statecraft. The U.S.’s quiet testing of Chinese models highlights how the AI race is not just technological — it’s also deeply ideological.
The implications are global: unchecked, AI tools may propagate government propaganda across borders, shaping opinions not through overt censorship but algorithmic suggestion. In this evolving battle for influence, the line between machine learning and message control is becoming increasingly blurred.