China is intensifying its use of advanced technology, such as artificial intelligence (AI), in its military and digital disinformation strategies to consolidate its global political influence. The potential impact on global security has international experts sounding alarm bells.
“China is innovating in accessing information from other countries, with the intention of consolidating itself as a power capable of exercising dominance and obtaining strategic benefits, whether military, economic, or political,” Víctor Ruiz, founder of the SILIKN cybersecurity center in Mexico, told Diálogo. “This ambition is reflected in its militarized approach to technology.”
Scientists from institutions linked to the People’s Liberation Army (PLA) used an early version of Meta’s Llama AI model to create ChatBIT, a military intelligence-focused AI tool, Reuters reported. Designed to process and analyze critical information, ChatBIT was found to outperform some other AI models and to be nearly as capable as ChatGPT-4.
“It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open source LLMs [large language models], especially those of Meta, for military purposes,” Sunny Cheung, an associate fellow at the Jamestown Foundation, who specializes in China’s emerging and dual-use technologies, including AI, told Reuters.
While Meta openly releases AI models such as Llama, it imposes restrictions to prevent their use for military, warfare, and espionage purposes. The models’ public nature, however, makes such use difficult to control. Meta reiterated to Reuters that any use by the PLA would be unauthorized and would violate its acceptable use policies, the news agency reported.
The Aviation Industry Corporation of China, linked to the PLA, has used Llama 2 to train airborne electronic warfare interference strategies, Reuters indicated, adding that the AI model was also used for domestic security, specifically in police surveillance, to process large volumes of data and improve decision-making.
From heist to innovation
Initially, China duplicated Western hardware and software, adapting them for domestic use without needing its own development capability. It subsequently marketed them in other countries, making them accessible and, in many cases, embedding spying and tracking equipment, a practice that remains in force, Ruiz said.
“With the advancement of AI, China has expanded its strategy, using technologies such as Llama and other AI models for disinformation purposes. This technological evolution poses significant risks, as Chinese campaigns do not operate in isolation, but are interconnected,” Ruiz added. “AI accelerates data analysis, which facilitates the identification of key targets, such as critical infrastructure and military capabilities.”
“In conflict scenarios it could adopt tactics similar to Russia’s in Ukraine, attacking telecommunications to block, spy, and spread false information. This would confuse the enemy about its real capabilities,” Ruiz continued. “AI amplifies these operations, simulating vulnerabilities or strategic strengths, and deploying disinformation campaigns that alter the military landscape.”
Glassbridge
In 2024, Google blocked hundreds of sites and domains associated with Glassbridge, a group made up of four Chinese companies, singled out for operating fake news networks as part of an influence campaign. More than 1,000 sites associated with this operation have been blocked since 2022 for disseminating strategic narratives disguised as legitimate content, U.S. news site The Record News reported.
These sites posed as independent media, adopting a local approach, and modifying their messages to influence specific regional audiences. The objective was to present disinformation disguised as credible news, a strategy that researchers also identify in Russian and Iranian disinformation campaigns.
“These sites represent another way in which China can execute strategic attacks. Far from diminishing, it is foreseeable that their number will increase, as the needs of the Chinese government intensify,” Ruiz said. “To do this, they use small companies that offer hosting and create websites posing as independent media.”
The sites created hundreds of fake domains that replicated pro-China content, such as articles from the state-owned Global Times and topics related to territorial claims in the South China Sea, Taiwan, Falun Gong, and other sensitive subjects.
The sites targeted strategic regions such as Eastern Europe, the Middle East, Africa, Asia, the United States, and Chinese diaspora communities, presenting themselves as local sources of information. In addition, the companies modified and shared content across multiple networks of sites, reinforcing narratives aligned with the interests of the People’s Republic of China.
“In addition, many traditional media outlets are reproducing news from these sources, which creates a perception of legitimacy among the audience,” Ruiz said. “The lack of tools to verify their authenticity, coupled with the use of misleading AI-generated photos, data, and statistics, makes it difficult to distinguish the false from the true.”
Paperwall, a new threat
“Beijing is increasing its aggressive activities in the spheres of influence operations (IOs), both online and offline. In the online realm […], Chinese IOs are shifting their tactics and increasing their volume of activity,” Citizen Lab, a cybersecurity laboratory at the University of Toronto, indicated in a report on a China-coordinated influence campaign it dubbed Paperwall.
The Paperwall websites pose as local news outlets in countries across Europe, Asia, and Latin America. In February 2024, Google removed more than 100 Paperwall sites from Google News. The sites combined copied articles, Chinese state content, press releases, and conspiracy theories.
Researchers have warned about a new phenomenon called Pink Slime, in which the internet is flooded with fake news sites publishing largely AI-generated content or propaganda from countries such as Russia, China, and Iran. “The issue has been exacerbated by the emergence of tools like ChatGPT and the decimation of legitimate local journalism,” The Record News reported.
“This poses a challenge for local media, which need to innovate their journalistic practices and intensify their investigations to counter massive, fast-spreading misinformation that appeals to users. In addition, it is crucial for the public to exercise common sense and adopt an analytical, critical approach to the information they consume,” Ruiz concluded.