Chinese Military Utilizes AI to Enhance Disinformation Campaigns: Think Tank

A new report from RAND reveals that the Chinese military has embraced the use of artificial intelligence (AI) to further foreign influence campaigns. The study, published on October 1, emphasizes that the United States and other countries should take measures to prepare for this AI-driven social media manipulation. These measures include adopting risk-reduction strategies, promoting media literacy and government trustworthiness, increasing public reporting, and enhancing diplomatic coordination.

The report focuses on the planning and strategies behind the Chinese Communist Party’s (CCP) social media influence campaigns. Researchers investigated Li Bicheng, a leading expert on mass social media manipulation affiliated with the Chinese military, drawing on evidence from more than 220 Chinese-language articles and more than 20 English-language articles written by Li.

Key findings of the study indicate that the CCP began developing social media manipulation capabilities in the 2010s and is interested in leveraging AI for these campaigns. The study also highlights that Chinese military researchers are conducting cutting-edge work in this field and that the CCP is well positioned to run large-scale manipulation campaigns.

Although the CCP publicly opposes the use of AI for disinformation, its activities contradict its official statements. After initially cracking down on social media during uprisings such as the Arab Spring, the party later took an interest in Western uses of “online psychological warfare.” In 2013, it released planning documents aimed at strengthening its international communications capabilities and building foreign discourse power.

By 2017-2018, state-sponsored efforts were actively targeting foreign groups through disinformation campaigns such as “Spamouflage.” These social media disinformation campaigns escalated from 2019 onward, around events such as the Hong Kong protests, the COVID-19 pandemic, and the 2022 U.S. midterm elections.

Li Bicheng’s research focused on using AI to automate the various steps involved in creating intelligent posts tailored to target audiences. CCP-sponsored researchers are also developing a simulated environment called a “supernetwork” to effectively test these AI capabilities.

The report concludes that if the CCP has already been working on the technologies discussed in its studies from 2023 onward, it would be ready to target upcoming presidential elections with AI-generated content. Evidence of CCP-linked, AI-generated disinformation campaigns has already surfaced around topics such as a “U.S.-China tech war” on platforms like YouTube.

RAND researchers recommend investing in ways to detect and label AI-generated content as a means of reducing the risk posed by social media bots that use generative AI going forward.
