OpenAI Disrupts Influence Operations Linked to China, Russia, and Others

OpenAI, an artificial intelligence research laboratory employing some of the world’s top AI researchers and engineers, has disrupted five influence operations originating from four countries. The operations were using AI tools to manipulate public opinion and shape political outcomes across the internet. While the operations did not achieve their goals or meaningfully increase their audience engagement, OpenAI notes that the networks’ level of sophistication and the techniques they used highlight the need to remain vigilant against such attacks.

One of the five influence operations that OpenAI disrupted is called Spamouflage, and it primarily centers on Chinese communist propaganda. Much of the content generated by the network is devoted to praising the Chinese Communist Party and criticizing the United States government on multiple social platforms, including X, Medium, and Blogspot. OpenAI found that in 2023, the Chinese operation generated articles that falsely accused Japan of polluting the environment by releasing wastewater from the Fukushima nuclear power plant. The network did not stop at disinformation: it also sought to silence critics such as actor Richard Gere and Chinese dissident Cai Xia.

Furthermore, the Spamouflage network used OpenAI models to debug code and generate content for a Chinese-language website that attacks Chinese dissidents, calling them “traitors.” Through further investigation, OpenAI determined that the network responsible for disseminating the disinformation is based in China.

OpenAI reported that it also disrupted a previously unreported Russian network called Bad Grammar. The group operates mainly through the messaging app Telegram and focuses on Ukraine, Moldova, the United States, and the Baltic States. The group used OpenAI tools to debug code for a Telegram bot that automatically posted content to the platform. The content it generated consisted of short political comments in Russian and English about the Russia-Ukraine war as well as US politics.

The other two foreign interference campaigns detected by OpenAI originated from Iran and Israel. The Israeli campaign was particularly sophisticated technologically, using ChatGPT to generate articles for disinformation programs and publishing the content on multiple social platforms, including X, Facebook, and Instagram. OpenAI tracked the Iranian operation to a website associated with a known Iranian threat actor.

OpenAI launched its chatbot ChatGPT to the public in November 2022, and it swiftly became a global phenomenon, attracting hundreds of millions of users impressed by its ability to answer questions and engage across a wide array of topics. It did not take long for groups seeking to leverage the technology to spread propaganda to emerge. OpenAI’s response sends a powerful message and marks a step forward in the fight against electoral and political interference, especially interference carried out through artificially intelligent agents.
