2024-06-13

OpenAI Exposes Use of AI in State-Backed Influence Campaigns

Level: Strategic  |  Source: OpenAI  |  Global

OpenAI has identified a trend of threat actors, including groups affiliated with the governments of Russia, China, and Iran as well as a commercial firm in Israel, using AI-driven tools to influence public opinion and political outcomes around the globe. According to the report, these actors integrated AI tools into their operations to enhance the effectiveness and reach of their campaigns. Despite those efforts, none of the operations meaningfully engaged authentic audiences: every campaign scored no higher than a 2 on OpenAI's six-point Breakout Scale, which ranks how far an influence operation's content spreads. The actors blended AI-generated content with traditional formats, such as manually written texts and internet memes, to craft and disseminate their messages. "All of these operations used AI to some degree, but none used it exclusively," the report stated. This points to a coordinated but not fully automated strategy: the actors leveraged AI to maximize the volume and polish of their narratives without relying on it alone.
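For context, the Breakout Scale is a six-category framework (originated by researcher Ben Nimmo) that ranks an operation's reach, from content confined to a single platform with no authentic pickup at the bottom to operations that provoke a policy response or calls to violence at the top. The sketch below is an illustrative paraphrase of that ordering for readers unfamiliar with the metric; the category descriptions are approximations, not OpenAI's exact rubric or scoring method.

```python
# Illustrative sketch of the Breakout Scale's six ascending categories.
# Descriptions are paraphrased from Ben Nimmo's published framework and
# are NOT OpenAI's exact wording or scoring methodology.
BREAKOUT_SCALE = {
    1: "Spreads on one platform only; no pickup by authentic communities",
    2: "Spreads on multiple platforms, or breaks out to authentic communities on one",
    3: "Breaks out to authentic communities across multiple platforms",
    4: "Crosses over from social media into mainstream media coverage",
    5: "Amplified by high-profile figures such as celebrities or politicians",
    6: "Prompts a policy response or a call to violence",
}

def describe(score: int) -> str:
    """Return a readable label for a Breakout Scale score (1-6)."""
    if score not in BREAKOUT_SCALE:
        raise ValueError(f"Breakout Scale scores run from 1 to 6, got {score}")
    return f"Category {score}: {BREAKOUT_SCALE[score]}"

# Every campaign described in the report scored no higher than Category 2.
print(describe(2))
```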

The "Bad Grammar" operation from Russia, active primarily on Telegram, targeted Ukraine, Moldova, the Baltic States, and the US, generating politically themed content in both English and Russian to sow discord and influence public opinion. Despite these efforts, engagement was minimal, suggesting the content did not resonate widely or was perceived as inauthentic. Similarly, the "Doppelganger" operation disseminated anti-Ukraine propaganda across various internet platforms, using AI to produce multilingual content including memes and text posts. As with "Bad Grammar," engagement with "Doppelganger" content was low, underscoring the difficulty these actors face in gaining broader traction.

"Spamouflage," a persistent Chinese threat actor, focused on enhancing China’s image and criticizing its critics across several online platforms, using AI extensively for content creation. This included generating favorable articles and social media posts about China's policies and its critics. Despite sophisticated content generation, the campaign failed to penetrate target audiences, remaining largely within its echo chambers.

The "International Union of Virtual Media" (IUVM), linked to Iranian interests, generated web content supporting Iran's geopolitical stances and criticizing Western policies, especially those of Israel and the US. Distributed primarily on IUVM's own platforms, the content had limited reach beyond its existing follower base. Meanwhile, the "Zero Zeno" operation, run by the Israeli commercial company STOIC, generated content on multiple platforms focused on issues such as the Gaza conflict and political developments in various regions, including India. This operation likewise drew minimal genuine engagement, reflecting the broader pattern of limited impact despite the use of advanced AI tools in content generation.

These case studies highlight a critical insight: while AI can improve the quality and volume of content produced by influence operations, the lack of authentic engagement shows that increasing output does not by itself increase influence. OpenAI's ongoing efforts to disrupt these activities include improving detection capabilities, sharing threat intelligence with industry peers, and hardening its models against misuse, with the aim of protecting public discourse from deceptive practices. This collaborative defense strengthens the industry's collective ability to counter the malicious use of AI in global influence operations. OpenAI also emphasizes transparency and accountability, regularly publishing reports to inform users and the wider public about these threats and its countermeasures.
