2025-06-05

Coordinated Influence Campaigns Exposed in Meta’s Q1 2025 Adversarial Report

Level: Strategic | Source: Meta
Meta has disclosed the disruption of three separate covert influence operations in its Q1 2025 Adversarial Threat Report, detailing activity traced back to actors in China, Iran, and Romania. All three networks were removed before gaining meaningful traction on Meta's platforms, though each had a clearly targeted audience, dedicated fake account infrastructure, and a political objective. The report details Meta's ongoing enforcement against coordinated inauthentic behavior (CIB), defined as the use of fake accounts to manipulate public discourse for strategic purposes.

The China-based operation resulted in the takedown of 157 Facebook accounts, 19 Pages, one Group, and 17 Instagram accounts. According to Meta, the network targeted audiences in Myanmar, Taiwan, and Japan, using clusters of fake accounts posing as locals and posting content in English, Burmese, Mandarin, and Japanese. The content criticized opposition movements in Myanmar, alleged corruption among Taiwanese political leaders, and condemned Japan's ties with the United States. Meta noted that "About 7,800 accounts followed one or more of these Pages, around 25 users joined this Group, and about 700 users followed one or more of these Instagram accounts." Many profiles used AI-generated profile images, and Meta found operational links to influence campaigns it had disrupted in 2022 and 2024.

Meta also dismantled a network originating in Iran that focused on Azeri-speaking users in Azerbaijan and Turkey. The operation comprised 17 Facebook accounts, 22 Pages, and 21 Instagram accounts. Fake profiles impersonated female journalists and pro-Palestinian activists, inserting content into trending discussions by leveraging hashtags such as #palestine, #gaza, #starbucks, and #instagram. As Meta explained, "Many of these accounts posed as female journalists and pro-Palestine activists." Meta linked the campaign to the STORM-2035 activity previously exposed by OpenAI and Microsoft. The operation gained moderate reach, with about 44,000 accounts following one or more of its Pages and about 63,000 following its Instagram accounts, and it had a small ad spend of approximately $70, paid in U.S. and Canadian dollars.

The third campaign disrupted by Meta originated in Romania and targeted domestic users. The network consisted of 658 Facebook accounts, 14 Pages, and two Instagram accounts, along with a broader presence on YouTube, TikTok, and X. The fake profiles, which posed as Romanian citizens posting about travel, sports, and local events, sought to engage with political discourse and drive users to off-platform sites. Despite extensive operational security, including proxy IPs and consistent personas across platforms, the campaign had limited impact. Meta confirmed that "The majority of these comments received no engagement from authentic audience." Even so, the operation invested heavily in ad buys, spending around $177,000, primarily in U.S. dollars.

These enforcement actions underscore the persistent threat posed by foreign and domestic influence campaigns. While Meta removed these networks before they could build authentic audiences, the use of generative AI, fake personas, and cross-platform coordination reflects evolving tactics among threat actors. Meta has published threat indicators from each operation to support broader industry research and enable faster detection.
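Published indicators like these are most useful when folded into an existing detection pipeline. As a minimal sketch of that workflow, the Python below matches a plain-text domain indicator list against DNS query logs. The file names (`indicators.txt`, `dns_log.jsonl`), the log format, and the `query` field are illustrative assumptions for this example, not details from Meta's report.

```python
# Minimal IOC-matching sketch -- illustrative only. Assumed inputs
# (not from Meta's report): "indicators.txt" holds one domain per line
# ('#' starts a comment); "dns_log.jsonl" holds one JSON object per
# line with a "query" field naming the domain that was looked up.
import json


def load_indicators(path: str) -> set[str]:
    """Load a plain-text indicator list, skipping blanks and comments."""
    with open(path) as f:
        return {
            line.strip().lower()
            for line in f
            if line.strip() and not line.lstrip().startswith("#")
        }


def match_dns_log(log_path: str, indicators: set[str]) -> list[dict]:
    """Return log entries whose queried domain appears in the indicator set."""
    hits = []
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            # Normalize the same way the indicator list was normalized.
            domain = entry.get("query", "").lower().rstrip(".")
            if domain in indicators:
                hits.append(entry)
    return hits


if __name__ == "__main__":
    iocs = load_indicators("indicators.txt")
    for hit in match_dns_log("dns_log.jsonl", iocs):
        print(f"IOC match: {hit['query']}")
```

In practice, a detection team would route matches into a SIEM or alerting pipeline rather than printing them, but the core pattern of normalizing and matching indicators is the same.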
