2025-05-22

Fake Voices, Real Threat: FBI Flags Rise in Deepfake Vishing Attacks

Level: Strategic | Source: IC3 | Global

The FBI issued a public service announcement on May 15, 2025, warning of an ongoing campaign using AI-generated voice deepfakes to impersonate senior U.S. government officials. The attacks, active since April, involve malicious actors targeting current and former federal and state officials and their contacts. According to the FBI, “Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts. If you receive a message claiming to be from a senior US official, do not assume it is authentic.” The attackers combine “smishing” (SMS phishing) and “vishing” (voice phishing) techniques to build trust and transition victims to other platforms, often through malicious links that enable unauthorized access to personal or official accounts.

Once access is obtained, threat actors are using compromised accounts and contact lists to further their campaigns, leveraging stolen trust to extract sensitive data or financial resources from new targets. “Access to personal or official accounts operated by US officials could be used to target other government officials, or their associates and contacts, by using trusted contact information they obtain. Contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds,” warns the FBI. Attackers have increasingly employed AI tools to generate realistic voice clones, making it harder for recipients to distinguish between genuine and fraudulent communications. Notably, voice calls from IP addresses linked to popular messaging services have been observed, possibly used to auto-generate interaction URLs or voice clips for social engineering purposes. The campaign bears resemblance to prior incidents, including a 2024 case where attackers used a deepfake to impersonate a corporate CEO.

To defend against these evolving threats, the FBI advises individuals to verify all communications—especially those claiming to be from known figures—through a second, trusted channel. Signs of fraudulent contact may include minor variations in contact details, mismatched speech tone or timing in voice calls, and subtle visual inconsistencies in multimedia content. Users are urged not to click links or download attachments without verifying the sender’s identity and to avoid sharing personal information or two-factor authentication codes over digital platforms. Enabling multi-factor authentication, using a secret phrase for identity verification, and involving security officials or law enforcement when in doubt are also among the FBI’s recommendations.
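The "minor variations in contact details" indicator the FBI highlights lends itself to a mechanical check. As an illustrative sketch only (the contact book, threshold, and function names below are hypothetical, not from the advisory), a script can flag a sender address that nearly matches a trusted contact but is not identical:

```python
from difflib import SequenceMatcher

# Hypothetical known-good contact book (illustrative data only).
KNOWN_CONTACTS = {
    "jane.doe@agency.gov",
    "+1-202-555-0143",
}

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_lookalike(sender: str, threshold: float = 0.85):
    """Return the known contact that `sender` nearly matches, or None.

    A high-similarity, non-identical match is the "minor variation"
    pattern the FBI describes, e.g. jane.d0e@agency.gov (zero for the
    letter o) impersonating jane.doe@agency.gov.
    """
    for known in KNOWN_CONTACTS:
        if sender.lower() == known.lower():
            return None  # exact match: this is a known contact
        if similarity(sender, known) >= threshold:
            return known  # suspiciously close variant: treat as hostile
    return None

print(flag_lookalike("jane.d0e@agency.gov"))  # flags the near-match
```

A check like this only screens text-level spoofing; it does nothing against a cloned voice on a live call, which is why the FBI's out-of-band verification and secret-phrase recommendations remain the primary defense.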
