2025-05-22

Experts Warn of Imminent AI Agent Use in Real-World Cyberattacks

Level: Strategic | Source: Axios | Global

Cybersecurity experts are increasingly concerned about the emergence of AI-enabled cyberattacks, a scenario Mandiant founder Kevin Mandia laid out during a recent conversation with Axios. Mandia warned that the use of generative AI in cyber operations could reach a tipping point within the next year, producing incidents in which the role of AI in the attack chain goes entirely undetected. While autonomous cyber weapons have long been theorized, the rapid pace of generative AI development has made their real-world use far more plausible. Mandia expects the first observed use of such AI-driven tactics to come from criminal actors rather than nation-state adversaries. “There is enough R&D happening right now on how to use AI [at legitimate organizations] that the criminal element is doing that R&D as well,” he said.

The attack vectors in question are not expected to involve the popular AI models hosted by major firms, which Mandia described as “pretty darn good” at blocking misuse. Instead, the threat is likely to originate from less regulated or fringe models. “It's going to come from some model that's somewhere out there that's less controlled,” Mandia told Axios, pointing to the broad availability of open-source or poorly secured models as the most likely attack platforms. Chester Wisniewski, a global CISO at Sophos also cited by Axios, added that while many cybercriminals may already have the technical capability, they currently have little incentive to use it because simpler methods still deliver reliable financial gain. Should those motivations or the broader threat landscape shift, however, the integration of AI into attack chains could evolve quickly and unpredictably.

Mandia’s outlook is informed by years of experience at the forefront of high-profile incident response cases, and his observations align with broader industry concerns around AI’s role in both defense and offense. He referenced historical examples, such as early 2000s credit card fraud schemes, to illustrate how automation has long been attractive to threat actors. As the use of AI by defenders becomes more common, the race is now on to determine whether security teams can stay ahead of adversaries leveraging similar tools. The timing, infrastructure, and scale of these AI-enabled threats remain uncertain, but security leaders are increasingly bracing for what may soon be a new operational norm.
