2025-02-06

Google Unveils State-Sponsored Hackers’ AI Experiments on Gemini

Level: 
Strategic
  |  Source: 
Google
Global
Share:

Google Unveils State-Sponsored Hackers’ AI Experiments on Gemini

Google Threat Intelligence Group (GTIG) conducted an analysis of how government-backed threat actors are interacting with Google’s AI-powered assistant, Gemini, to assess whether AI is enabling new cyberattack capabilities. GTIG’s research provides real-world insights into how adversaries are currently using AI. Google notes that “threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities.” The study revealed that state-sponsored groups from Iran, China, North Korea, and Russia have attempted to integrate Gemini into various stages of the cyberattack lifecycle, including reconnaissance, vulnerability research, malware development, and defense evasion. However, the research found that AI has not yet become a transformative tool for cybercriminals, with most usage focused on routine tasks rather than advanced exploitation.

Despite concerns that AI could be leveraged to bypass security mechanisms, GTIG found that most actors relied on basic or publicly available jailbreak techniques rather than sophisticated AI-specific attacks. According to Google, “rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini's safety controls.” Attempts to use AI for phishing techniques, malware development, and reconnaissance often failed due to built-in safety features, preventing AI from providing malicious outputs. While the current findings indicate AI is not a game-changer for cyberattacks, researchers warn that as AI models evolve, threat actors may adapt their tactics to exploit vulnerabilities in more advanced ways.

GTIG’s analysis highlighted distinct patterns in AI usage across different government-backed cyber groups. Among the identified state-sponsored actors, Iranian APT groups were the most frequent users of Gemini, leveraging it for phishing campaigns, reconnaissance on defense organizations, and researching vulnerabilities. “Iranian APT actors were the heaviest users of Gemini, using it for a wide range of purposes,” GTIG reported. North Korean APT actors used Gemini for various phases of their operations, including researching infrastructure, developing payloads, and scripting evasion techniques. Notably, some North Korean actors also used AI to assist in writing job applications and proposals—aligning with known tactics where North Korean operatives attempt to infiltrate Western companies under false identities.

Chinese APT actors focused on reconnaissance, vulnerability research, and scripting to enhance their operations. Their queries included researching lateral movement techniques, privilege escalation, and data exfiltration strategies. Russian APT actors, by contrast, showed limited engagement with Gemini during the analysis period, with their activities centered on modifying existing malware and exploring encryption techniques. Google observed that “APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques.” While state-sponsored cyber actors are increasingly integrating AI tools into their workflows, Google’s research suggests that existing safeguards effectively prevent AI from enabling significant offensive cyber capabilities.

Although AI has not yet provided groundbreaking advantages to cybercriminals, GTIG emphasizes the need for continued monitoring and security improvements as generative AI evolves. Threat actors have demonstrated interest in AI-driven content creation, phishing automation, and reconnaissance, signaling a potential shift toward AI-assisted cyber operations in the future. Google continues to implement security measures to prevent misuse, stating that “Gemini's safety and security measures restricted content that would enhance adversary capabilities as observed in this dataset.” Google has also highlighted the importance of collaboration across industries and governments to establish robust security frameworks for AI. Their Secure AI Framework (SAIF) outlines proactive measures to mitigate AI-related threats, including adversarial testing, prompt filtering, and improved security guardrails.

Get trending threats published weekly by the Anvilogic team.

Sign Up Now