OpenAI and Microsoft Thwart State-Backed Cyber Threats Exploiting AI

Source: Microsoft & OpenAI

OpenAI, in collaboration with Microsoft Threat Intelligence, has taken action against five state-affiliated threat actors who attempted to exploit its AI services for malicious cyber activities. These actors, Charcoal Typhoon and Salmon Typhoon from China, Crimson Sandstorm from Iran, Emerald Sleet from North Korea, and Forest Blizzard from Russia, had their associated OpenAI accounts terminated. Their activities ranged from researching companies and cybersecurity tools to generating content for phishing campaigns and studying malware evasion techniques. Microsoft emphasizes that the behavior of these threat actors aligns with previous red-team assessments, which found that current AI models offer only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with non-AI-powered tools.

Microsoft's detailed investigation revealed behaviors specific to each actor's broader cyber espionage and operational goals. Forest Blizzard researched satellite communication protocols and radar imaging technology, both with potential military applications, and sought scripting assistance to automate or optimize technical operations. Emerald Sleet identified defense experts and organizations, researched vulnerabilities, and drafted phishing content. Crimson Sandstorm appeared to use AI services for app and web development support, content generation for spear-phishing campaigns, and malware evasion research. Charcoal Typhoon and Salmon Typhoon were similarly oriented toward intelligence gathering and operational support: Charcoal Typhoon also generated scripts and researched cybersecurity tools, while Salmon Typhoon translated technical documents and researched techniques for hiding processes on a system.

The joint effort between OpenAI and Microsoft demonstrates a broader initiative to secure AI technologies against misuse by sophisticated threat actors. OpenAI details a multi-pronged approach to AI safety: monitoring and disrupting malicious activity, sharing information across the AI ecosystem, and advancing safety mitigations informed by real-world misuse. Microsoft, for its part, emphasizes a principled approach to AI security, involving identifying and disrupting malicious actor activity, collaborating and sharing information with other AI service providers, and transparently reporting threat actor activity and countermeasures.
