Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia are all using large language models (LLMs) to enhance their operations.

New blog posts from OpenAI and Microsoft reveal that five major threat actors have been using OpenAI software for research, fraud, and other malicious purposes. After identifying them, OpenAI shuttered all their accounts.

Though the prospect of AI-enhanced nation-state cyber operations might at first seem daunting, there is good news: none of these LLM abuses observed so far have been particularly devastating.

“Current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool,” Microsoft noted in its report. “Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI.”

The Nation-State APTs Using OpenAI

The nation-state APTs using OpenAI today are among the world’s most notorious.

Consider the group Microsoft tracks as Forest Blizzard, better known as Fancy Bear. The military unit, affiliated with the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU) and infamous for hacking the Democratic National Committee and terrorizing Ukraine, has been using LLMs for basic scripting tasks (file manipulation, data selection, multiprocessing, and so on) and for intelligence gathering, such as researching satellite communication protocols and radar imaging technologies, likely as they pertain to the ongoing war in Ukraine.

Two Chinese state actors have been ChatGPT-ing lately: Charcoal Typhoon (aka Aquatic Panda, ControlX, RedHotel, BRONZE UNIVERSITY), and Salmon Typhoon (aka APT4, Maverick Panda).

The former has been making good use of AI both pre-compromise (gathering information about specific technologies, platforms, and vulnerabilities; generating and refining scripts; and producing social engineering text in translated languages) and post-compromise (performing advanced commands, achieving deeper system access, and gaining control of systems).

Salmon Typhoon has primarily used LLMs as an intelligence tool, sourcing publicly available information about high-profile individuals, intelligence agencies, internal and international politics, and more. It has also attempted, largely unsuccessfully, to abuse OpenAI for help developing malicious code and researching stealth tactics.

Iran’s Crimson Sandstorm (Tortoiseshell, Imperial Kitten, Yellow Liderc) is using OpenAI to develop phishing material, such as emails pretending to be from an international development agency or a feminist group, as well as code snippets to aid its operations, including web scraping and executing tasks when users sign in to an app.

Finally, there’s North Korea’s Emerald Sleet (Kimsuky, Velvet Chollima), which, like the other APTs, turns to OpenAI for basic scripting tasks, phishing content generation, and research into publicly available information on vulnerabilities, as well as on the experts, think tanks, and government organizations concerned with defense issues and North Korea’s nuclear weapons program.

AI Isn’t Game Changing (Yet)

If these malicious uses of AI seem useful but not science fiction-level cool, there’s a reason why.

“Threat actors that are effective enough to be tracked by Microsoft are likely already proficient at writing software,” explains Joseph Thacker, principal AI engineer and security researcher at AppOmni. “Generative AI is amazing, but it’s mostly helping humans be more efficient rather than making breakthroughs. I believe those threat actors are using LLMs to write code (like malware) faster, but it’s not noticeably impactful because they already had malware. They still have malware. It’s possible they’re able to be more efficient, but at the end of the day, they aren’t doing anything new yet.”

Though cautious not to overstate its impact, Thacker warns that AI still offers advantages for attackers. “Bad actors will likely be able to deploy malware at a larger scale or on systems they previously didn’t have support for. LLMs are pretty good at translating code from one language or architecture to another. So I can see them converting their malicious code into new languages they previously weren’t proficient in,” he says.

Further, “if a threat actor found a novel use case, it could still be in stealth and not detected by these companies yet, so it’s not impossible. I have seen fully autonomous AI agents that can ‘hack’ and find real vulnerabilities, so if any bad actors have developed something similar, that would be dangerous.”

For those reasons, his advice is simple: “Companies can remain vigilant. Keep doing the basics right.”

Source: www.darkreading.com