A recently debuted AI chatbot dubbed GhostGPT has given aspiring and active cybercriminals a handy new tool for developing malware, carrying out business email compromise scams, and executing other illegal activities.
Like WormGPT and similar chatbots before it, GhostGPT is an uncensored AI model, meaning it is tuned to bypass the usual security measures and ethical constraints built into mainstream AI systems such as ChatGPT, Claude, Google Gemini, and Microsoft Copilot.
GenAI With No Guardrails: Uncensored Behavior
Bad actors can use GhostGPT to generate malicious code and to receive unfiltered responses to sensitive or harmful queries that traditional AI systems would typically block, Abnormal Security researchers said in a blog post this week.
“GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development,” according to Abnormal. “It can also be used to write convincing emails for business email compromise (BEC) scams, making it a convenient tool for committing cybercrime.” A test that the security vendor conducted of GhostGPT’s text generation capabilities showed the AI model producing a very convincing Docusign phishing email, for example.
The security vendor first spotted GhostGPT for sale on a Telegram channel in mid-November. Since then, the rogue chatbot appears to have gained significant traction among cybercriminals, a researcher at Abnormal tells Dark Reading. The authors offer three pricing tiers for the large language model: $50 for one week, $150 for one month, and $300 for three months, says the researcher, who asked not to be named.
For that price, users get an uncensored AI model that promises quick responses to queries and can be used without any jailbreak prompts. The chatbot's author(s) also claim that GhostGPT doesn't maintain any user logs or record any user activity, making it a desirable tool for those who want to conceal their illegal activity, Abnormal said.
Rogue Chatbots: An Emerging Cybercriminal Problem
Rogue AI chatbots like GhostGPT present a new and growing problem for security organizations because of how they lower the barrier to entry for cybercriminals. The tools let anyone, including those with minimal or no coding skills, quickly generate malicious code by entering a few prompts. Significantly, they also let individuals who already have some coding skills augment their capabilities and improve their malware and exploit code. And they largely eliminate the need to spend time and effort jailbreaking GenAI models to get them to engage in harmful and malicious behavior.
WormGPT, for instance, surfaced in July 2023, about eight months after ChatGPT exploded on the scene, as one of the first so-called "evil" AI models created explicitly for malicious use. Since then, a handful of others have followed, including WolfGPT, EscapeGPT, and FraudGPT, which their developers have tried to monetize in cybercrime marketplaces. But most have failed to gain much traction because, among other things, they did not live up to their promises or were simply jailbroken versions of ChatGPT with wrappers added to make them appear to be new, standalone AI tools. Abnormal assessed that GhostGPT, too, likely uses a wrapper to connect to a jailbroken version of ChatGPT or some other open source large language model.
“In many ways, GhostGPT is not massively different from other uncensored variants like WormGPT and EscapeGPT,” the Abnormal researcher tells Dark Reading. “However, the specifics depend on which variant you’re comparing it to.”
For example, EscapeGPT relies on jailbreak prompts to bypass restrictions, while WormGPT was a fully customized large language model (LLM) designed for malicious purposes. “With GhostGPT, it’s unclear whether it is a custom LLM or a jailbroken version of an existing model, as the author has not disclosed this information. This lack of transparency makes it difficult to definitively compare GhostGPT to other variants.”
The growing popularity of GhostGPT in underground circles also appears to have made its creator(s) more cautious. The author or seller of the chatbot has deactivated many of the accounts created to promote the tool and appears to have shifted to private sales, the researcher says. “Sales threads on various cybercrime forums have also been closed, further obscuring their identity, [so] as of now, we do not have definitive information about who is behind GhostGPT.”
Source: www.darkreading.com