There has been a lot of talk about generative AI and chatbots like ChatGPT being used to launch cyberattacks. What does that mean for the future of cybersecurity, and how can organizations prepare?

Threat actors are already using ChatGPT to launch cyberattacks. ChatGPT lets them increase the speed and variety of their attacks by modifying malware code or creating thousands of variations of a social engineering lure to improve its odds of success. As machine learning technologies advance, so will the ways this technology is put to malicious use.

Generative AI is heating up everywhere and fundamentally changing how cybercriminals develop and deploy attacks, increasing both their speed and their variety. The malicious uses of AI are still in their infancy. AI-powered technologies built on generative and diffusion models can alter the appearance of video and the sound of voices, and they will most likely be used in cyberattacks, with unfortunate consequences for organizations.

A race to develop new AI-powered technologies is under way in language (written and spoken), video, and more, as evidenced by OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s AI-powered Bing. All are built on large language models that exponentially accelerate access to knowledge and rapidly generate new content from that contextualized knowledge. These tools depend on big data sets, and their output is only as good as the quality of those data sets.

While ChatGPT has the initial lead as the fastest-growing tool in history, don’t bet against Google’s ability to surpass it. It will certainly be a race unlike anything we’ve seen before. As these new technologies become available, organizations and cybercriminals alike will adopt them, using them both to perpetrate cybercrime and to stop it. We have long known that AI would be abused, which is why SlashNext engineers have been developing natural-language generative AI technology for a few years in anticipation of these changes in the threat landscape.

How Real the Generative AI Threat Is and How Organizations Can Prepare

Generative AI, chatbots, and diffusion models are all developments in AI that present a real danger to users and businesses. Right now, ChatGPT in particular is a real enhancement to malware, business email compromise (BEC), and ransomware development. Cyberattacks are most dangerous when they can be delivered to targets with speed and at scale.

With malware, ChatGPT enables cybercriminals to generate endless code variations to stay one step ahead of malware detection engines. BEC attacks are targeted attempts to social engineer a victim into giving up valuable financial information or data. These attacks require personalized messages to succeed, and ChatGPT can now create well-written, personalized emails en masse with infinite variations. The speed and frequency of these attacks will increase and yield a higher rate of user compromises and breaches. Legacy security technology doesn’t stand a chance against these types of attacks.
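To see why trivial variations defeat legacy, signature-based defenses, consider this minimal sketch (the lure text and thresholds are purely illustrative, not drawn from any real attack): two rewordings of the same BEC lure produce completely different exact-match signatures, even though a simple similarity measure still recognizes them as near-duplicates.

```python
import difflib
import hashlib

# Two hypothetical rewordings of the same BEC lure (illustrative text only).
variant_a = "Hi Dana, please process an urgent wire transfer before 3pm today."
variant_b = "Hello Dana, please process an urgent wire transfer before 3pm today. Thanks."

# Exact-match signatures (e.g., hash blocklists) treat the variants as unrelated.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()
print("signatures match:", sig_a == sig_b)  # False: one reworded character breaks the match

# A similarity measure still sees the two lures as near-duplicates.
ratio = difflib.SequenceMatcher(None, variant_a, variant_b).ratio()
print(f"similarity: {ratio:.2f}")  # high, despite the differing signatures
```

A generative model can emit thousands of such variants per minute, each with a unique signature, which is why defenses that reason about meaning and intent, rather than exact content, are needed.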

We must fight AI cyber threats with AI cybersecurity technology. When cybercriminals launch successful attacks, the results are massively disruptive to people, organizations, and the economy. The No. 1 cyber challenge organizations face globally is human-focused attacks. Cybercriminals are increasing their attacks on LinkedIn, Microsoft Teams, Messenger, and Slack, taking advantage of the most vulnerable part of an organization: its people.

Many organizations are already using AI-based cybersecurity products for detection and response, and generative AI will become essential technology for stopping hackers and breaches. As each new technology becomes available, hackers and cybersecurity vendors alike will adopt it, one side to perpetrate cybercrime and the other to stop it.

The technology behind ChatGPT is not new, but OpenAI was the first to put an accessible interface on it. ChatGPT itself is not the technology that will fend off threats designed with ChatGPT. Still, the generative AI technology that makes ChatGPT possible will be used to develop cyber defenses capable of stopping malware and BEC threats built with it.

About the Author

Patrick Harr

Patrick Harr is the CEO of SlashNext, an integrated cloud messaging security company using patented HumanAI™ to stop BEC, smishing, account takeovers, scams, malware, and exploits in email, mobile, and Web messaging before they become a breach.

Source: www.darkreading.com