Artificial intelligence-powered cyberattacks are rising sharply in the Asia-Pacific region, particularly those involving deepfakes.
The United Nations Office on Drugs and Crime (UNODC) tracked a panoply of AI threats in its new report covering cybercrime in Southeast Asia. Cybercrime gangs have been using generative AI (GenAI) to create phishing messages in multiple languages, chatbots that manipulate victims, social media disinformation en masse, and fake documents for bypassing know-your-customer (KYC) checks. They’ve been using it to power polymorphic malware capable of evading security software, and to identify ideal targets, among other nefarious activities.
The standout threat, though, is deepfakes. From February to June 2024, UNODC tracked a 600% increase in mentions of deepfakes in cybercriminal Telegram channels and underground forums. And that’s above and beyond the heavy activity from 2023, when deepfake crimes rose more than 1,500% compared with the year prior, and face swap injections rose 704% in the second half of the year compared with the first.
Deepfake Attacks Proliferate
Cybersecurity leaders in the Asia-Pacific are, like those around the world, anticipating a wave of AI-driven cyber troubles. In an Asia-focused Cloudflare survey published on Oct. 9, 50% of respondents said they expect AI to be used to crack passwords and encryption, 47% expect it to supercharge phishing and social engineering, 44% think it will amplify distributed denial-of-service (DDoS) attacks, and 40% see it being used to create deepfakes and enable privacy breaches.
Most, if not all, of those concerns, though, are no longer theoretical, as some organizations can attest.
In January, for example, an employee at the Hong Kong office of Arup, a British engineering firm, received an email purporting to come from the company’s chief financial officer (CFO) in London, instructing the employee to carry out a confidential financial transaction. The employee later joined a videoconference with what appeared to be the CFO and other members of senior management, all of whom were, in fact, deepfakes. The result: In May, Arup reported losing 200 million Hong Kong dollars ($25.6 million).
Deepfakes of major political figures have spread widely, like the fake video and audio recordings of Singapore’s prime minister and deputy prime minister in December 2023, and the fake video this past July showing a Southeast Asian head of state with illicit drugs. In Thailand, a female police officer was deepfaked in a campaign tricking victims into thinking they were speaking with actual law enforcement.
According to UNODC, nearly half of all deepfake crimes reported in Asia in 2023 came from Vietnam (25.3%) and Japan (23.4%), but the most rapid rise came from the Philippines, which saw 4,500% more cases in 2023 than in 2022.
It’s all underpinned by a large ecosystem of malicious developers and buyers, on Telegram and in even shadier corners of the Deep Web. UNODC identified more than 10 deepfake software vendors that specifically serve cybercriminal groups in Southeast Asia. Their offerings sport the latest and greatest in deepfake tech, like Google’s MediaPipe Face Landmarker — which captures detailed facial expressions in real time — the You Only Look Once v5 (YOLOv5) object detection model, and much more.
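How accessible that tooling is becomes clear when you look at the components UNODC names. MediaPipe's Face Landmarker, for instance, is shipped by Google as an off-the-shelf Python task. The sketch below is a minimal, illustrative example of pulling per-frame facial landmarks and expression scores, the raw signals a face-swap pipeline consumes; it is not a reconstruction of any vendor's kit, and the model path and image file names are placeholders (the pretrained face_landmarker.task model must be downloaded separately from Google's MediaPipe documentation).

```python
# Minimal sketch: extracting facial landmarks with MediaPipe Face Landmarker.
# Assumes "face_landmarker.task" has been downloaded from Google's MediaPipe
# model page; the model path and image file name here are placeholders.
import mediapipe as mp
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # expression coefficients (smile, blink, etc.)
    num_faces=1,
)
landmarker = vision.FaceLandmarker.create_from_options(options)

# Analyze a single frame; a real-time pipeline would feed webcam frames in a loop.
frame = mp.Image.create_from_file("frame.jpg")
result = landmarker.detect(frame)

if result.face_landmarks:
    landmarks = result.face_landmarks[0]      # 478 normalized (x, y, z) points
    blendshapes = result.face_blendshapes[0]  # named expression scores
    print(f"{len(landmarks)} landmarks tracked")
    for shape in blendshapes[:5]:
        print(shape.category_name, round(shape.score, 3))
```

The snippet itself does nothing malicious; it only reads geometry off a single frame. The point is that the building blocks named in the UNODC report are a pip install away, which helps explain how quickly the vendors it tracked can assemble polished deepfake kits.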
Why Asia Suffers
Though AI-driven cybercrime threatens organizations in every part of the world, it enjoys some particular advantages in Asia.
“Southeast Asia is very densely populated, and a large portion of the population doesn’t know English, or English is not their first language,” notes Shashank Shekhar, managing editor at India-based CloudSEK. The telltale signs that would flag a scam to a native English speaker may not translate for a non-native one. Besides that, he notes, “A lot of people are unemployed, looking for jobs, looking for opportunity.”
Desperation has the effect of lowering victims’ defenses. “There are some kinds of scams which only work well in this part of the world,” says CloudSEK threat researcher Anirudh Batra. “Simpler scams are particularly prevalent because of the poverty that this region of the world has seen.”
In the face of intractable socioeconomic forces, those old, tired lines about cyber education and hygiene may not feel like enough. Instead, cybercriminals will need to be stymied at the source: in those underground forums and channels where they trade their deepfake tools and cryptocurrency winnings. It’s been done before.
“It’s possible by collaborating: different countries coming together, sharing intelligence,” Batra says. Though he warns, “Unless these guys are caught, another forum will come up tomorrow. It becomes really difficult to stop them, because the threat actors know that all three-letter agencies are looking at the forums — everybody’s crawling everything. So they keep a lot of backups. At any point of time, if [their assets are] seized, they’ll start again with the mirror.”
Source: www.darkreading.com