For the first time ever, the Federal Communications Commission (FCC) Enforcement Bureau has identified a specific threat group as being behind a spate of pernicious robocall campaigns. The group, dubbed “Royal Tiger,” has associates in India, the United Kingdom, the United Arab Emirates (UAE), and the United States.

The naming of the group represents the first time the FCC has officially created a classification for individuals who facilitate robocall campaigns in order to scam and harm consumers. These entities and individuals are now classified as Consumer Communications Information Services Threats (C-CISTs), in an effort to better label and track them. The idea is to help garner international support amongst global partners in fighting these kinds of threats “before they reach US networks.”

Robocall Royalty

Several companies in the United States partake in the Royal Tiger robocall campaigns, all led by an individual using the name Prince Jashvantlal Anand alongside an associate named Kaushal Bhavsar. Among these companies are PZ Telecommunication LLC, Illum Telecommunication Limited, and One Eye LLC.

Anand has not limited himself to just the United States, however. He is also associated with companies in the UK and India, and reportedly has residences in the UAE and India, using the alias “Frank Murphy” to further Royal Tiger’s schemes. 

Much of the group’s activity involves impersonating banks, utilities firms, and government agencies, as well as making false claims of credit card interest-rate reductions and of purchase authorizations that the threat actors say were placed by their intended victims.

An example of a transcript of a call that Royal Tiger feeds its victims is as follows: “This is an important call regarding illegal enforcement action being executed under your name for suspicious and fraudulent activities carried out using your identity.

“The court has issued a suspension notice along with a warrant against your name. Ignoring this message will be an intentional second attempt to avoid initial appearance before a magistrate, judge, or grand jury for a federal criminal offense.

“This is the final attempt to reach you. For more information, press 1 and hold for the next available officer.”

In 2021, after PZ Telecom and Illum were asked to discontinue the illegal robocall campaigns they were spearheading, Anand created One Eye, which “began serving as a gateway provider for apparently illegal robocalls,” according to the FCC.

A Threat to Individuals and Nations Alike

Imagine that your cellphone is ringing, you answer, and you hear a recorded message — you may have had this exact experience yesterday or last week. Robocalls can be used by one’s dentist or hairdresser to remind them of an upcoming appointment or notify them of a cancellation. But, like many good things, these helpful tools can be twisted into something that is annoying at best and genuinely harmful at worst.

Robocalls made by entities with malicious intentions are often aimed at tricking consumers with scams; the goal is to get a consumer to answer the call and respond to questions, marking them as a potential victim of fraud. At other times, an individual may fall victim to spoofing calls, where the name and number displayed on the caller ID change to imitate a real person or company — often convincingly enough to encourage someone to pick up the call and give up information in a voice phishing (vishing) scheme.

But these kinds of robocalls can go so far as to pose national threats, such as when a robocall imitating President Joe Biden began circulating, only for the media to discover that it was an AI-generated audio deepfake. The call urged New Hampshire-based citizens to abstain from voting in the primary earlier this year, serving as a wake-up call to many that as large language models (LLMs) continue to evolve, so do the resources and methods of malicious actors. 

Robocalls Evolve With AI

In the aftermath of the presidential robocall, Pindrop Pulse, a service that analyzes audio, helped identify the kind of text-to-speech (TTS) engine used in the attack by examining the 39-second audio clip through the service’s four-stage process: audio filtering and cleansing, feature extraction, breaking the audio into segments, and scoring each of those segments. The idea was to gain insight into how common the tools needed to carry out these kinds of attacks are.
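To make the four stages concrete, here is a minimal, purely illustrative sketch of such a pipeline in Python. All function names, parameters, and the variance-based scoring heuristic are assumptions for illustration only; they are not Pindrop's actual methods, which the company has not published in detail.

```python
import math

def filter_audio(samples, window=5):
    """Stage 1 (illustrative): smooth raw samples with a moving
    average to suppress high-frequency noise."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def extract_features(samples, frame_size=160):
    """Stage 2 (illustrative): compute a simple per-frame energy feature."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [sum(x * x for x in f) / len(f) for f in frames if f]

def segment(features, seg_len=10):
    """Stage 3 (illustrative): break the feature sequence into
    fixed-length segments."""
    return [features[i:i + seg_len]
            for i in range(0, len(features), seg_len)]

def score_segment(seg):
    """Stage 4 (illustrative): map each segment to a score in (0, 1].
    Here, low energy variance yields a score near 1 -- a naive
    stand-in for a real detector's trained model."""
    mean = sum(seg) / len(seg)
    var = sum((x - mean) ** 2 for x in seg) / len(seg)
    return 1.0 / (1.0 + var)

def analyze(samples):
    """Run all four stages and return one score per segment."""
    feats = extract_features(filter_audio(samples))
    return [score_segment(s) for s in segment(feats)]

# Example: a synthetic one-second "clip" (440 Hz tone at 8 kHz)
clip = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
scores = analyze(clip)
print(len(scores))  # 8000 samples -> 50 frames -> 5 segments
```

A production detector would of course replace the toy variance heuristic with trained models over rich spectral features, but the stage-by-stage structure — clean, featurize, segment, score — mirrors the process described above.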

“Pindrop’s deepfake detection engine found, with a 99% likelihood, that this deepfake is created using ElevenLabs or a TTS system using similar components,” Vijay Balasubramaniyan, co-founder and CEO of Pindrop, wrote in a blog post at the time. “Even though the attackers used ElevenLabs this time, it is likely to be a different Generative AI system in future attacks, and hence it is imperative that there are enough safeguards available in these tools to prevent nefarious use.”

And while these kinds of extreme, national circumstances may be fairly rare, fraudulent robocalls constantly target everyday people, often seeking financial information; the result can be significant monetary loss as well as eroding trust in the telecommunications network. And the calls are only set to become more convincing and more dangerous as GenAI truly takes off, the FCC warned. That makes its move toward official classification even more important, it said.

“As our investigative targets use more and more sophisticated and clandestine means such as generative AI voice-cloning technology and ‘spoofing’ to obtain sensitive data and defraud consumers, the C-CIST classification tool will allow us to better coordinate with our state, federal, and global regulatory and law enforcement partners to take on these bad actors,” said Loyaan A. Egal, chief of the Enforcement Bureau and chair of the Privacy and Data Protection Task Force.

Source: www.darkreading.com