Google’s Threat Analysis Group (TAG) spent 2022 working to disrupt the online presence of the pro-Chinese influence operation (IO) Dragonbridge (aka Spamouflage Dragon), wiping out more than 50,000 instances of activity across Twitter, YouTube, Blogger, and other channels.
The report added that despite producing plenty of content, Dragonbridge failed to attract an organic audience, mainly due to the low quality of the content, which mostly consists of apolitical, spammy, often nonsensical clips of sports, food, or animals.
In addition, “blurry visuals, garbled audio, poor translations, malapropisms, and mispronunciations are also common,” the report noted. “The content is often hastily produced and error-prone — for example, neglecting to remove Lorem Ipsum text from a video.”
Of the 56,771 YouTube channels created by Dragonbridge and deactivated by TAG last year, nearly 60% had zero subscribers, and 42% of the videos posted to them had zero views.
Roughly 95% of Blogger blogs received 10 or fewer views, and more than 96% of the posts had zero comments.
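Engagement figures like these (subscriber counts, view counts, comment totals) correspond to statistics that YouTube exposes publicly through the YouTube Data API. As a rough illustration only, and not a description of TAG's own tooling, the minimal Python sketch below shows how such statistics could be pulled for a hypothetical list of channel IDs; the API key and channel IDs are placeholders.

```python
# Minimal sketch (not TAG's tooling): pulling public engagement statistics
# for a set of YouTube channel IDs via the YouTube Data API v3.
# Requires the google-api-python-client package; API_KEY and CHANNEL_IDS
# below are hypothetical placeholders.
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"              # placeholder credential
CHANNEL_IDS = ["UCxxxxxxxxxxxxxxxxxxxxxx"]  # placeholder channel IDs under review

youtube = build("youtube", "v3", developerKey=API_KEY)

zero_subscriber_channels = 0
for i in range(0, len(CHANNEL_IDS), 50):      # the API accepts up to 50 IDs per call
    batch = ",".join(CHANNEL_IDS[i:i + 50])
    response = youtube.channels().list(part="statistics", id=batch).execute()
    for item in response.get("items", []):
        stats = item["statistics"]
        subs = int(stats.get("subscriberCount", 0))   # may be hidden by the channel
        views = int(stats.get("viewCount", 0))
        if subs == 0:
            zero_subscriber_channels += 1
        print(item["id"], "subscribers:", subs, "total views:", views)

print("channels with zero subscribers:", zero_subscriber_channels)
```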
Over the operation’s lifetime, Google has closed 100,960 accounts across multiple products, including YouTube, Blogger, and AdSense, the report said.
The group does generate some pro-China, anti-US messaging in Mandarin, English, and other languages: For instance, its content praises China’s COVID-19 pandemic response while criticizing the US for meddling in international affairs, with one video portraying voting as useless. But these themes represent a small fraction of the content.
Despite the nearly nonexistent levels of engagement, Dragonbridge continues to experiment with content formats and attempts to improve the generally low quality of its efforts, the report noted.
Dragonbridge joins other China-based IO campaigns, including HaiEnergy, a fake-news influence campaign leveraging at least 72 inauthentic news sites to push content strategically aligned with China’s political interests.
These operations can be dangerous: In the US, for instance, disinformation campaigns were deployed around last year’s midterm elections in an attempt to change the attitudes of undecided voters and energize supporters to get out and vote.
Google TAG researchers say that Dragonbridge likewise has the capability to become a more potent threat.
Ramping Up Activity During Political Flashpoints
Dragonbridge activity ramped up in July 2022 following then-US House Speaker Nancy Pelosi’s announcement of a possible visit to Taiwan, with the group’s rhetoric growing more belligerent as the Chinese People’s Liberation Army (PLA) prepared drills around the island.
Dragonbridge “displayed unusually coherent behavior in using uniform hashtags and titles across channels, while swiftly and repeatedly uploading topical, high-production-value content that was not interspersed with the usual misdirecting spam,” the report noted.
Despite the lack of community engagement and seemingly slipshod content, Dragonbridge manages an extensive network of Google accounts that it likely obtains from bulk account sellers; many of those accounts had previously been used for financially motivated activity before going dormant and being repurposed.
The report also noted that Dragonbridge is experimenting with higher-quality forms of content with real human voices instead of computer-generated narration, more sophisticated “news-like” chat formats, and animated political segments.
The persistent volume of content distribution and the network’s attempts to innovate in tactics and techniques remain a cause for concern, TAG noted.
Dragonbridge to Nowhere?
“What’s interesting is that nobody is questioning the volume of resources expended to address this,” says Andrew Barratt, vice president at Coalfire, a provider of cybersecurity advisory services. “This could be a mechanism used as part of a bait-and-switch-style scam, keeping Google busy with lots of takedowns — this content is clearly not being watched.”
Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of software-as-a-service (SaaS) for enterprise cyber-risk remediation, adds that while the operation may not appear successful, there are plenty of people out there who will fall for even the most outrageous misinformation.
“While these appear ineffective, even against the gullible, there’s a good probability that there are a lot of them out there that are more successful and have managed to evade deletion,” he says.
Parkin adds that there are multiple possible reasons for doing what the group seems to be doing, from simply wasting resources to using these obviously spammy accounts to train a machine learning model on how to avoid being identified and removed.
Barratt agrees that Dragonbridge’s relentless onslaught could simply be a demonstration of capability, showing that the group can find ways to drain Google’s resources by using the company’s own tools against it.
“The level and depth of effort here goes way beyond the traditional script-kiddie disruptions, indicating it could even be a group looking to show off capabilities,” he adds. “Nobody seems to be saying directly that it’s a state-sponsored endeavor, which could perhaps be to wave off further political tensions.”
Parkin cautions that while the threat looks to be at the “script-kiddie” level, that may be a mask for something more subtle.
“While it may not be an actively state-sponsored group, the sheer volume does imply more resources than the typical script kiddies can pull together,” he says.
Barratt points out that with access to major cloud providers, anyone can easily achieve scale; the interesting piece is that much of the real cost is absorbed by the platforms being targeted.
“Custom bot development can spin up accounts, drop content, and then move to promote it; [it is really a small expense for Dragonbridge] compared with the cost of hosting the video, reviewing it, and taking it down,” he says. “It’s a very high return on investment if you measure your returns in the cost your adversary faces.”
From his perspective, this is more likely to be a disinformation equivalent of performing military maneuvers at the border.
“Someone is showing someone else what they can do and how hard it is to stop,” he says.
Parkin adds that it’s possible to programmatically create multiple accounts, post videos, and cross-link them all in the comments.
“If that’s how it’s being done, then it doesn’t take massive resources, and it’s possible a small group with the right skills could pull it off,” he says. “But without looking at Google’s data, it’s hard to say whether that’s what was done here.”
Dark Reading reached out to Google TAG for clarification, and will update this post with any additional information.
Source: www.darkreading.com