Cloud breaches are inevitable.

It’s the reality we live in. The last few years have demonstrated that breaches occur no matter how much security organizations put in place. Organizations have grown so complex that a single mistake or vulnerability can lead to a compromise, and attackers have become more motivated, sophisticated, and dedicated, which means breaches are here to stay. At the same time, organizations are transitioning to the cloud, and attackers are shifting their focus accordingly, rapidly ramping up attacks on cloud environments.

While this means that cloud breaches are inevitable, that doesn’t mean we can’t do anything about them. By better understanding cloud attacks, organizations can better prepare for them. Then, hopefully, they can contain and respond to attacks faster, reducing their impact and averting a crisis.

This two-part series will unravel real-world attacks, investigate how they unfolded, and share insights on practical ways organizations can respond to cloud attacks in today’s threat landscape.

SaaS Marketplace Hack Leads to Major Breach

In the last few years, software-as-a-service (SaaS) platforms have been replacing traditional enterprise applications, making them easier for organizations to adopt and manage. Part of the value such platforms provide is the ability to integrate and expand rapidly, supporting users’ ever-growing demands for more functionality. To enhance their platforms further, SaaS vendors are creating marketplaces that allow third-party providers to add functionality and integrations for their users. These marketplaces, however, can introduce substantial third-party risk, as the following scenario shows.

In one such case, GitHub notified a company of a potential risk but did not provide any specific indicators of unauthorized access. Instead, GitHub sent only a generic notice that DeepSource, one of the marketplace apps the company had previously been using, had been breached, making it hard to tell whether the organization was affected. An initial review of the company’s GitHub logs did not help, as it showed no access to its code by DeepSource.

The reason for this was rather simple, and it is at the heart of how many SaaS marketplaces operate. A few months before the breach, one of the company’s developers had tried out the DeepSource app, granting it access to the code under his own username. When the attackers used DeepSource’s access to download the entire code repository, what appeared in the logs was a pull of the repository performed under the name of a legitimate user. The only indicator that it was malicious was an irregular IP address, which was eventually tied to other known attacks.
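One practical way to surface this kind of abuse is to review Git access events for source IPs that fall outside the ranges your developers and CI systems normally use. The sketch below is a minimal, hypothetical example: it assumes audit entries have been exported as newline-delimited JSON with "actor", "action", and "actor_ip" fields (the field names, event names, and network ranges are all assumptions; adjust them to whatever your log source actually emits).

```python
import json
import ipaddress

# Networks we expect legitimate pulls to come from (office, VPN, CI).
# These ranges are placeholders; replace them with your own.
TRUSTED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # example: office egress
    ipaddress.ip_network("198.51.100.0/24"),  # example: CI runners
]

def is_trusted(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in TRUSTED_NETWORKS)

def flag_suspicious_pulls(log_path: str):
    """Yield repo clone/fetch events that came from untrusted IPs."""
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            # "action", "actor", and "actor_ip" are assumed field names;
            # map them to your exported audit log's actual schema.
            if event.get("action") in ("git.clone", "git.fetch") and not is_trusted(
                event.get("actor_ip", "0.0.0.0")
            ):
                yield event

if __name__ == "__main__":
    for e in flag_suspicious_pulls("github_audit_log.jsonl"):
        print(f'{e["actor"]} pulled from {e["actor_ip"]} ({e["action"]})')
```

In this incident, that kind of baseline comparison is essentially what turned a seemingly legitimate pull into a lead worth chasing.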

At this point, it became clear that the entire code repository had been stolen, and a full-blown response was needed to contain and recover from the breach. As with most code-leakage cases, the immediate concern was access to secrets (passwords and keys) in the code. While hardcoding secrets in code is generally bad practice, it is still common, and this case was no different. By identifying the relevant secrets in the code, the company could predict the attackers’ next steps, which, as expected, involved accessing some of its Amazon Web Services (AWS) infrastructure. Acting quickly on that insight, the company was able to block access to all relevant resources, contain the breach, and recover before more damage could be done.
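One way to operationalize that step is sketched below, assuming the exposed credentials were AWS access keys: scan the leaked codebase for anything matching the well-known access key ID pattern, then deactivate any matching IAM keys so the attacker can no longer use them. The repository path is a placeholder, pagination is omitted for brevity, and a real response would also cover other secret types and rotate (not merely disable) credentials.

```python
import os
import re
import boto3

# AWS access key IDs follow a well-known pattern (they typically start with "AKIA").
AWS_KEY_ID_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_exposed_key_ids(repo_path: str) -> set:
    """Walk a checked-out repository and collect anything that looks like an AWS access key ID."""
    found = set()
    for root, _dirs, files in os.walk(repo_path):
        for name in files:
            try:
                with open(os.path.join(root, name), "r", errors="ignore") as f:
                    found.update(AWS_KEY_ID_RE.findall(f.read()))
            except OSError:
                continue
    return found

def deactivate_exposed_keys(exposed: set):
    """Disable any IAM access key whose ID appeared in the leaked code (pagination omitted)."""
    iam = boto3.client("iam")
    for user in iam.list_users()["Users"]:
        for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
            if key["AccessKeyId"] in exposed:
                iam.update_access_key(
                    UserName=user["UserName"],
                    AccessKeyId=key["AccessKeyId"],
                    Status="Inactive",
                )
                print(f'Deactivated {key["AccessKeyId"]} for {user["UserName"]}')

if __name__ == "__main__":
    deactivate_exposed_keys(find_exposed_key_ids("./leaked-repo"))  # placeholder path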

Cryptominer Injected into a Virtual Machine Template

What if one could mine cryptocurrency at somebody else’s expense? That idea is at the heart of many of the cryptomining attacks we see today: attackers take over cloud resources and run cryptominers on them, collecting cryptocurrency while the hacked organization pays the cloud compute bill.

In a recent incident, a company identified unknown files on 18 AWS EC2 machines it was running in the cloud. Looking at the files, it became clear the company had fallen victim to the ongoing TeamTNT Watchdog cryptomining campaign. It was initially unclear how the attackers had managed to infect so many EC2 instances, but as the investigation unfolded, it became apparent that instead of targeting individual machines, the attackers had targeted the Amazon Machine Image (AMI) template used to create each machine. During the creation of the original image, there was a short window in which a service was misconfigured and exposed to remote access. TeamTNT used automated tools to scan the network, found the exposed service, and immediately planted its miners there; the miners were then duplicated onto every new machine created from the image.
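When an AMI turns out to be compromised, one immediate containment step is to enumerate every instance launched from it so they can be isolated, investigated, and rebuilt from a clean image. A minimal sketch using boto3 (the AMI ID is a placeholder):

```python
import boto3

# Placeholder for the compromised image; substitute the real AMI ID.
SUSPECT_AMI_ID = "ami-0123456789abcdef0"

def instances_from_ami(ami_id: str):
    """Return IDs of all EC2 instances launched from the given AMI."""
    ec2 = boto3.client("ec2")
    instance_ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(Filters=[{"Name": "image-id", "Values": [ami_id]}]):
        for reservation in page["Reservations"]:
            instance_ids.extend(i["InstanceId"] for i in reservation["Instances"])
    return instance_ids

if __name__ == "__main__":
    for iid in instances_from_ami(SUSPECT_AMI_ID):
        print(iid)  # candidates for isolation, forensics, and rebuild from a clean image
```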

This highlights another common attack pattern: implanting cryptominers in publicly available AMIs distributed through the AWS Marketplace.
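A related precaution is to check who actually owns an AMI before launching from it, for example by comparing the image’s owner account against an allowlist of accounts you trust (Amazon’s own, your organization’s, or a vetted vendor’s). A rough sketch, with placeholder account and image IDs:

```python
import boto3

# Placeholder account IDs; replace with the AWS accounts you actually trust.
TRUSTED_OWNER_IDS = {"137112412989", "111111111111"}

def is_ami_trusted(ami_id: str) -> bool:
    """Return True if the AMI is owned by an account on the allowlist."""
    ec2 = boto3.client("ec2")
    images = ec2.describe_images(ImageIds=[ami_id])["Images"]
    return bool(images) and images[0]["OwnerId"] in TRUSTED_OWNER_IDS

if __name__ == "__main__":
    ami = "ami-0123456789abcdef0"  # placeholder
    print("trusted" if is_ami_trusted(ami) else "untrusted: do not launch")
```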

As demonstrated by these cases, cloud attacks are here to stay. They’re different from the attacks we’re used to observing, so it’s time to prepare for them better. Stay tuned for part two, where we will dive into cloud ransomware and how to avoid it.

The second part of this article is scheduled for Thursday, Sept. 1, at 10 Eastern.

Source: www.darkreading.com