The recent ransomware incident at Rackspace that took down the company’s hosted Microsoft Exchange server environment has focused attention on the often-risky gamble that security teams take when they choose to mitigate a vulnerability rather than apply a patch for it.

Last week, Rackspace disclosed that a Dec. 2 intrusion into the hosting company’s Exchange server service environment resulted from its decision to hold off on applying a patch that Microsoft had released in November for a server-side request forgery (SSRF) vulnerability in Exchange Server (CVE-2022-41080). The vulnerability, when chained with another previously disclosed remote code execution (RCE) flaw in Exchange Server — tracked as CVE-2022-41082 — gives attackers a way to take complete control of affected servers.

Deferred Patching

According to Rackspace’s chief security officer, Karen O’Reilly-Smith, the company held off on applying the patch for the SSRF flaw over concerns that it would cause disruptive authentication errors. Instead, Rackspace applied a mitigation that Microsoft had issued for the vulnerability, believing it would be effective. O’Reilly-Smith said that Microsoft’s notes on CVE-2022-41080 merely described it as a privilege escalation vulnerability and made no mention that it could be part of an RCE chain.

A Microsoft spokesman tells Dark Reading that the company has nothing to share at this time on Rackspace’s comments about the patch for the SSRF flaw or the notes that accompanied its disclosure.

Rackspace’s decision to hold off on patching the vulnerability is not unusual, says John Bambenek, principal threat hunter at Netenrich. “Often mitigations are preferable, especially in highly public resources where there is sensitivity to downtime,” he says. In fact, the more public-facing an application is, the more likely organizations are to opt for mitigations, he says.

“Most of the time it can be a good bet if the mitigations are sound and complete,” Bambenek notes. “But it requires a really savvy professional who can read between the lines to make a sound judgment.”

In Rackspace’s case, its mitigation strategy failed because an attacker — later identified as the Play ransomware group — found a way to use CVE-2022-41080 to trigger the CVE-2022-41082 RCE flaw in its environment. Up to that point, security researchers had observed attackers triggering the RCE flaw only via a different Exchange Server SSRF vulnerability, tracked as CVE-2022-41040, in the combination known as ProxyNotShell. The attack caused widespread service outages for Rackspace customers, many of which are small and midsize businesses.
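The shape of that failure is easy to see in miniature. Microsoft’s interim ProxyNotShell guidance centered on an IIS URL Rewrite rule that blocked requests matching the known Autodiscover-to-PowerShell pattern. The Python sketch below is illustrative only: the blocking regex loosely mirrors the published rewrite string, and the alternate request path is hypothetical. It shows how a filter that encodes one known attack path has nothing to say about an entry point its author did not anticipate.

```python
import re

# Pattern-based blocking rule, loosely modeled on the URL Rewrite string
# Microsoft published as an interim ProxyNotShell mitigation. The production
# rule went through several revisions; this regex is illustrative only.
BLOCK_PATTERN = re.compile(r".*autodiscover\.json.*powershell.*", re.IGNORECASE)

def is_blocked(request_uri: str) -> bool:
    """Return True if the mitigation rule would reject this request URI."""
    return bool(BLOCK_PATTERN.match(request_uri))

# The request shape the rule anticipated: the CVE-2022-41040 SSRF entry point.
known_chain = "/autodiscover/autodiscover.json?a=x@example.com/powershell/"

# A hypothetical request through a different front end. The real bypass used
# CVE-2022-41080; this URI is made up purely to show that a path the rule's
# author never anticipated matches nothing and is waved through.
alternate_entry = "/owa/some/other/illustrative/path"

print(is_blocked(known_chain))      # True: the anticipated chain is stopped
print(is_blocked(alternate_entry))  # False: the unanticipated path sails past
```

The lesson is that the rule was sound for the chain it was written against; it simply said nothing about a second SSRF reaching the same back end.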

“Rackspace put mitigations in place in relation to the ProxyNotShell chain disclosed by Microsoft in late September, prior to patches being available, which didn’t happen until November,” an external adviser to Rackspace tells Dark Reading.

When the patches did become available, Rackspace held off on applying them because of concerns over reported authentication issues related to the patches and because the company already had the appropriate mitigations in place, the adviser says. 

“At that time, there were no known or disclosed remote code execution risks associated with CVE-2022-41080, which CrowdStrike discovered while investigating the Rackspace incident,” the adviser adds.

Skipping Security Patches: A Risky Gambit

The incident highlights the risks organizations take when they rely too much on mitigations alone to keep them safe from vulnerability exploits, says Mike Parkin, senior technical engineer at Vulcan Cyber.

“Deploying vendor-recommended mitigations for a known vulnerability is not supposed to be the end of the issue,” he says. “They’re what you do until the vendor in question can develop a patch and you can deploy it.”

The only time it’s OK to mitigate and not patch is when the vendor has no patch for the vulnerability yet, or there’s some technical reason why an organization cannot deploy it in a target environment, Parkin says. 

“There are going to be cases where change-management procedures delay deploying the patch. But a good process from both change management and security perspectives is to have patches going in as soon as possible while meeting stability concerns,” he says, adding that this is especially true when there are known exploits in the wild targeting a particular vulnerability.

Patching and vulnerability remediation in general remain major challenges for organizations. A study that vulnerability management vendor Edgescan conducted last year showed that organizations still take an average of 60 days to fix critical vulnerabilities of the sort that tripped up Rackspace.

The study found that 57% of observed vulnerabilities on enterprise networks were more than two years old and a startling 17% were more than five years old. All of these vulnerabilities had working exploits in the wild, and adversaries — including nation-state actors and cybercriminal groups — had exploited many of them.
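Figures like these are straightforward to reproduce against an organization’s own scan data. Below is a minimal sketch, using hypothetical remediation records, of how average days-to-fix and the age buckets Edgescan reports might be computed; a real program would pull these dates from a scanner or ticketing system.

```python
from datetime import date

# Hypothetical records: (date disclosed, date fixed) for critical findings.
# A fix date of None means the vulnerability is still open.
findings = [
    (date(2022, 9, 30), date(2022, 11, 29)),
    (date(2022, 7, 12), date(2022, 9, 10)),
    (date(2020, 3, 2), None),
    (date(2017, 6, 1), None),
]

as_of = date(2023, 1, 10)  # evaluation date for still-open findings

ages = [((fixed or as_of) - disclosed).days for disclosed, fixed in findings]
avg_days_open = sum(ages) / len(ages)
share_over_two_years = sum(a > 2 * 365 for a in ages) / len(ages)
share_over_five_years = sum(a > 5 * 365 for a in ages) / len(ages)

print(f"Average days open: {avg_days_open:.0f}")
print(f"Older than two years: {share_over_two_years:.0%}")
print(f"Older than five years: {share_over_five_years:.0%}")
```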

Dwindling Time to Exploitation

Making matters worse, cybercriminals have become much faster at exploiting new vulnerabilities: the time between initial disclosure and exploit availability has been shrinking rapidly.

The trend pushed the US Cybersecurity and Infrastructure Security Agency (CISA) to issue a directive in November 2021 that requires all federal civilian executive branch agencies to remediate known exploited vulnerabilities within a specific — usually two-week — timeframe. CISA has also advocated that all organizations refer regularly to its Known Exploited Vulnerabilities (KEV) catalog to see which vulnerabilities attackers are exploiting in the wild so they can remediate them immediately. CISA adds new vulnerabilities to the catalog only if a patch or clear remedial action is available from the affected vendor.
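The catalog is published as a machine-readable JSON feed, so the regular check CISA recommends is simple to automate. The sketch below assumes the feed URL and field names as CISA publishes them at the time of writing (verify both before relying on them), and the CVE inventory is hypothetical.

```python
import json
import urllib.request

# KEV feed URL and field names as published by CISA at the time of writing;
# verify both before depending on them in production.
KEV_URL = (
    "https://www.cisa.gov/sites/default/files/feeds/"
    "known_exploited_vulnerabilities.json"
)

def load_kev() -> dict:
    """Fetch the KEV catalog and index its entries by CVE ID."""
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"]: entry for entry in catalog["vulnerabilities"]}

# Hypothetical list of CVEs present in your environment.
inventory = ["CVE-2022-41040", "CVE-2022-41080", "CVE-2022-41082"]

kev = load_kev()
for cve in inventory:
    entry = kev.get(cve)
    if entry:
        print(f"{cve}: known exploited; federal remediation due {entry['dueDate']}")
    else:
        print(f"{cve}: not in the KEV catalog")
```

If a CVE in your estate appears in the catalog, the directive’s message is to treat remediation as urgent rather than routine.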

Richard Stiennon, chief research analyst at IT-Harvest, says the fact that many companies still take 60 days to patch critical vulnerabilities is not surprising given the complexity of the task, especially for large organizations. Patching often involves scheduled downtime, which for many organizations tends to be scheduled for early weekend mornings, he says. The task involves taking down all affected servers, installing the patch, and rebooting and testing them before bringing the systems back up.

“Imagine you are a big company with 2,000 servers that need an emergency patch,” Stiennon says. “Of course, you would apply a mitigation first. You cannot do it the same day.”

Stiennon says cloud adoption has begun changing vulnerability management processes in many organizations. These days, a system that needs patching may be a container or a virtual machine. “Now the process is to mirror the production system, patch it, test it, and swap the updated instances into production with no downtime.”
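Here is a minimal sketch of that mirror-patch-test-swap flow. The cloud client and its method names are hypothetical stand-ins rather than any real SDK; the point is the ordering, in which production is never modified until a patched copy has passed its checks.

```python
from typing import Protocol

class CloudClient(Protocol):
    """Hypothetical cloud API; method names are stand-ins, not a real SDK."""
    def clone_instance(self, instance_id: str) -> str: ...
    def apply_patch(self, instance_id: str, patch_id: str) -> None: ...
    def health_check(self, instance_id: str) -> bool: ...
    def swap_into_production(self, old_id: str, new_id: str) -> None: ...
    def destroy(self, instance_id: str) -> None: ...

def patch_without_downtime(cloud: CloudClient, prod_id: str, patch_id: str) -> str:
    """Mirror production, patch the mirror, test it, then swap it in."""
    candidate = cloud.clone_instance(prod_id)   # mirror the production system
    cloud.apply_patch(candidate, patch_id)      # patch the copy, never production
    if not cloud.health_check(candidate):       # test before anything goes live
        cloud.destroy(candidate)                # discard the bad copy; prod untouched
        raise RuntimeError("patched instance failed verification")
    cloud.swap_into_production(prod_id, candidate)  # cut traffic to the patched copy
    cloud.destroy(prod_id)                      # retire the unpatched original
    return candidate
```

The failure mode is also the appeal: if the patched copy misbehaves, the rollback is simply not swapping, which removes much of the scheduled-downtime calculus Stiennon describes.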

Source: www.darkreading.com