Device diversity, cloud adoption, remote work, and the increasingly complex software supply chain have significantly expanded today’s attack surface. But despite increased year-over-year investment in security operations, most organizations still only have the resources to address 10% of the millions of issues present in their environment.

Security and risk leaders need to be practical and focus on the small percentage of exposures that represent the most risk to their organization. Security teams already have access to the intelligence they need to power risk-driven vulnerability prioritization, but to harness the full potential of their existing insights, they must first break down barriers caused by existing data siloes.

Everything within the digital ecosystem creates data — from automated network and vulnerability scanners to manual spreadsheets. Teams have to understand the role each element plays in the prioritization decision-making process, and they need to consider the threat and exposure management lifecycle to weigh the strengths, weaknesses, and opportunities of each resource.

Silo 1: Cyber Asset Management

There are ample approaches to creating a consolidated inventory of all assets and their associated risk posture: legacy spreadsheets, “traditional” network scanners and IT asset management tools, and cyber asset attack surface management (CAASM) platforms.

Depending on the approach, though, teams may only be looking at the “traditional” attack surface rather than everything present in a typical multicloud, decentralized, and well-segmented modern network. And while progress is being made in this category, these tools are still built on point-in-time, state-based insights; the lack of insight into attacker behavior limits their overall effectiveness.

Silo 2: Threat Detection and Response

On the other hand, threat detection and response tools are designed to help organizations understand their attack surface from the adversary’s perspective by analyzing network, user, and machine behavior. Security information and event management (SIEM) systems produce a considerable volume of valuable data, but alert overload makes it extremely difficult for teams to comb through it and extract the most pertinent information.

Additionally, threat detection and response platforms typically only monitor “known” assets for changes, whereas the greatest threat lies in changes made to unknown assets. So, while these platforms have come a long way in expediting response and remediation, they still lack visibility into exposures beyond typical software vulnerabilities and misconfigurations. Gartner predicts non-patchable attack surfaces will grow from less than 10% of the enterprise’s total exposure in 2022 to more than half by 2026.

Silo 3: Third-Party Intelligence

There are several ways to gauge the potential impact and exploitability of vulnerabilities, such as the Common Vulnerability Scoring System (CVSS), the Exploit Prediction Scoring System (EPSS), and vendor-specific scoring systems. CVSS is the most common method for prioritizing vulnerabilities.

The greatest risk of relying only on third-party guidance is that it doesn’t consider the organization’s unique requirements. For example, security teams still have to decide which patches to prioritize in a group of “severely critical” vulnerabilities (e.g., those with a CVSS score of 9.0 or higher).

In this case, it’s impossible to make an informed decision using these generic scores alone. Factors such as an asset’s location help teams determine the exploitability of a vulnerability within their own organization, while its interconnectivity gives teams an idea of the blast radius, or potential overall attack path.
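To make the idea concrete, here is a minimal sketch of how a team might break a tie among equally “critical” CVEs by folding in asset context. The field names, weights, and CVE labels are all hypothetical illustrations, not from the article or any real scoring standard:

```python
# Hypothetical sketch: rank CVSS >= 9.0 findings using asset context.
# "internet_facing" stands in for asset location (exploitability);
# "connected_assets" is a rough proxy for blast radius.

def risk_score(vuln):
    score = vuln["cvss"]
    if vuln["internet_facing"]:
        score *= 1.5  # illustrative weight for external reachability
    score *= 1 + 0.2 * vuln["connected_assets"]  # illustrative blast-radius weight
    return score

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "internet_facing": False, "connected_assets": 1},
    {"cve": "CVE-B", "cvss": 9.1, "internet_facing": True,  "connected_assets": 4},
    {"cve": "CVE-C", "cvss": 9.4, "internet_facing": True,  "connected_assets": 0},
]

critical = [v for v in vulns if v["cvss"] >= 9.0]
ranked = sorted(critical, key=risk_score, reverse=True)
print([v["cve"] for v in ranked])  # prints ['CVE-B', 'CVE-C', 'CVE-A']
```

Note that the highest raw CVSS score (CVE-A) drops to last place once context is applied, which is exactly the reprioritization the article argues for.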

Silo 4: Business Insights

From configuration management databases (CMDBs) to controls and dependency maps to data lakes — this list wouldn’t be complete without internal business tracking systems. These resources are critical to threat and exposure prioritization due to their strength in demonstrating the connections between devices and vulnerabilities, as well as the overall business criticality and dependency mapping.

But as enriching as they are, custom databases require a heavy manual lift not only to implement, but also to keep current. Given the rate at which environments change, they quickly become outdated, making it difficult to accurately track changes in security posture.

Change Is Programmatic, Not Tool-Centric

While each source listed above serves its own purpose and provides a unique layer of valuable insight, none of them serves as a single source of truth for navigating today’s sophisticated threat landscape. That said, they’re extremely powerful when they work together. When combined and correlated correctly, they reveal a comprehensive vantage point that enables teams to make better, more informed decisions.
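As a rough illustration of what “combined and correlated” can mean in practice, the sketch below joins exports from three hypothetical silos by asset ID. All names and records are invented for the example:

```python
# Hypothetical exports from three silos, keyed by asset ID.
inventory = {"srv-1": {"owner": "finance"}, "srv-2": {"owner": "dev"}}   # asset management
scanner = {"srv-1": ["CVE-X"], "srv-3": ["CVE-Y"]}                       # vulnerability scans
cmdb = {"srv-1": "business-critical", "srv-2": "low"}                    # business insights

# Correlate: enrich each scanned asset with ownership and criticality.
correlated = {}
for asset, cves in scanner.items():
    correlated[asset] = {
        "cves": cves,
        "owner": inventory.get(asset, {}).get("owner", "UNKNOWN"),
        "criticality": cmdb.get(asset, "unrated"),
    }

# srv-3 surfaces as an unknown asset: it has findings but no inventory
# record or business context, a gap no single silo would have revealed.
print(correlated["srv-3"])
```

The payoff is in the mismatches: assets the scanner sees but the inventory doesn’t, or critical systems with no recent scan data, only become visible when the sources are lined up against each other.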

Many of the valuable insights necessary to drive informed, risk-based decisions either get lost in the siloes of enterprise tech stacks or stuck between conflicting teams and processes. Although modern environments require equally progressive security, there isn’t a single tool or team that can repair this broken process.

Security leaders need to align their cyber asset intelligence to their primary use cases. That may mean mapping their vulnerability prioritization process using third-party intelligence, business context, and asset criticality, or targeting specific control frameworks, such as the NIST Cybersecurity Framework or the CIS Critical Security Controls, to use their security data to drive an effective security improvement program.

Source: www.darkreading.com