COMMENTARY

Second of two parts. (Read Part 1: “Stress-Testing Our Security Assumptions in a World of New & Novel Risks.”)

Achieving security in a future of “unseen until it’s too late” threats forces us to look beyond the endless cycles of discover and patch, identify and neutralize, and sense and respond. Instead, we must build resilience by stress-testing our assumptions and preparing for a future in which those assumptions no longer hold.

By deconstructing fundamental assumptions, we can proactively plan for, and thus begin to achieve, future resilience. A basic framework for this work includes the following steps:

  1. Identify a basic assumption and its associated dependencies.

  2. Stress-test the assumption through theoretical compromise or degradation, envisioning a future state in which the assumption is no longer valid.

  3. Identify the risks that emerge in that future state.

  4. Develop mitigations for those risks.
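To make these steps concrete, here is a minimal sketch of how a single stress-test might be captured as a simple checklist in code. The data model and field names are illustrative assumptions, not part of any published standard; the example values are drawn from the enterprise-centric scenario examined below.

    from dataclasses import dataclass, field

    # Illustrative data model for one assumption stress-test.
    # The fields map to the four framework steps above.
    @dataclass
    class AssumptionStressTest:
        assumption: str            # Step 1: the basic assumption
        dependencies: list[str]    # Step 1: what relies on it
        failure_scenario: str      # Step 2: the future state where it no longer holds
        emergent_risks: list[str] = field(default_factory=list)  # Step 3
        mitigations: list[str] = field(default_factory=list)     # Step 4

    # Example populated from the first scenario discussed below.
    test = AssumptionStressTest(
        assumption="The enterprise is the focal point of cybersecurity",
        dependencies=[
            "enterprise-focused frameworks (NIST CSF, CIS Controls)",
            "expertise concentrated inside corporate structures",
        ],
        failure_scenario="Corporations erode into networks of distributed, independent workers",
    )
    test.emergent_risks.append("Workers exposed on insecure home and public networks")
    test.mitigations.append("Shift security education and alerting to public and nonprofit entities")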

This approach is theoretical, and thus prone to error. Unrestrained imagination can lead to more fiction than reality. But the only way to prepare for unforeseen risks is to imagine the unimaginable and consider ways to mitigate those risks today, while we have the opportunity. 

To illustrate this process, let’s look at some basic assumptions.

Enterprise-centric Cybersecurity

We know that the enterprise is where most data is created, processed, managed, transmitted, and stored, so we assume the enterprise is the focal point of cybersecurity. Similarly, most critical infrastructure is built, operated, and maintained by enterprises (both public and private sector organizations), so efforts to secure the world’s central nervous system must be focused there. It remains a reasonable assumption. The NIST Cybersecurity Framework, CIS Critical Security Controls, and ISO 27000 series guidelines all focus on the enterprise. Even the National Cybersecurity Strategy assumes the primary role of the enterprise. Personal health information, protected by the Health Insurance Portability and Accountability Act (HIPAA), is assumed to be managed and protected by healthcare payers and providers.

But what if the forces of the information age and the AI revolution weaken the corporation, eroding it or replacing it with networks of independent, distributed workers (a shift already underway via remote work and the gig economy), a growing public sector, or something else we don’t yet imagine?

This scenario carries numerous risks, some already visible among remote workers who rely on insecure home or public networks. The human “attack surface” is already the most vulnerable part of the enterprise; the erosion of the enterprise would likely expose humans to even more cyber exploitation.

One of the cybersecurity benefits of an enterprise-centric approach has been that experience and expertise can be concentrated where cybersecurity is “happening.” If the corporate structure erodes, so too could the associated ability to implement well-developed security controls (e.g., CIS Controls).

Mitigations could include increased efforts to make humans more cybersecure outside the enterprise, such as through education, public awareness campaigns, and alerting protocols (similar to the 911 system for police or emergency medical response). While some of these efforts are already underway, the focus, emphasis, and responsibility would shift away from corporations to public and nonprofit entities.

Data Ownership

We generally assume that humans create data through decision-making, designing, building, organizing, and managing. It naturally follows that humans own (and must protect) that data. Even the ownership of machine-generated data is tied to the human owners of those machines.

But what if the generation of data shifts to non-human entities? We already see that happening with generative AI (GenAI). For now, the GenAI data sphere remains relatively small and limited in scope. But we are not far from autonomous GenAI, which may be deployed to routinely and proactively generate new data, make recommendations, and even take steps to manage processes previously controlled by humans. 

Given that GenAI platforms require vast computing resources and robust large language models (LLMs) to be useful, the most popular platforms will likely be shared resources, much as cloud computing has become. Who, then, would own and protect that GenAI-produced data? What would prevent the generation and dissemination of data that is flawed, or even dangerous?

Mitigating future risks could involve implementing secure-by-design principles so that security controls scale as GenAI platforms “grow.” Proper segmentation could enable discrete users to leverage shared foundational LLMs while preventing data leakage beyond each user’s scope (work that is already underway). There is also talk of AI “kill switches”: emergency stop mechanisms to ensure human primacy. GenAI is an area where security must be designed in from the outset.
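As a thought experiment, here is a minimal sketch of what tenant segmentation and an emergency-stop mechanism might look like on a shared GenAI platform. Everything here (the class names, methods, and the _halted flag) is a hypothetical illustration, not an existing API.

    from dataclasses import dataclass

    @dataclass
    class Document:
        tenant_id: str
        text: str

    class SharedGenAIPlatform:
        def __init__(self) -> None:
            self._store: list[Document] = []
            self._halted = False  # the "kill switch": an emergency stop under human control

        def ingest(self, tenant_id: str, text: str) -> None:
            self._store.append(Document(tenant_id, text))

        def emergency_stop(self) -> None:
            self._halted = True

        def retrieve_context(self, tenant_id: str) -> list[str]:
            # Segmentation: only documents belonging to the requesting tenant
            # are ever visible, preventing leakage across user boundaries.
            return [d.text for d in self._store if d.tenant_id == tenant_id]

        def generate(self, tenant_id: str, prompt: str) -> str:
            if self._halted:
                raise RuntimeError("Platform halted by emergency stop")
            context = self.retrieve_context(tenant_id)
            # A real platform would pass the prompt plus the scoped context to a
            # shared foundation model; here we only echo the isolation boundary.
            return f"[output for {tenant_id} using {len(context)} scoped documents]"

The design choice worth noting is that tenant scoping is enforced at retrieval time, so the shared foundation model never sees another tenant’s documents, while the stop mechanism sits above generation entirely, independent of any single user.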

The Way Forward

This basic framework for stress-testing assumptions is a way to build future resilience. Chief security officers (CSOs) and cybersecurity professionals must look carefully at the assumptions they take for granted. As reasonable as those assumptions may be, they have a shelf life. And we know from experience that the more basic the assumption, the more devastating the compromise.
