By Chetan Venkatesh, CEO and Co-Founder of Macrometa

The simplest answer is…

No.

But if history has taught security professionals anything, it's that the most secure network is the one without any users. It also happens to be the least useful network.

So what are we actually asking when we discuss security and the edge? Most often, the question hides a subtext:

“In a new deployment methodology involving edge data (in a Point of Presence or on a new spectrum) distributed globally, do the techniques that I use for security today still apply?”

The answer to this question differs based on your experience and deployment architecture. However, there are some commonalities.

Our industry has developed standard requirements for physical security, software security, access control, and so on, which tend to be rolled up into a series of compliance schemes. These compliance programs absolutely apply in the era of edge computing and edge data centers.

The edge tends to conjure up the image of an outdated server living in a remote office (perhaps shoved in a corner under a dusty desk). This is no longer true. Rather, massive deployments of hardware enabling real-time, low-latency experiences (regardless of geographic region) are becoming increasingly normal and, I would assert, will become more prevalent in the future.

Your current compliance knowledge and expertise remain relevant. There are new concepts to learn, of course, and new considerations as data proliferates across the edge in global footprints. But the basis remains the same. It's extremely important that the tools adopted in edge deployments support your desire to stay compliant with data regulations.

When choosing a vendor, ensure that you have the controls required to define what data and events are cached, stored, and processed, and, given a global deployment, that you can define this on a per-region basis. When internal compliance requirements are met and the appropriate certifications are obtained, edge deployments enable a variety of new and exciting capabilities that were not previously feasible.
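To make that concrete, a per-region data policy might be expressed along the lines of the sketch below. This is a minimal illustration in Python; the RegionPolicy model, the region keys, and the cache/store/process flags are all hypothetical, not any particular vendor's API.

```python
# A minimal sketch of per-region data controls, using hypothetical
# policy names and fields -- not any specific vendor's API.

from dataclasses import dataclass

@dataclass
class RegionPolicy:
    cache: bool      # may this region hold data in its edge cache?
    store: bool      # may this region persist data at rest?
    process: bool    # may this region run compute over the data?

# Illustrative policies: e.g., EU and US PoPs persist and process
# locally, while an APAC PoP is allowed to cache only.
POLICIES = {
    "eu-west":  RegionPolicy(cache=True, store=True,  process=True),
    "us-east":  RegionPolicy(cache=True, store=True,  process=True),
    "ap-south": RegionPolicy(cache=True, store=False, process=False),
}

def allowed(region: str, action: str) -> bool:
    """Return True if the given action is permitted in the region."""
    policy = POLICIES.get(region)
    if policy is None:
        return False  # default-deny for unknown regions
    return getattr(policy, action, False)

assert allowed("eu-west", "store")
assert not allowed("ap-south", "process")
```

The default-deny posture for unknown regions is a deliberate choice in this sketch: data should flow to a new Point of Presence only after a policy has been written for it.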

With the basics achieved, what becomes even more interesting is what edge deployment paradigms mean for the security and compliance workflows companies can offer.

First, it's important to understand that these edge deployments exist to extend capabilities while also providing a fast round-trip response time. Today, it's absolutely possible to achieve a P90 round-trip latency of less than 50ms globally. This speed, when coupled with data controls, powers exciting new capabilities.

Consider, for example, real-time event correlation.

In a globally deployed environment, you're able to offer workflows that can detect complex event patterns and isolate threats across multiple nodes in real time. In an industry like financial services, the ability to offer both real-time decisioning and real-time compliance depends on strict event correlation. You can also imagine this being broadly applicable to FINRA compliance, which mandates reporting of communications and over-the-counter transactions.
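To make the idea tangible, here is a minimal sketch of one such correlation rule in Python: flag an account that transacts from several distinct edge regions inside a short window. The event shape, the 60-second window, and the three-region threshold are illustrative assumptions, not a production rule.

```python
# A minimal sketch of cross-node event correlation: flag an account
# that transacts from several distinct edge regions in a short window.
# Window size and threshold are illustrative assumptions.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
REGION_THRESHOLD = 3  # distinct regions in-window that trigger an alert

# account_id -> deque of (timestamp, region) pairs, oldest first
recent_events: dict[str, deque] = defaultdict(deque)

def correlate(account_id: str, region: str, now: float | None = None) -> bool:
    """Record an event and return True if the pattern fires."""
    now = time.time() if now is None else now
    events = recent_events[account_id]
    events.append((now, region))
    # Drop events that have aged out of the correlation window.
    while events and now - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    regions = {r for _, r in events}
    return len(regions) >= REGION_THRESHOLD

# Transactions from three regions within a minute raise the flag.
assert not correlate("acct-42", "us-east", now=0.0)
assert not correlate("acct-42", "eu-west", now=10.0)
assert correlate("acct-42", "ap-south", now=20.0)
```

In a real edge deployment, the interesting part is that each node contributes its local events and the correlation state converges across regions; this single-process sketch only shows the shape of the rule itself.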

As another example, think about real-time threat and anomaly detection.

At first glance, this may seem to be a subset of the above, but the ability to perform real-time detection of complex event patterns and threat isolation enables security alerting in a powerful new fashion. Data manipulation functions can spot emerging trends or patterns in data, letting you respond quickly to threats and act early on business opportunities.
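As a sketch of the idea, the snippet below flags values that fall far outside a rolling baseline. The z-score threshold, window size, and warm-up count are illustrative assumptions; a real deployment would tune these per signal.

```python
# A minimal sketch of streaming anomaly detection: flag values that
# fall far outside a rolling baseline. Threshold and window size are
# illustrative, not a prescription.

import statistics
from collections import deque

class RollingAnomalyDetector:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold  # stdevs from the mean that trigger a flag

    def observe(self, value: float) -> bool:
        """Return True if the value looks anomalous against the baseline."""
        anomalous = False
        if len(self.values) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.values)
            stdev = statistics.pstdev(self.values)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.values.append(value)
        return anomalous

detector = RollingAnomalyDetector()
for v in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]:
    detector.observe(v)       # build a quiet baseline
assert detector.observe(100)  # a sudden spike is flagged
```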

Or, let’s spend some time discussing the need for access control.

Perhaps you have some familiarity with ABAC (attribute-based access control) or RBAC (role-based access control). Edge deployments, with real-time filtering, enable an entirely new category of "location-based access control". You can think of it as all the existing power of RBAC/ABAC coupled with location-based identifiers describing where the user, application, or device is connecting from. For companies maintaining personally identifiable information (PII), you could define granular policies that restrict workflows when certain parts of a schema are accessed. In essence, physical location (latitude, longitude, even altitude) becomes an input to complex rules that enrich, enhance, present, and analyze data. This allows you to delegate analytics on sensitive or encrypted data to the edge compute and then serve the result set only when fully complete.
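Here is a minimal sketch of the idea in Python: an ordinary role check gated by where the caller connects from. The roles, field names, and geofence bounds are hypothetical; this is the shape of the rule, not a real policy engine.

```python
# A minimal sketch of location-based access control: a role check
# gated by the caller's connection coordinates. Roles, PII fields,
# and geofence bounds below are hypothetical.

from dataclasses import dataclass

@dataclass
class GeoFence:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

# Policy: PII columns may only be read by analysts connecting from
# inside an approved region (a rough EU bounding box here).
EU_FENCE = GeoFence(min_lat=35.0, max_lat=71.0, min_lon=-10.0, max_lon=40.0)
PII_FIELDS = {"ssn", "date_of_birth", "home_address"}

def can_read(role: str, field: str, lat: float, lon: float) -> bool:
    if field not in PII_FIELDS:
        return True  # non-sensitive fields: role rules alone apply
    return role == "analyst" and EU_FENCE.contains(lat, lon)

assert can_read("analyst", "ssn", lat=48.8, lon=2.3)       # Paris: allowed
assert not can_read("analyst", "ssn", lat=40.7, lon=-74.0) # New York: denied
```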

The workflow customizations are nearly endless. And the ideation is exciting. In fact, the workflows mentioned above are based on conversations that I have daily.

So then, is the edge really secure?

As secure as your deployments today.

Or, perhaps, it is not inherently insecure.

It could even be more secure as you optimize your infrastructure globally.

Importantly, edge deployments (when combined with stateful services, intelligent filtering, and fine-grained data control) can be deployed extremely securely. And, you can enable workflows that enhance compliance and offer unique, secure capabilities for your users and customers.

The era of edge deployments is here… and it is here to stay. Start planning your compliance wisely and make sure to dream about the new, exciting workflows you can now enable.

About the Author

Chetan Venkatesh is CEO and Co-Founder of Macrometa, a Silicon Valley-based edge computing startup. He is a technology startup veteran and executive focused on enterprise data center, cloud infrastructure, and software products and companies. He has 20 years of experience building primary data storage, databases, and data replication products, and holds a dozen patents in the area of distributed computing and data storage. Chetan can be reached online at https://twitter.com/Macrometa and at the company website https://www.macrometa.com/

Source: www.cyberdefensemagazine.com