Technology designers begin by building a product and testing it on users. The product comes first; user input serves to confirm its viability and improve it. That approach makes sense. McDonald’s and Starbucks do the same. People can’t imagine new products, just as they can’t imagine new recipes, without experiencing them.
But this paradigm has also been extended to the design of security technologies, where we build programs for user protection and then ask users to apply them. That doesn’t make sense.
Security isn’t a product in its own right. People already use email, browse the Web, use social media, and share files and images. Security is an improvement layered over the things users already do online. It’s akin to asking people to wear a seat belt: a safeguard added to an activity, driving, that they already perform.
Time to Look at Security Differently
Our approach to security, though, is like teaching driver safety while ignoring how people actually drive. It all but ensures that users either blindly adopt a technology, believing it’s better, or, when forced, merely comply with it. Either way, the outcomes are suboptimal.
Take the case of VPN software. VPNs are heavily promoted to users as a must-have security and data-protection tool, yet most offer little to no verifiable protection. Worse, they invite risk: users who believe in those protections behave more recklessly online. Or consider the security awareness training now mandated by many organizations. Employees who find the training irrelevant to their specific use cases invent workarounds, often introducing security risks no one can enumerate.
There’s a reason for all this. Most security processes are designed by engineers with a background in developing technology products, and they approach security as a technical challenge. Users are just another input into the system, no different from software and hardware that can be programmed to perform predictable functions. The goal is to constrain actions to a predefined template of acceptable inputs so that outcomes become predictable. None of this is premised on what users need; it reflects a programming agenda set out in advance.
Examples of this can be found in the security functions programmed into much of today’s software. Take email apps: some let users check an incoming email’s source header, an important layer of information that can reveal a sender’s true identity, while others don’t. Or take mobile browsers: some let users inspect the quality of a site’s SSL certificate while others don’t, even though users have the same needs across browsers. A person doesn’t need to verify an SSL certificate or a source header only when using one particular app. What these differences reflect is each programming group’s distinct view of how its product should be used: a product-first mentality.
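To make these two checks concrete, here is a minimal Python sketch of what inspecting a source header and a site’s SSL certificate involves, using only the standard library. The sample message and the example.com hostname are placeholders, and a real app would surface this information in its interface rather than ask users to run code:

```python
import email
import socket
import ssl

# 1. Inspect an email's "Received" (source) headers, which trace the
#    servers a message passed through and can expose a spoofed sender.
#    The raw message below is a made-up placeholder.
raw_message = b"""Received: from mail.example.net ([192.0.2.10])
From: "Support" <support@example.com>
Subject: Verify your account

Click the link below.
"""
msg = email.message_from_bytes(raw_message)
for hop in msg.get_all("Received", []):
    print("Relay hop:", hop)

# 2. Fetch a site's TLS (SSL) certificate and show who issued it and
#    when it expires: the kind of "certificate quality" information a
#    browser could surface to the user.
hostname = "example.com"  # placeholder
context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
        print("Expires:", cert["notAfter"])
```

Both pieces of information already exist in every email and every TLS connection; whether users can see them depends entirely on what each app’s designers chose to expose.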
Users purchase, install, or comply with security requirements believing that the developers of these technologies deliver what they promise. That misplaced trust is why some users are even more cavalier in their online actions while using such technologies.
Time for a User-First Security Approach
It’s imperative that we invert the security paradigm: put users first, and then build defenses around them. This is not only because we must protect people but also because, by fostering a false sense of protection, we are fomenting risk and making users more vulnerable. Organizations also need this shift to control costs. Even as the world’s economies have teetered from pandemics and wars, organizational security spending has increased geometrically over the past decade.
User-first security must begin with an understanding of how people use computing technology. We have to ask: What makes users vulnerable to hacking via email, messaging, social media, browsing, and file sharing?
We have to disentangle the basis of risk and locate its behavioral, cognitive, and technical roots. This is the information developers have long ignored while building their security products, and it’s why even the most security-minded companies still get breached.
Pay Attention to Online Behavior
Many of these questions have already been answered. The science of security has explained what makes users vulnerable to social engineering. Because social engineering targets a variety of online actions, this knowledge can be applied to explain a wide swath of behaviors.
Among the factors identified are cyber-risk beliefs, the ideas users hold about how risky their online actions are, and cognitive processing strategies, the ways users mentally handle information, which dictate how much focused attention they give it when online. Another set of factors comprises media habits and rituals, shaped partly by the types of devices people use and partly by organizational norms. Together, beliefs, processing styles, and habits influence whether a piece of online communication (an email, message, webpage, or text) triggers suspicion.
Train, Measure, and Track User Suspicions
Suspicion is the unease a person feels when encountering something, the sense that something is off. It almost always prompts information seeking and, if the person is armed with the right knowledge or experience, leads to the detection and correction of deception. By measuring suspicion alongside the cognitive and behavioral factors behind phishing vulnerability, organizations can diagnose what made users vulnerable. This information can be quantified and converted into a risk index, which they can use to identify those most at risk (the weakest links) and protect them better.
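To illustrate, here is a minimal Python sketch of what such a risk index might look like. The factor names, weights, and scores below are hypothetical placeholders, not a published model; a real index would be calibrated against measured phishing outcomes:

```python
from dataclasses import dataclass

@dataclass
class UserRiskProfile:
    """Per-user factor scores, each normalized to the 0-1 range.
    All names and values here are hypothetical, for illustration only."""
    risky_beliefs: float         # inaccurate cyber-risk beliefs
    heuristic_processing: float  # low-attention, habitual processing
    habit_strength: float        # strength of media habits and rituals
    suspicion: float             # measured propensity to grow suspicious

# Illustrative weights; a real model would fit these to incident data
# rather than assume them.
WEIGHTS = {
    "risky_beliefs": 0.35,
    "heuristic_processing": 0.30,
    "habit_strength": 0.20,
    "suspicion": -0.15,  # suspicion is protective, so it lowers the index
}

def risk_index(profile: UserRiskProfile) -> float:
    """Weighted sum of factors, clamped to 0-1; higher means more at risk."""
    score = sum(
        weight * getattr(profile, name) for name, weight in WEIGHTS.items()
    )
    return max(0.0, min(1.0, score))

# Rank a (made-up) team from most to least at risk: the "weakest links".
team = {
    "alice": UserRiskProfile(0.8, 0.7, 0.9, 0.2),
    "bob": UserRiskProfile(0.2, 0.3, 0.4, 0.8),
}
for user in sorted(team, key=lambda u: risk_index(team[u]), reverse=True):
    print(f"{user}: {risk_index(team[user]):.2f}")
```

Even a simple ranking like this lets an organization direct extra protection toward the users who need it most instead of applying the same mandate to everyone.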
By capturing these factors, we can track how users get co-opted by various attacks, understand why they are deceived, and develop solutions to mitigate the risk. We can craft solutions around the problem as end users actually experience it. And we can do away with blanket security mandates, replacing them with solutions that are relevant to users.
After billions spent putting security technology in front of users, we remain just as vulnerable to the cyberattacks that first emerged on the AOL network in the 1990s. It’s time we changed this and built security around users.
Source: www.darkreading.com