Rogue insiders and external attackers have become a growing concern for enterprise business applications.
External attackers leverage stolen credentials to impersonate insiders and connect to applications, while insiders themselves are insufficiently monitored in SaaS and home-grown applications. That leaves organizations exposed to employees and admins who might misuse their access or engage in malicious activity.
Detection solutions for users, networks, and devices are based on two main technologies: rules and patterns that define illegal or malicious behavior; and statistical volumetric/frequency methods based on averages and standard deviations of activities, such as the number of logins or emails.
These technologies are often referred to as user entity behavioral analytics (UEBA). They set baselines for average, standard deviation, median, and other statistical metrics, and then detect abnormal values using these baselines.
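As a rough sketch of that volumetric approach (generic SQL over a hypothetical auth_events table with user_id and event_time columns, not any particular vendor's schema), the query below builds a per-user baseline of daily login counts and flags days that fall more than three standard deviations from the mean:

    -- Hypothetical schema: auth_events(user_id, event_time)
    WITH daily AS (
        -- daily login counts per user
        SELECT user_id, CAST(event_time AS DATE) AS day, COUNT(*) AS logins
        FROM auth_events
        GROUP BY user_id, CAST(event_time AS DATE)
    ),
    baseline AS (
        -- per-user mean and standard deviation of those daily counts
        SELECT user_id, AVG(logins) AS mean_logins, STDDEV(logins) AS sd_logins
        FROM daily
        GROUP BY user_id
    )
    SELECT d.user_id, d.day, d.logins
    FROM daily d
    JOIN baseline b ON b.user_id = d.user_id
    WHERE b.sd_logins > 0
      AND ABS(d.logins - b.mean_logins) > 3 * b.sd_logins;  -- flag outlier days

This is exactly the kind of single-activity statistic the critics below take aim at: the threshold is trivial to compute, but a user with genuinely irregular habits will trip it constantly.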
Users Don’t Always Follow Rules
Doron Hendler, co-founder and CEO of RevealSecurity, says rules and UEBA have been effective due to major commonalities in the network, device, and user access layers: The market by and large uses a limited set of network protocols and a handful of operating systems.
“However, when it comes to the application layer, UEBA has failed due to the vast dissimilarities between applications,” he says.
Hendler explains that over a decade ago, the security market adopted statistical analysis to augment rule-based solutions to provide more accurate detection for the infrastructure and access layers.
“However, UEBA failed to deliver as promised to dramatically increase accuracy and reduce false positive alerts due to a fundamentally mistaken assumption: that user behavior can be characterized by statistical quantities, such as the average daily number of activities,” he says.
He argues this mistaken assumption is built into UEBA, which characterizes a user by an average of activities. “In reality though, people don’t have average behaviors, and it is thus futile to try and characterize human behavior with quantities such as ‘average’, ‘standard deviation’, or ‘median’ of a single activity,” he says.
UEBA Only Works With the Right Data
David Swift, principal security strategist at Netenrich, says too many companies go into UEBA without changing their thinking about how security event management should work.
“Before ever talking to a vendor, a customer should identify the most important data to the business — these will indicate the log data needed — and define the use cases that would constitute a threat, which define the individual indicators and triggers used to build content,” he says. Then they must build models that correlate multiple events, and combine multiple correlations, for positive confirmation.
“UEBA only works with the right data,” Swift adds. “Most failed implementations never pulled in identity data, or key applications. Without identity, there is no ‘user’ in UEBA. Without application events, it’s still solving the same old problem — malware detection.”
From his perspective, UEBA is highly successful when a company-critical application and IAM data are included in the deployment.
“When a new business-critical application is analyzed for anomalies, the value to the business when we find insiders and compromised accounts is high,” he explains. “When UEBA is used as better malware detection and new data sources aren’t used, it’s destined to fail.”
As for false positives, which UEBA is supposed to help reduce, Swift notes that anomaly-based rules were never meant to have zero false positives.
“Threat chains were always meant to combine multiple indicators into a model with low false positives,” he explains. “It’s always been about models that link multiple indicators together, if we’re going to reduce false positives.” He adds that when done well, threat chains do yield a low (roughly 3%) false-positive rate.
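A minimal sketch of that threat-chain idea, again in generic SQL (PostgreSQL-style interval syntax, with a hypothetical indicator_events table): rather than alerting on each anomaly, it alerts only when several distinct indicators fire for the same user within a day.

    -- Hypothetical schema: indicator_events(user_id, indicator, event_time),
    -- one row per anomaly already raised (new geo, rare app, odd hour, ...)
    SELECT user_id,
           COUNT(DISTINCT indicator) AS distinct_indicators
    FROM indicator_events
    WHERE event_time > NOW() - INTERVAL '1 day'
    GROUP BY user_id
    HAVING COUNT(DISTINCT indicator) >= 3;  -- require several independent signals

Requiring independent signals to coincide is what pushes the false-positive rate down, since any single anomaly is usually benign on its own.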
Use Cases for UEBA
Mike Parkin, senior technical engineer at Vulcan Cyber, says that UEBA can be successful in cases where the user’s behavior is very consistent.
For example, with call center personnel, who work from specific locations at specific times, changes in their behavior are obvious.
“On the other hand, people who work in the field, such as salespeople visiting customers, are much more difficult to predict,” he says.
Although he doesn’t think the assumption that individuals have “average behaviors” is entirely mistaken, Parkin says the margin of error for people’s behavior is “very, very” broad.
He notes some characteristics, such as typing cadence, can be very distinct, but work patterns, including locations and resource access, can be much more variable. “Keeping UEBA applications focused on the kind of behaviors they can accurately predict will make them more effective, as will the applications themselves improving their analytics to better predict a broader range of behaviors,” he adds.
From Swift’s perspective, there is no “average” — there is only learned behavior and anomalous behavior.
“People are creatures of habit,” he says. “Learning what’s unique about a user or a machine isn’t hard.”
In database terms, this means building a second store alongside the raw events: SQL statements along the lines of SELECT DISTINCT identify the unique, normal events, which are then counted and summed into a per-user profile (a sketch follows below).
“It’s pretty simple to build behavior profiles, and they do work,” Swift says. “Peer anomalies — you did something others like you don’t do — are a bit less cut and dry, and many are snowflakes. But even with peer groups like title and department, most fall within the norms.”
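A sketch of that profile-building step, assuming a hypothetical app_events table with user_id, action, resource, and event_time columns: the profile is a separate table of the distinct (user, action, resource) combinations already seen, and any event with no match in it is a candidate anomaly.

    -- Learned behavior: every (user, action, resource) combination seen so far
    CREATE TABLE IF NOT EXISTS user_profile AS
    SELECT user_id, action, resource,
           COUNT(*) AS times_seen,
           MAX(event_time) AS last_seen
    FROM app_events
    GROUP BY user_id, action, resource;

    -- Anomalous behavior: recent events with no match in the profile
    SELECT e.user_id, e.action, e.resource, e.event_time
    FROM app_events e
    LEFT JOIN user_profile p
           ON p.user_id = e.user_id
          AND p.action = e.action
          AND p.resource = e.resource
    WHERE p.user_id IS NULL
      AND e.event_time > NOW() - INTERVAL '1 day';

In practice the profile table would be refreshed on a schedule, so that new behavior confirmed as benign is absorbed into the baseline rather than alerting forever.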
Parkin points out that not every UEBA application is created equal: effectiveness varies widely from one product to another, and even within a single product as it examines different aspects of behavior.
“Overall, [UEBA] can be a valuable addition to the stack, but it’s not a silver bullet that can magically identify every threat,” he says.
Source: www.darkreading.com