Trusted insiders can harm an enterprise in all kinds of ways, from intellectual property theft, financial fraud and data breaches to espionage, sabotage and even terrorism. Moreover, the root causes of their acts can range from malicious intent to willful negligence — sometimes even from pure carelessness.
Typically, one case will look very different from the next, and it is precisely this complexity and behavioral variability that makes finding insider threats so tricky. The IP thief may be motivated by greed, while the saboteur is driven by disgruntlement and the spy by a nation’s or competitor’s interest. When the motivations differ, their underlying risk indicators differ as well; just look at the yawning gap between indicators of malice vs. sloppiness.
Add it all up and, if you’re responsible for InfoSec or PerSec at your organization, you have a daunting task on your hands. Detection is hard enough; prevention, the real holy grail, will be far harder still.
I’ve been a long-time proponent of using Bayesian models to solve an array of wicked security problems. In a two-part blog post in CSO Online in early 2017, I first described their capabilities and addressed some common objections to their use, and then explored their unique power in security analytics applications like insider threat mitigation.
Factor in an additional year’s worth of user complaints about excessive false positives and analyst overload, and it’s even clearer today that Bayesian model-based analytics are a unique force multiplier. Meanwhile, security analytics approaches that rely only on machine learning or rules-based systems seem to create new problems even as they solve existing ones. It’s also clear that analytics tools relying solely on data sourced from network and device logs can’t capture the kinds of psychological and behavioral factors so vital to insider threat detection, or to similarly complex security challenges like account compromise.
This got me thinking: how well understood are the real-world benefits of Bayesian modeling, especially when applied to the insider threat problem? It turns out they are quite well understood, but mostly by data scientists rather than security decision-makers. A search of the scientific literature reveals any number of research studies in which a Bayesian model that captured experts’ beliefs and inferred probabilities from them proved superior to other AI and analytics approaches. The studies are carefully designed and executed, and their conclusions are unequivocal: Bayesian models, properly built and applied, are terrific at predictively identifying risk from insider threats, meaning they are great not just for detection but for prevention as well.
Take Pacific Northwest National Laboratory (PNNL). Nearly eight years ago it conducted an experiment using Bayesian models with the express purpose of finding malicious insider threats, because “any attempt to seriously address the insider threat, particularly through proactive means, must consider behavioral indicators in the workplace in addition to more traditional workstation monitoring methods.” Exactly.
PNNL recruited human-resources specialists and captured a list of 12 “psychosocial” behaviors that they judged to be highly indicative of future malicious insider risk, including disgruntlement, stress, anger-management issues, disregard for authority, confrontational behavior and lack of dependability. The 12 behavioral indicators were implemented as binary (i.e., true/false) random-variable nodes in a Bayesian inference network, or model. Prior probabilities, based on the specialists’ subjective judgments, were then assigned to each indicator variable, along with relative weights. Finally, the experts determined the relative influence of each random variable on the model’s risk output.
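To make those mechanics concrete, here is a minimal sketch in Python of how such an expert-elicited model can be wired up. It assumes a naive-Bayes structure (each indicator conditionally independent given a single binary risk node), which is a simplification of PNNL’s actual network; the six indicator names come from the list above, while the other six and all probability values are invented purely for illustration.

```python
# Minimal naive-Bayes sketch of an expert-elicited insider-risk model.
# Six indicator names come from PNNL's published list as cited above;
# the remaining six and ALL probabilities are illustrative assumptions.

INDICATORS = {
    # name: (P(indicator=True | high risk), P(indicator=True | low risk))
    "disgruntlement":      (0.80, 0.10),
    "stress":              (0.60, 0.25),
    "anger_management":    (0.55, 0.05),
    "disregard_authority": (0.70, 0.08),
    "confrontational":     (0.50, 0.10),
    "undependable":        (0.45, 0.15),
    # ...the other six indicators would be elicited the same way
}

P_HIGH_RISK = 0.05  # expert prior that an arbitrary employee is high-risk


def risk_posterior(observations: dict) -> float:
    """Return P(high risk | observed indicators). Indicators absent from
    `observations` are simply skipped, so the expert-assigned prior
    carries the inference when evidence is missing."""
    p_high, p_low = P_HIGH_RISK, 1.0 - P_HIGH_RISK
    for name, seen in observations.items():
        t_high, t_low = INDICATORS[name]
        p_high *= t_high if seen else (1.0 - t_high)
        p_low *= t_low if seen else (1.0 - t_low)
    return p_high / (p_high + p_low)


print(f"{risk_posterior({'disgruntlement': True, 'stress': True}):.3f}")
```

Note how the model degrades gracefully: with only two of twelve indicators observed, it still returns a calibrated probability rather than failing or demanding a complete feature vector.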
When PNNL asked a group of human evaluators to rate 24 employee cases on a 10-point risk scale from ‘highest concern’ to ‘no concern’, it found striking similarities between their consensus views as to the employees’ likely future riskiness and the results produced when the same employee data were run through the Bayesian model. (For comparison purposes, PNNL also ran the data through a linear regression, a feed-forward artificial neural network and a counting model, but found the Bayesian model: [1] was better suited to working with missing data because it used prior probabilities; [2] provided useful probability estimates where the other methods could not; and [3], at least in comparison to the neural network, was “more acceptable to users because it provides simpler explanations of why specific risks are assigned.”)
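That explainability advantage is easy to see in code: because the naive-Bayes posterior factors into per-indicator likelihood ratios, each observed behavior’s push toward (or away from) the high-risk hypothesis can be reported directly, something a neural network cannot do as transparently. The snippet below extends the earlier sketch, reusing its hypothetical INDICATORS table; it is an illustration of the general idea, not PNNL’s method.

```python
import math

def explain(observations: dict) -> None:
    """Print each observed indicator's log-likelihood-ratio contribution
    to the high-risk hypothesis: positive values raise the risk
    estimate, negative values lower it."""
    for name, seen in observations.items():
        t_high, t_low = INDICATORS[name]  # table from the sketch above
        p_h = t_high if seen else 1.0 - t_high
        p_l = t_low if seen else 1.0 - t_low
        print(f"{name:>20} = {seen}: {math.log(p_h / p_l):+.2f}")

explain({"disgruntlement": True, "stress": True, "undependable": False})
```

An analyst reviewing an alert sees not just a score but the handful of behaviors that drove it, which is exactly the “simpler explanations of why specific risks are assigned” that PNNL’s evaluators found more acceptable.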
In a paper PNNL published on the experiment, the authors wrote: “…test results showed that using the twelve indicators and a good model, the insider threat risk among employees can be assessed to be highly correlated with expert HR judgments.” Other benefits of the Bayesian model approach were also apparent to PNNL: “…the ‘average’ risk predictions generated by a model representing these experts’ consolidated wisdom is better than the prediction that an individual expert can provide due to possible information processing limitations, individual biases, or varying experiences. An expert system model also enables the automatic screening of staff members, which is consistent [with] and independent of the experiences an individual human resources staff may have.”
PNNL concluded: “We believe that if the developed model is incorporated to monitor employees with proper recording of the behavioral indicators, and combined with detection and classification of cyber data from employees’ computer/network use, the integrated system will empower a HR/cyber/insider threat team with enhanced situation awareness to facilitate the detection and prevention of insider crimes.”
At Haystax Technology, we’ve achieved precisely these kinds of results when deploying our Haystax for Insider Threat solution to government and private-sector organizations, and we have done it in exactly the same way: by combining Bayesian models, which excel at analyzing subjective indicators of risk, with machine learning and other AI techniques for more quantitative analysis of relevant activities and events.
The benefit of a model-driven approach to insider threat mitigation is not simply that it can prevent potentially huge financial, technological or reputational losses to an employer. Another benefit we have seen, as PNNL put it, is that it creates “a window of opportunity for dealing with the personnel problems affecting these subjects,” thus “helping the employee before a bad situation turns worse.”
# # #
Note: A version of this article first appeared in CSO Online on November 20, 2017.