Organizations with established insider threat detection programs often deploy security solutions optimized for network log monitoring and aggregation. That makes sense: these systems excel at identifying anomalous activity outside an employee's typical routine, such as printing from an unfamiliar printer, accessing sensitive files, emailing a competitor, visiting prohibited websites or inserting a thumb drive without proper authorization.
But sole reliance on anomaly detection via network-focused security tools has several critical drawbacks. First, few organizations have the analytic resources to manage the excessive number of alerts these tools generate. Second, the tools cannot inherently supply the related ground truths that would give analysts the context to quickly 'explain away' obvious false positives. And third, they rely primarily on host and network activity data, which doesn't capture the underlying human behaviors that are the true early indicators of insider risk.
By their very nature, standalone network monitoring systems miss the large trove of insights that can be found in an organization’s non-network data. These additional information sources can include travel and expense records, on-boarding/off-boarding files, job applications and employment histories, incident reports, investigative case data and much more.
One such source that is often overlooked (and thus underutilized) is data from access control systems. Most employees carry smart cards or key fobs that identify them and grant access to a building or a room, and that badge activity tells a richly detailed story of each badge-holder's routines and patterns. It also generates distinctive signals when employees deviate from their established norms.
Although not typically analyzed in conventional security analytics systems, badge data is a valuable source of context and insight in Haystax Technology’s Haystax for Insider Threat user behavior analytics (UBA) solution. Haystax ingests a wide array of information sources — badge data included — and analyzes the evidence they contain via an analytics platform that combines a probabilistic model with machine learning and other artificial intelligence techniques.
The Haystax model does the heavy analytical lifting, assessing anomalous behavior against the broader context of 'whole-person trustworthiness' to reason about whether the behavior is indicative of risk. And because the model is a Bayesian inference network, it updates Haystax's 'belief' in an individual's level of trustworthiness every time new data is applied. The analytic results are displayed as a dynamic risk score for each individual in the system, allowing security analysts and decision-makers to pinpoint their highest-priority risks.
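To make the Bayesian updating concrete, here is a minimal sketch of a single-node belief update. This is not the Haystax model itself (a full inference network reasons over many interdependent nodes); it simply illustrates how one new piece of evidence shifts a prior belief in trustworthiness, with the likelihood values chosen purely for illustration.

```python
def update_belief(prior_trustworthy: float,
                  p_evidence_given_trustworthy: float,
                  p_evidence_given_not: float) -> float:
    """One Bayes update: posterior probability the person is trustworthy
    after observing a single new piece of evidence."""
    numerator = p_evidence_given_trustworthy * prior_trustworthy
    denominator = numerator + p_evidence_given_not * (1.0 - prior_trustworthy)
    return numerator / denominator

# A mildly suspicious event (assumed twice as likely to come from a
# non-trustworthy insider) nudges a strong prior belief down only slightly:
belief = update_belief(0.99, 0.05, 0.10)
```

Run repeatedly as evidence arrives, each posterior becomes the next prior, which is why a lone anomaly barely moves a well-established belief while a pattern of corroborating evidence moves it substantially.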
In some cases, the badge data is applied directly to specific model nodes. In other cases, Haystax implements detectors that calculate the ‘unusualness’ of each new access event against a profile of overall access; only when an access event exceeds a certain threshold is it applied as evidence to the model. (We also consider the date the access event occurs, so that events which occurred long ago have a smaller impact than recent events. This so-called temporal decay is accomplished via a ‘relevance half-life’ function for each type of event.)
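The 'relevance half-life' idea above can be sketched as a simple exponential decay, where an event's evidentiary weight halves with each half-life that passes. The function name and the 30-day half-life below are illustrative assumptions, not Haystax's actual parameters.

```python
from datetime import date

def decayed_weight(event_date: date, today: date, half_life_days: float) -> float:
    """Weight an event by exponential temporal decay: after one half-life
    it counts half as much, after two half-lives a quarter, and so on."""
    age_days = (today - event_date).days
    return 0.5 ** (age_days / half_life_days)

# An access event from 60 days ago, scored with a hypothetical 30-day
# half-life, carries a quarter of the weight of a same-day event:
w = decayed_weight(date(2019, 7, 1), date(2019, 8, 30), half_life_days=30.0)
# w == 0.25
```

Giving each event type its own half-life lets fast-changing signals (like badge swipes) fade quickly while slower-moving ones (like an incident report) stay relevant longer.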
Beyond the user's identity, the time-stamp of the badge event is the minimum information required to glean insights from badge data. If an employee typically arrives around 9:00 AM each workday and leaves at 5:30 PM, then badging in at 6:00 AM on a Sunday will trigger an anomalous event. However, if the employee shows no other signs of adverse or questionable behavior, Haystax will of course note the anomaly but 'reason' that this behavior alone is not a significant event — one of the many ways it filters out the false positives that so often overwhelm analysts. The employee's profile might even contain mitigating information showing that the early weekend hour was the result, say, of a new project assignment with a tight deadline. And the anomaly could be placed into further context with another Haystax capability called peer-group analysis, which compares like individuals' behaviors with each other rather than comparing one employee to the workforce at large.
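The arrival-time scenario above can be sketched as a toy detector that scores a badge-in against the badge-holder's own history. Real detectors profile far more than arrival hour (day of week, location, sequence of doors), but a simple per-person z-score, with an assumed threshold of three standard deviations, shows the basic mechanic of gating which events become model evidence.

```python
import statistics

def arrival_anomaly_score(arrival_hour: float, history: list) -> float:
    """Score how far an arrival deviates from this badge-holder's own
    historical arrivals, in standard deviations (a simple z-score)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(arrival_hour - mean) / stdev

# Typical ~9:00 AM arrivals; a 6:00 AM badge-in scores far outside the norm:
history = [8.9, 9.0, 9.1, 9.0, 8.8, 9.2, 9.0]
score = arrival_anomaly_score(6.0, history)
flagged = score > 3.0  # only above-threshold events are applied as evidence
```

Because the profile is per-person (and, with peer-group analysis, per-cohort), a 6:00 AM badge-in that is routine for a night-shift employee would score near zero rather than raising an alert.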
But badge time-stamps tell only a small part of the story. Part 2 of this post will explore how, when an organization has additional badge data fields available, Haystax can infer far more about its employees' activities.
# # #
Julie Ard is Director of Insider Threat Operations at Haystax Technology, a Fishtech Group company.
NOTE: Want to learn more about Haystax user behavior analytics? Join us for our upcoming Fishtech Group Pro Tour sessions at the following Top Golf locations: Dallas on September 27 and Minneapolis on October 2. Each session will be followed by a networking social and — of course — golf!