We described in Part 1 of this post why access-badge data is a valuable source of evidence as to whether an employee, contractor, vendor or customer poses a risk to an organization, and how the Haystax for Insider Threat user behavior analytics (UBA) solution exploits badge time-stamp data. Now let’s look at insights that can be gleaned from other kinds of badge data.
Consider the case of Kara, a mid-level IT systems administrator employed at a large organization. Kara has privileged access and also a few anomalous badge times, so the Haystax ‘events’ generated from her badge data are a combination of [AccessAuthorized] and [UnusualAccessAuthorizedTime] (all events are displayed in green). But because Kara’s anomalous times are similar to those of her peers, nothing in her badge data significantly impacts her overall risk score in Haystax.
Kara’s employer uses a badge logging system that includes not just access times but also unsuccessful access attempts (also known as rejections). With this additional information, we find that Kara has significantly more access rejection events — [BadgeError] and [UnusualBadgeErrorTime] — than her peers, which implies that she is attempting to access areas she is not authorized to enter. Because there are other perfectly reasonable explanations for this behavior, we apply these anomalies as weak evidence to the [AccessesFacilityUnauthorized] model node (all nodes are displayed in red). And Haystax imposes a decay half-life of 14 days on these anomalous events, meaning that after two weeks their effect will be reduced by half.
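The half-life decay described above follows the standard exponential form: an event’s evidence weight is halved for every half-life that elapses. As a rough sketch (not the actual Haystax implementation, whose function name and weighting scale are assumptions here):

```python
from datetime import datetime

def decayed_weight(initial_weight: float, event_time: datetime,
                   now: datetime, half_life_days: float = 14.0) -> float:
    """Exponentially decay an event's evidence weight by its age.

    With the default 14-day half-life, an event's influence drops to
    half after two weeks, a quarter after four weeks, and so on.
    """
    age_days = (now - event_time).total_seconds() / 86400.0
    return initial_weight * 0.5 ** (age_days / half_life_days)

# Exactly two weeks after the event, the weight is halved.
w = decayed_weight(1.0, datetime(2024, 1, 1), datetime(2024, 1, 15))
```

Older anomalies therefore never vanish abruptly; they simply contribute less and less to the overall risk score.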
Now let’s say that the employer’s badge system also logs the reason for the access rejection. For example, a pattern of lost or expired badges — [ExcessiveBadgeErrorLostOrExpired] — could imply that Kara is careless. Because losing or failing to renew a badge is a more serious indicator — even if there are other explanations — we would apply this as medium-strength evidence to the model node [CarelessTowardDuties] with a decay half-life of 14 days. If the error type indicates an insufficient clearance for entering the area in question, we can infer that Kara is attempting access above her authorized level [BadgeErrorInsuffClearance]. Additionally, a series of lost badge events could be applied as negative evidence to the [Conscientious] model node.
A consistent pattern of insufficient clearance errors [Excessive/UnusualBadgeErrorInsuffClearance] would be applied as strong evidence to the node [AccessesFacilityUnauthorized] with a longer decay half-life of 30 days to reflect the increased seriousness of this type of error (see image below). If the error indicates an infraction of security rules, we can infer that Kara is disregarding her employer’s security regulations, and a pattern of this behavior would be applied as strong evidence to the model node [NeglectsSecurityRules] with a decay half-life of 60 days.
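The error-type rules in the last two paragraphs amount to a routing table: each badge-error pattern maps to a model node, an evidence strength, and a decay half-life. A minimal sketch of such a table, using the node names and half-lives given above (the `BadgeErrorSecurityInfraction` event name and the dictionary structure are assumptions for illustration):

```python
# Hypothetical routing of badge-error patterns to (model node,
# evidence strength, decay half-life in days), per the rules above.
EVIDENCE_RULES = {
    "UnusualBadgeErrorTime":
        ("AccessesFacilityUnauthorized", "weak", 14),
    "ExcessiveBadgeErrorLostOrExpired":
        ("CarelessTowardDuties", "medium", 14),
    "ExcessiveBadgeErrorInsuffClearance":
        ("AccessesFacilityUnauthorized", "strong", 30),
    "BadgeErrorSecurityInfraction":       # assumed event name
        ("NeglectsSecurityRules", "strong", 60),
}

def route_evidence(event_type: str):
    """Return (target node, strength, half-life) for a badge event, if any."""
    return EVIDENCE_RULES.get(event_type)
```

Note how more serious error types carry both stronger evidence and longer half-lives, so their effect on the risk score is larger and fades more slowly.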
Finally, let’s say Kara’s employer makes the ‘Door Name’ field available to Haystax. This not only enables us to detect location anomalies — [UnusualAccessAuthorizedLocation] and [UnusualBadgeErrorLocation] — in addition to time anomalies, but now the Haystax model can infer something about the area being accessed. For example, door names that include keywords like ‘Security,’ ‘Investigations’ or ‘Restricted’ are categorized as sensitive areas. Those with keywords like ‘Lobby’, ‘Elevator’ or ‘Garage’ are classified as common areas. Recreational areas are indicated by names such as ‘Break Room’, ‘Gym’ and ‘Cafeteria.’
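Keyword-based categorization like this can be sketched in a few lines. The keyword lists below come straight from the examples above; the function name and the precedence order (sensitive first) are assumptions:

```python
# Illustrative door-name classifier using the keywords cited above.
SENSITIVE = ("security", "investigations", "restricted")
RECREATIONAL = ("break room", "gym", "cafeteria")
COMMON = ("lobby", "elevator", "garage")

def categorize_door(door_name: str) -> str:
    """Classify a badge reader's door name by keyword match."""
    name = door_name.lower()
    if any(k in name for k in SENSITIVE):
        return "sensitive"
    if any(k in name for k in RECREATIONAL):
        return "recreational"
    if any(k in name for k in COMMON):
        return "common"
    return "unknown"
```

A real deployment would tune the keyword lists per facility, but even this simple mapping lets downstream rules treat a rejection at a ‘Restricted Lab’ door very differently from one at the ‘East Lobby.’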
This additional information gives us finer granularity in generating badge events. An anomalous event from a common area [UnusualCommonAreaAccessAuthorizedTime/Location] is much less significant than one from a sensitive area [UnusualSensitiveAreaAccessAuthorizedTime/Location], which we would apply to the model node [AccessesFacilityUnauthorized] as strong evidence with a decay half-life of 60 days. Combining this information with the error type gives us greater accuracy, and therefore stronger evidence; a pattern of clearance errors when Kara attempts to gain access to a sensitive area [UnusualBadgeErrorInsuffClearanceSensitiveAreaTime] is of much greater concern than a time anomaly for a common area [UnusualAccessAuthorizedCommonAreaTime]. If the data field for number of attempts is available, the evidence becomes stronger still: if Kara has tried to enter a sensitive area for which she has an insufficient clearance five times within one minute, we clearly have a problem.
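Detecting that burst of attempts is a simple sliding-window check. A sketch, where the five-attempts-in-one-minute threshold comes from the example above and the function name is an assumption:

```python
from datetime import datetime, timedelta

def rapid_attempts(timestamps, threshold=5, window=timedelta(minutes=1)):
    """True if `threshold` or more access attempts fall within any `window`.

    `timestamps` is an iterable of datetimes for one user at one door.
    """
    ts = sorted(timestamps)
    # Slide a window of `threshold` consecutive attempts and check its span.
    for i in range(len(ts) - threshold + 1):
        if ts[i + threshold - 1] - ts[i] <= window:
            return True
    return False
```

Five rejections spaced ten seconds apart would trip the check; five spread across a workday would not.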
There are even deeper insights to be gleaned from badge data. For example:
- We could infer that Kara is [Disgruntled] if she is spending more time in recreational areas than her peers.
- Similarly, if Kara is spending less time in recreational areas than her peers, we could infer that she is [UnderWorkStress].
- In some facilities, accessing the roof might even indicate a threat to oneself.
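The peer comparisons behind the first two inferences can be expressed as a standard score: how far a user’s time in recreational areas sits from the peer-group mean, in units of the peer standard deviation. A minimal sketch (function name and the idea of a z-score cutoff are assumptions, not the Haystax method):

```python
from statistics import mean, stdev

def recreation_zscore(user_minutes, peer_minutes):
    """Standard score of a user's recreational-area time vs. peers.

    A large positive score might feed a [Disgruntled]-style inference;
    a large negative one an [UnderWorkStress]-style inference.
    """
    mu, sigma = mean(peer_minutes), stdev(peer_minutes)
    return (user_minutes - mu) / sigma if sigma else 0.0
```

Either extreme is only weak evidence on its own; it matters when combined with other indicators, as the next example shows.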
Finally, consider a scenario in which an individual has several unusual events that seem innocuous on their own, but when combined indicate a concerning behavior. If within a short timeframe Kara accesses a new building [UnusualBadgeAccessLocation] at an unusual time [UnusualBadgeAccessTime] and prints a large number of pages [UnusualPrintVolume] from a printer she has never used before [UnusualPrintLocation], a purely badge-focused or network-focused monitoring system will generate a succession of isolated alerts in a sea of them — while potentially missing the larger and more troubling picture that could have been gleaned by ‘connecting the dots.’
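Connecting those dots is, at its simplest, a check for several distinct anomaly types co-occurring within one time window. A sketch using the four event names from the scenario above (the two-hour window and the function itself are illustrative assumptions):

```python
from datetime import timedelta

def correlated_anomalies(events, required_types, window=timedelta(hours=2)):
    """True if all `required_types` occur within a single `window`.

    `events` is a list of (timestamp, event_type) tuples for one user.
    """
    events = sorted(events)
    for i, (t0, _) in enumerate(events):
        # Collect every anomaly type seen within `window` of this event.
        seen = {etype for t, etype in events[i:] if t - t0 <= window}
        if required_types <= seen:
            return True
    return False
```

Each event alone is an isolated, low-priority alert; it is the conjunction within a short timeframe that is concerning.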
The Haystax model, by contrast, is designed to give events more importance when combined with other events and detected sequences of events. This combination of events would significantly impact Kara’s score (see image below), and an insider threat analyst would see the score change displayed automatically as an incident in Haystax and be able to conduct a deeper investigation.
Decades of research studies and experience gained from real-world insider threat events have demonstrated that malicious, negligent and inadvertent insiders alike exhibit adverse attitudes and behaviors, sometimes months or even years in advance of the actual event.
Badge data, like network data, won’t tell the whole story on its own. But it can deliver critical insights not available anywhere else. And when its component pieces are analyzed and blended with data from other sources — for example evidence of professional, personal or financial stress — the result is contextualized, actionable insider-threat intelligence. It’s a user behavior analytics approach that focuses on the user, not the network or the device.
# # #
Julie Ard is the Director of Insider Threat Operations at Haystax Technology, a Fishtech Group company.
NOTE: For more information on Haystax’s ‘whole-person’ approach to user behavior analytics, download our in-depth report, To Catch an IP Thief.