For more than a year now, industry analysts have been warning that user behavior analytics (UBA) would become fragmented as a market segment and could eventually recede into irrelevance.
In a new Haystax Insider Threat Report presenting the results of an industry survey conducted by our partner Cybersecurity Insiders, we see similar signs. For example, while 84% of companies surveyed “monitor user behavior in one way or another,” only 29% said they used UBA to detect insider threats. And those that did, according to the survey, mostly relied on “server logs to track user behavior (45%), followed by dedicated user activity monitoring solutions (33%), and monitoring features natively provided by the business apps (30%).”
It’s increasingly clear that the term UBA has come to mean something quite narrow: analysis of user behavior on networks and other systems, and the application of advanced analytics to detect anomalies and malicious behaviors in those systems.
The problem with this approach is that it discovers little that is truly useful or actionable about the insider threat, such as intent or possible contributing stressors, and instead focuses almost exclusively on the individual’s device and network activity. No wonder the term ‘alert overload’ has become so prevalent.
At Haystax we take an altogether different approach to analyzing workplace risk, one that is person-centric rather than device- or network-centric. The ‘whole-person’ Bayesian model embedded in our Haystax Analytics Platform contains hundreds of nodes reflecting diverse experts’ judgments as to the most likely indicators of trustworthiness, or its absence.
The model was developed to detect individuals who show an inclination to commit or abet a variety of malicious insider acts, including: leaving a firm or agency with stolen files or selling the information illegally; committing fraud; sabotaging an organization’s reputation, IT systems or facilities; and committing acts of workplace violence or self-harm. It also can identify indicators of willful negligence (rule flouting, careless attitudes to security, etc.) and unwitting or accidental behavior (human error, fatigue, substance abuse, etc.) that could jeopardize an organization’s security.
Just as important as the model is the diversity of the information our customers apply as evidence to the model, allowing it to ‘reason’ and continuously update its ‘beliefs’ about a person’s trustworthiness. Our data connectors ingest information from a broad and diverse array of sources, including financial, professional, legal and personal data originating from an organization’s own internal sources and from publicly available or third-party sources. Such data can include travel and expense records, access badge logs, financial and legal records, performance reviews and much more.
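To make the mechanics concrete, here is a minimal sketch of Bayesian belief updating over a single binary hypothesis (“elevated insider risk”). It is an illustrative toy, not the Haystax whole-person model, which is a far larger expert-elicited Bayesian network with hundreds of nodes; the indicator names, base rate and likelihood values below are entirely hypothetical.

```python
# Toy illustration of Bayesian belief updating from diverse evidence.
# All indicator names and probabilities are hypothetical placeholders.

PRIOR_RISK = 0.01  # assumed base rate of elevated insider risk in a workforce

# For each hypothetical indicator:
# (P(indicator observed | elevated risk), P(indicator observed | no elevated risk))
INDICATORS = {
    "unexplained_financial_stress": (0.30, 0.05),
    "policy_violation_on_record":   (0.25, 0.08),
    "after_hours_network_anomaly":  (0.40, 0.20),
}

def update_belief(prior: float, observed: list[str]) -> float:
    """Return the posterior probability of elevated risk after applying the
    observed indicators, assuming conditional independence (a naive Bayes toy)."""
    odds = prior / (1.0 - prior)
    for name in observed:
        p_given_risk, p_given_benign = INDICATORS[name]
        odds *= p_given_risk / p_given_benign  # multiply by the likelihood ratio
    return odds / (1.0 + odds)

if __name__ == "__main__":
    posterior = update_belief(PRIOR_RISK, ["unexplained_financial_stress",
                                           "after_hours_network_anomaly"])
    print(f"Posterior probability of elevated risk: {posterior:.3f}")
```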
And, yes, the Haystax platform analyzes network and device data too, but it treats the resulting machine-learned anomaly signals as just one of many indications of insider risk, so that analysts are not overwhelmed with false-positive alerts. In the real world, there are plenty of legitimate reasons a person may be working at odd hours, printing large files or downloading sensitive data onto a thumb drive.
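Continuing the hypothetical sketch above, a lone network anomaly barely moves the belief, while the same anomaly corroborated by non-technical indicators raises it substantially:

```python
# A lone network anomaly barely moves the belief...
print(update_belief(PRIOR_RISK, ["after_hours_network_anomaly"]))        # ~0.020
# ...but corroborating non-technical indicators raise it substantially.
print(update_belief(PRIOR_RISK, ["after_hours_network_anomaly",
                                 "unexplained_financial_stress",
                                 "policy_violation_on_record"]))         # ~0.275
```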
Current technologies behind user behavior analytics have not lived up to their early promise, and it’s unfortunate the term has come to be associated with its lowest common denominator.
By contrast, a person-centric approach that predictively analyzes real human behaviors, attitudes and stressors, leveraging previously neglected data sources, is the only way to detect your highest-priority insiders before they do harm. After all, you can’t investigate or fire a device.
# # #
Note: Learn more about the latest insider threat challenges and industry best practices by reading the full Haystax Insider Threat Report-2019, downloadable from our Resources page.