What if you could instinctively know whom to trust within your organization? Better yet, what if you could automatically reassess a person’s level of trustworthiness, day by day and month by month? If you’re responsible for insider threat prevention at one of the many enterprises that deal in sensitive data or operations, chances are you would leap at the opportunity to gain that level of insight.
Individuals often engage in personal behaviors that can lead to a lapse in professional judgment or discretion. Knowing when that shift occurs is critical, since virtually every day brings news of a fresh data breach, intellectual property theft or other malicious event instigated or abetted by a supposedly trusted insider. Yet most government and private organizations still operate on the principle that ‘once you’re in, you’re in for good,’ even when the background and reference checks on which a clearance was based are out of date.
I wrote years ago about a concept I now call Continuous Trustworthiness: the idea that, in a world of growing asymmetric threats, an organization has not just a right but an obligation to systematically and regularly reevaluate the trustworthiness of employees and contractors involved with its most sensitive operations. It’s something I continue to believe is desperately needed today, not just for national security but for many other aspects of our personal and business lives.
For example, what if your financial advisor were going through a personal bankruptcy, or had his driver’s license suspended after multiple DUIs? Would you trust him with your life savings? And how would you even know if he was engaged in high-risk behaviors in his personal life? This is not an idle fiction: a 2014 Wall Street Journal article identified 1,600 brokers with bankruptcy filings or criminal charges that weren’t publicly reported. Their clients had no way of knowing.
Continuous Trustworthiness, to my mind, is a data-informed, analytical way to dynamically prioritize (and reprioritize) the risk a person’s actions pose to an enterprise. It requires a mathematical model with predetermined thresholds for what trustworthy behaviors and characteristics, or threatening ones, look like. Relevant data can then be collected and applied to the model automatically, so that significant issues like a felony arrest are flagged, or so that deviations from a person’s normal pattern of life are detected. Those deviations allow for early warning and prevention, perhaps even enabling a business to offer help to an employee going through difficulties.
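To make this concrete, here is a minimal sketch of such a model in Python. The indicator names, weights, and thresholds are illustrative assumptions made for the example, not a validated scoring scheme:

```python
# Minimal sketch of a threshold-based trustworthiness model.
# All indicator names, weights, and thresholds are illustrative
# assumptions, not a validated scoring scheme.

INDICATOR_WEIGHTS = {
    "felony_arrest": 0.9,           # significant issue: near-immediate flag
    "bankruptcy_filing": 0.5,
    "policy_violation": 0.3,
    "after_hours_badge_scan": 0.1,
}

REVIEW_THRESHOLD = 0.4  # score that triggers an analyst review
ALERT_THRESHOLD = 0.8   # score that triggers immediate escalation

def risk_score(observed_indicators):
    """Combine observed indicators into a single score capped at 1.0."""
    raw = sum(INDICATOR_WEIGHTS.get(name, 0.0) for name in observed_indicators)
    return min(raw, 1.0)

def assess(observed_indicators, baseline_score):
    """Classify current risk and flag deviation from the person's baseline."""
    score = risk_score(observed_indicators)
    if score >= ALERT_THRESHOLD:
        return score, "alert"
    # A sharp rise above the person's normal pattern of life is itself
    # an early-warning signal, even below the absolute review threshold.
    if score >= REVIEW_THRESHOLD or (score - baseline_score) > 0.2:
        return score, "review"
    return score, "normal"

if __name__ == "__main__":
    # Someone whose pattern of life has shifted from a quiet baseline.
    score, status = assess(["bankruptcy_filing", "after_hours_badge_scan"],
                           baseline_score=0.1)
    print(f"score={score:.2f} status={status}")  # score=0.60 status=review
```

The point is not the particular arithmetic but that the model is explicit: thresholds are set in advance, and both absolute issues and relative deviations can trigger a response.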
Those data inputs can take many forms, too. Putting aside for a moment the controversies over acceptable use of social media data in employment decisions and the like, organizations can quickly learn about many kinds of risk-indicating behavior by looking at bankruptcy, divorce, arrest and other public records, not to mention their own internal data repositories like HR files, performance reviews and even badge scans. Much of the data is free or available for pennies, and storing it is cheap. The problem is that the data typically comes in a form only humans can read, not one that can be fed into algorithms or models. As more data comes in, more humans are needed to analyze it; everyone quickly gets overwhelmed and paralysis ensues. What’s needed is a mechanism to automate the process and deploy it at machine – not human – scale.
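As a rough illustration of that normalization step, the sketch below maps a raw, human-readable record into a common event structure a model could consume. The source names and record fields are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical normalized event that every data source maps into, so that
# court records, HR files and badge scans can all feed the same models.
@dataclass
class RiskEvent:
    person_id: str
    indicator: str    # e.g. "bankruptcy_filing", "after_hours_badge_scan"
    observed_on: date
    source: str       # e.g. "county_court_records", "hr_system"

def normalize_court_record(record: dict) -> RiskEvent:
    """Map one raw public-record entry (hypothetical fields) to a RiskEvent."""
    return RiskEvent(
        person_id=record["subject_id"],
        indicator=record["filing_type"].lower().replace(" ", "_"),
        observed_on=date.fromisoformat(record["filing_date"]),
        source="county_court_records",
    )

if __name__ == "__main__":
    raw = {"subject_id": "emp-1042",
           "filing_type": "Bankruptcy Filing",
           "filing_date": "2016-03-14"}
    print(normalize_court_record(raw))
```

Once records from every source land in a shape like this, adding another data feed means writing one more small adapter rather than hiring another room of analysts.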
Fortunately, the technology for this exists today. Financial services companies automatically alert customers to potentially fraudulent activity on their credit cards; why do government agencies have no tool to instantly detect when a person with a Top Secret clearance has purchased a plane ticket to China or Russia without advance disclosure? Uber’s remarkable app not only automates the call for a car and the payment, it also continuously tracks every driver’s and customer’s rating; why do traditional taxi companies evaluate their drivers only once (if that), and why don’t the drivers have a way of knowing whether the latest passenger poses a threat?
So we can in fact automate the risk analysis of people in the same way credit card companies and car-hailing apps have. It is mainly a matter of using algorithms and models, tuned to the unique characteristics of a position of trust, and feeding them appropriate data to provide automated indications of increased risk. The accompanying analytics would automate a process currently being done by roomfuls of overworked analysts – or not being done at all.
In practical applications of Continuous Trustworthiness, school staff would have different models than financial advisors, and both of those models would differ from the Top Secret-cleared analyst’s. What these models have in common is that they describe the characteristics of trust and allow users to apply data and continuously evaluate an individual’s risk. The kinds of data may also differ by position: ‘private’ data might be used for holders of Top Secret clearances, for example, but perhaps not for those at the Secret level.
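One plausible way to express those role-specific models is as configuration, with each position of trust carrying its own indicators, weights and permitted data sources. Everything below, from the role names to the weights, is an assumption made for the sketch:

```python
# Illustrative role-specific model configurations. The indicator sets,
# weights and permitted data sources are assumptions, not real policy.
ROLE_MODELS = {
    "school_staff": {
        "data_sources": ["public_records", "hr_files"],
        "weights": {"violent_offense": 0.9, "dui_arrest": 0.4},
    },
    "financial_advisor": {
        "data_sources": ["public_records", "hr_files", "regulatory_filings"],
        "weights": {"bankruptcy_filing": 0.7, "fraud_charge": 0.9},
    },
    "ts_cleared_analyst": {
        # Top Secret clearance holders may be subject to 'private' data
        # sources that other positions would not be.
        "data_sources": ["public_records", "hr_files",
                         "financial_disclosures", "foreign_travel_reports"],
        "weights": {"undisclosed_foreign_travel": 0.8, "felony_arrest": 0.9},
    },
}

def score_for_role(role: str, observed: list[str]) -> float:
    """Score observed indicators against the model for a given role."""
    weights = ROLE_MODELS[role]["weights"]
    return min(sum(weights.get(name, 0.0) for name in observed), 1.0)

if __name__ == "__main__":
    print(score_for_role("financial_advisor", ["bankruptcy_filing"]))  # 0.7
```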
To be sure, an organization could use Continuous Trustworthiness to grant an initial clearance. But the real value is in using it every day, the way Uber does: perhaps to prevent an analyst whose pattern of life indicates a trend toward higher risk from exploring network drives containing sensitive data she wouldn’t normally look at. Or a different Continuous Trustworthiness model might limit the trading access of a trader who has received three speeding tickets in the last two months, indicating risk-seeking behavior and poor judgment in his personal life that may affect the decisions he makes at work.
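Here is one way such a day-to-day gate might look, assuming a history of daily risk scores is already being maintained for each person. Both cutoffs are, again, illustrative:

```python
def trend(score_history: list[float]) -> float:
    """Average day-over-day change in risk score (positive means rising)."""
    deltas = [b - a for a, b in zip(score_history, score_history[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

def may_access_sensitive_share(score_history: list[float]) -> bool:
    """Allow access to sensitive network drives only when the current score
    is low AND the pattern of life is not trending toward higher risk.
    The 0.4 and 0.05 cutoffs are illustrative assumptions."""
    current = score_history[-1]
    return current < 0.4 and trend(score_history) <= 0.05

if __name__ == "__main__":
    # An analyst whose daily scores have crept upward over the past week.
    print(may_access_sensitive_share([0.10, 0.15, 0.22, 0.30, 0.38]))  # False
```

The same pattern, with different thresholds attached to different privileges, would cover the trader whose trading limits are tightened.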
It seems that every time there is a major event, we later learn that it was possible to see that the person was a high risk. From the Sandy Hook and Orlando shootings to the latest leaks of sensitive government and corporate data, each of the actors had left a long trail of information indicating that they posed a risk or that they were deviating from their normal pattern of life. I get that forensics is always easier than detection, but the time has come to pay serious attention to detecting and preventing such events.
I certainly don’t want to suggest that we need a surveillance state, where anything and everything anyone does is subject to persistent collection and analysis. In fact, quite the opposite: we must do this while protecting civil rights and liberties and avoiding indiscriminate surveillance. But I believe we can strike a balance that analytically identifies people in positions of trust who may present a risk, whether financially, to the safety of those around them, or to national security. Positions of trust should make some behaviors off-limits, and we should not rely solely on infrequent reviews or self-reporting, which are clearly insufficient for important positions. The technology is readily available to automate these reviews and allow them to be performed far more frequently.
Bryan Ware is CEO of Haystax Technology.