By Kevin Kiernan
A hot topic in the personnel security field these days is the concept of ‘continuous vetting’ — the idea that automated checks of both classified and publicly available data sources should replace periodic reinvestigations for cleared personnel. Compared to the traditional reinvestigation, continuous vetting (CV) offers clear advantages, chiefly lower cost and potentially higher effectiveness.
Many practitioners envision a system that simply replicates the old system of periodic reinvestigations in the virtual world. I propose something altogether different, using an approach that relies on commonly accepted risk-management principles.
As I intend to show, a risk-based approach to CV solves many problems inherent in these ‘virtual reinvestigation’ approaches. Questions such as “How often should I perform checks?” and “How do I decide who to check?” turn out to be easily answered once risk becomes the centerpiece of a CV system.
Survival Analysis and Risk Assessments
The key question in using risk to drive a CV system is: “Can we identify the risky people in our population before they do harm?” If we can, and if our answers are accurate enough, we can determine the amount and type of risk an employee presents before he or she ever commits a bad act. This turns out to be the critical insight for using risk in a CV system.
Haystax uses a probabilistic model of behavioral risk, plus a statistical technique called ‘survival analysis’, to determine how accurately our model assesses risk. In our application, survival analysis amounts to sorting a trusted population into low-, medium- and high-risk cohorts and tracking each cohort’s propensity to commit bad acts over time.
If our model has done its work properly, the high-risk population will be the one that most frequently commits bad acts, followed by medium- and finally low-risk individuals. This pattern is evidence that the tool has correctly identified the risky individuals from the start.
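The cohort-tracking idea can be sketched in a few lines of code. The sketch below assumes each cohort commits incidents at a constant annual rate, so the chance of remaining incident-free follows an exponential survival curve; the rates themselves are invented for illustration, not outputs of the Haystax model.

```python
import math

# Hypothetical annual incident ("hazard") rates for each risk cohort.
# These numbers are illustrative placeholders, not real model output.
HAZARDS = {"low": 0.02, "medium": 0.03, "high": 0.05}

def survival(rate: float, years: float) -> float:
    """Probability of remaining incident-free after `years`,
    assuming a constant hazard rate (exponential survival)."""
    return math.exp(-rate * years)

for cohort, rate in HAZARDS.items():
    # Cumulative probability of a bad act by year t, for t = 1..5.
    curve = [round(1 - survival(rate, t), 3) for t in range(1, 6)]
    print(cohort, curve)
```

If the risk ratings are doing their job, the high-risk curve sits above the medium-risk curve at every point in time, and both sit above the low-risk curve — exactly the ordering described above.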
In operational use, the Haystax model produces exactly this pattern in the data. High-risk individuals tend to commit bad acts at between two and three times the rate of low-risk individuals. Medium-risk people tend to commit between 1.1 and 1.5 times as many bad acts as low-risk individuals. This finding is hugely important, as it allows us to begin to understand the necessary rules of risk-based CV.
The big insight of survival analysis is this: If a group of people is, say, three times as likely to commit an incident over a given interval, organizations need to check its members three times as often to equalize the risk accrued between checks.
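That rule is simple enough to express directly: set each cohort’s check interval inversely proportional to its relative risk. The sketch below assumes a 24-month baseline interval for the low-risk cohort and picks single multipliers from the ranges cited above (3.0× for high risk, 1.3× for medium); both choices are illustrative.

```python
BASE_INTERVAL_MONTHS = 24.0  # assumed interval for the low-risk cohort

# Incident rates relative to the low-risk cohort; the multipliers are
# illustrative values drawn from the ranges reported above.
RELATIVE_RISK = {"low": 1.0, "medium": 1.3, "high": 3.0}

def check_interval(cohort: str) -> float:
    """Months between checks, shortened in proportion to relative risk
    so each cohort accrues roughly equal expected incidents per interval."""
    return BASE_INTERVAL_MONTHS / RELATIVE_RISK[cohort]

for cohort in RELATIVE_RISK:
    print(cohort, round(check_interval(cohort), 1))
```

Under these assumptions, a high-risk individual would be checked every 8 months and a medium-risk individual roughly every 18 months, while the low-risk cohort keeps the 24-month baseline.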
We can define the rules even more precisely, since our tool generates risk assessments corresponding to each of the U.S. government’s 13 adjudicative guidelines for cleared employees. These assessments are essentially a thumbprint for each person’s amount and type of risk (e.g., foreign influence, alcohol abuse, employment misconduct, etc.). A truly advanced CV scheme could use this thumbprint and the results gleaned from the survival analysis to develop a personalized risk mitigation plan for every trusted individual in the system!
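One way to picture such a thumbprint is as a mapping from each adjudicative guideline to a risk score, from which a mitigation plan simply surfaces the guidelines of greatest concern. The guideline names, scores and threshold below are hypothetical placeholders, not the Haystax model’s actual representation.

```python
# Hypothetical risk "thumbprint": one score per adjudicative guideline.
# Names, scores and the 0.5 threshold are illustrative placeholders.
thumbprint = {
    "foreign_influence": 0.72,
    "alcohol_consumption": 0.15,
    "personal_conduct": 0.40,
    "financial_considerations": 0.55,
}

def mitigation_plan(scores: dict, threshold: float = 0.5) -> list:
    """Return the guidelines at or above the threshold, highest risk
    first, as the focus areas of a personalized mitigation plan."""
    flagged = [(g, s) for g, s in scores.items() if s >= threshold]
    return [g for g, s in sorted(flagged, key=lambda gs: -gs[1])]

print(mitigation_plan(thumbprint))
# → ['foreign_influence', 'financial_considerations']
```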
Finally, survival analysis gives us a precise estimate of how often to check the ‘safe’ people by estimating how long an individual’s risk rating remains useful. As one might expect, the predictive accuracy of the tool gets worse the further into the future it is asked to look. People tend to behave differently during different phases of their lives, and a ‘risky’ person who goes long enough without an incident should eventually cease to be considered risky.
In any case, the survival analysis indicates that risk ratings are good predictors of future behaviors for around two years. Therefore, checking the riskiest people every six months and the ‘safe’ people every two years would seem to be a good place to start. Since the riskiest people comprise about 20% of any given population, this insight should reduce the cost of CV by up to 25%.
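The savings arithmetic depends on what you compare against. As one hedged illustration, assume a flat baseline in which everyone is checked once a year; applying the tiered schedule above (20% of the population every six months, the rest every two years) then cuts the number of checks by about a fifth, in the neighborhood of the figure cited.

```python
# Illustrative cost comparison. The flat annual baseline is an
# assumption; the tiered schedule follows the text above.
POPULATION = 1000
RISKY_SHARE = 0.20

def checks_per_year(flat: bool) -> float:
    """Total checks performed per year under each scheme."""
    if flat:
        return POPULATION * 1.0  # everyone checked annually
    risky = POPULATION * RISKY_SHARE * 2.0         # every 6 months
    safe = POPULATION * (1 - RISKY_SHARE) * 0.5    # every 2 years
    return risky + safe

flat = checks_per_year(True)     # 1000.0 checks/year
tiered = checks_per_year(False)  # 400 + 400 = 800.0 checks/year
print(f"savings: {1 - tiered / flat:.0%}")
```

A different baseline (say, semiannual checks for everyone) would yield a much larger saving, which is why the reduction is best stated as “up to” a given percentage.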
One objection tends to lurk in the background of any discussion surrounding data purchases, background checks or other spending: “That’s nice, but what if we can’t afford it?”
This question is perfectly understandable, and it ultimately argues in favor of risk-based CV rather than against it. Using risk ratings as a foundational principle when doing CV reduces more risk per dollar than any other system.
To see why, consider the position of a security expert trying to decide where to spend some extra money, perhaps on more data checks. If that expert has performed checks based on risk assessments, he or she will be indifferent about whether to spend the additional funds on the ‘risky’ group or the ‘safe’ group, since the differing frequencies of the checks will already have accounted for the difference in risk.
This indifference in how to spend additional funds is actually an indication that the pattern of spending is optimal. In our example, the security manager can simply allocate funds in the same proportion as on earlier checks, trusting that the additional spending will have lowered risk across the board.
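The allocation rule itself is trivial to state in code: distribute any extra funds in the same proportions as the existing risk-based budget. The dollar figures below are made up for illustration.

```python
# Hypothetical existing risk-based check budget, in dollars.
current_budget = {"risky": 40_000.0, "safe": 10_000.0}
extra = 5_000.0  # additional funds to allocate

total = sum(current_budget.values())
# Pro-rata allocation: each group's share of the extra money matches
# its share of the current budget.
new_budget = {g: amt + extra * amt / total for g, amt in current_budget.items()}
print(new_budget)  # risky gets 4000 of the extra, safe gets 1000
```

Because the existing proportions already reflect the relative risk of each group, scaling them up leaves the marginal risk reduction per dollar equal across groups — the indifference described above.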
Using risk as the basis for continuous vetting appears to be a complex, even arcane concept at first glance. However, Haystax has found that a risk-based approach to CV simplifies philosophical questions of spending tradeoffs and bureaucratic rule-making into a series of questions with factual answers and clear implications.
If your organization is considering a CV-like system, consider using risk as its cornerstone.
# # #
Kevin Kiernan is Senior Data Scientist at Haystax.