Detecting insider threats before they cause harm is a daunting challenge. In response, the US government has moved aggressively over the last five years to deploy tools on classified networks to detect potential breaches in progress, on the calculation that the loss of classified information is its most important concern.
But do these countermeasures work? The available evidence suggests that the focus on network-based tools and endpoint sensors as a means of preserving sensitive information has not succeeded. Breaches are increasing significantly in both size and number; the recently revealed classified-information breach by an NSA contractor is just the latest example. Moreover, one study showed that over 50% of the breach alerts that do come in are false positives, taxing already overwhelmed analyst teams across multiple agencies.
To be fair, it is not possible to know how many breaches have been stopped by currently deployed tools, because most of this information has not been made public. Nonetheless, things do not appear to be moving in the right direction.
There could be many reasons for the failure. I’d like to focus on the one I believe is most important: the lack of a true risk-based approach to insider threat detection and deterrence. This lack of focus on risk is evident in government policy and guidance, and it has led to unintended consequences, most notably a bias toward deploying ineffective network alerting systems.
Where is the risk-based approach?
On October 7, 2011, President Obama signed Executive Order 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information. This was the first major attempt to address security gaps in the wake of the Chelsea Manning WikiLeaks breach. While the primary focus was on securing classified networks to prevent further breaches like Manning’s, the EO also set in motion a broader insider threat effort by standing up the National Insider Threat Task Force (NITTF). The President mandated that the NITTF establish a government-wide insider threat program, taking into account “risk levels.” What the EO meant by risk levels was not explained.
In November 2012, President Obama signed the National Insider Threat Policy and Minimum Standards for Executive Branch Insider Threat Programs. This policy set specific standards and guidelines for executive agencies and took a broader view of insider threat, requiring that agencies establish ‘hubs’ to analyze data from sources such as HR, counterintelligence, personnel security and law enforcement, thus going beyond a singular focus on classified network protection and monitoring. Noticeably absent, however, was any discussion of risk. There has been no further guidance on what the EO meant by risk levels, and the instructions to the NITTF, laid out in Section C, do not mention them either.
The NITTF states that its assessments of government agency insider threat programs take into account differing levels of agency risk, which it implies are based on factors such as the size of the cleared population, the extent of access to classified computer systems and the amount of classified information held. In other words, if an agency doesn’t have many cleared personnel working on classified networks, it may not be held to the same strict standards. But this is different from assessing intent. For example, the US president has access to the most sensitive data; should he be considered a greater risk than someone who may be stealing drugs from evidence bins at the Drug Enforcement Administration? The NITTF definition of ‘risk-based’ appears to focus primarily on access. It does not address the other side of the coin, namely intent.
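One common way to frame that missing half is to treat risk as the product of likelihood and impact, where intent drives likelihood and access drives impact. The following is a minimal illustrative sketch; the function, inputs and weighting are hypothetical, not drawn from any NITTF guidance:

    # Illustrative only: a toy risk framing in which access determines
    # impact and behavioral/intent indicators determine likelihood.
    # Inputs are assumed to be normalized to the range [0, 1].
    def insider_risk(access_level, intent_indicators):
        likelihood = min(1.0, sum(intent_indicators))  # crude aggregation of intent signals
        impact = access_level                          # more access means larger potential loss
        return likelihood * impact

    # Broad access with no intent signals scores lower than limited
    # access combined with several concerning behaviors.
    print(round(insider_risk(1.0, []), 2))          # 0.0
    print(round(insider_risk(0.2, [0.4, 0.3]), 2))  # 0.14

The point is not the arithmetic but the shape: an access-only assessment would rank the first user as the greater risk, while a framing that includes intent ranks the second higher.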
In theory, background investigations are supposed to gauge a person’s possible intent. Investigations aim to determine how vulnerable a person is to coercion, how trustworthy they are, and so on. But background investigations are ‘pass-fail,’ not graded. A person newly cleared to use a classified network is not considered riskier than an existing user, and is not monitored differently.
Continuous evaluation, which in essence means constantly updating the investigative data on a cleared person, is now being rolled out in some federal agencies. This will give investigators fresher information, but in the end a person is either cleared to use a system or is not. There are no shades of gray. The security clearance process, therefore, is not risk-based.
We all know that banks make risk-based decisions when lending money; this is why interest rates on credit cards differ from customer to customer. If financial institutions followed the approach the government uses to determine access to classified information, they would lend money at the same rate to anyone who passed, rather than failed, a credit check. Banks would have to either raise rates high enough across the board to cover the cost of defaults, effectively choking off borrowing by good debtors, or keep rates low across the board and absorb a significantly higher cost of default. Either way, costs would be far higher than in a world where risk-based financing is the norm.
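To make the trade-off concrete, here is a toy calculation with invented figures: a $10,000 one-year loan, two borrowers with different default probabilities, and a single flat rate versus rates matched to risk:

    # Invented figures, for illustration only: a lender's expected
    # profit on a $10,000 one-year loan, assuming the full principal
    # is lost on default (no recovery).
    LOAN = 10_000
    borrowers = {"low_risk": 0.02, "high_risk": 0.20}  # probability of default

    def expected_profit(rate, p_default):
        return (1 - p_default) * LOAN * rate - p_default * LOAN

    # Flat pricing: one rate for everyone who passes the credit check.
    for name, p in borrowers.items():
        print(f"flat 12% to {name}: {expected_profit(0.12, p):+,.0f}")
    # flat 12% to low_risk: +976    (the good borrower overpays)
    # flat 12% to high_risk: -1,040 (the lender still loses money)

    # Risk-based pricing: each borrower gets the rate their risk implies.
    for name, p in borrowers.items():
        print(f"break-even rate for {name}: {p / (1 - p):.1%}")
    # break-even rate for low_risk: 2.0%
    # break-even rate for high_risk: 25.0%

No single flat rate serves both borrowers well, and the same logic applies to treating every cleared user as an identical risk.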
Bias towards network monitoring
NITTF assessment criteria focus on three areas: training, process and procedure documentation, and network monitoring. Training is a fairly low bar: establish a training program in the areas outlined as important, track which employees attend, report this to the NITTF and you have complied with the requirement. Documentation of things like retention policies is valuable, but most organizations should have completed it years ago, as required by EO 13587.
The only assessment mandate that is both tangible and difficult is the installation of “User Activity Monitoring on classified networks and procedures for protecting UAM measures and results.” Many organizations therefore treat deploying UAM as the long pole in the tent, believing that if they succeed in doing so they are most of the way to compliance.
Notice that there is nothing about risk in the assessment criteria.
This emphasis, combined with growing breaches and heightened concern about cybersecurity, has substantially increased funding for cybersecurity projects. The money flowing into the field has naturally attracted vendors offering ever more sophisticated alert systems. The result is that cybersecurity and network monitoring have become practically synonymous with insider threat: Google “insider threat tools” and you get long lists of cybersecurity alert systems.
What can be done?
The government is moving in the wrong direction on insider threat. As we have noted before, policy-makers need to stop and rethink their overall approach.
The best way to begin is to understand what might cause someone to become a threat in the first place, and then build a model of insider risk in the same way the government builds risk models to predict terrorist attacks or the spread of epidemics. Such a model would prioritize the people most likely to commit an incident, which in turn would inform the optimal monitoring and prevention strategies. The current focus on sensors and alerts would give way to a far more effective focus on overall risk scores and score trends, as sketched below.
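As one hypothetical illustration of what scoring on trends could look like (the names, scores and weighting here are invented, not a description of any deployed system):

    # Hypothetical sketch: track each person's risk score over time and
    # prioritize monitoring by current score plus recent trend, rather
    # than by raw alert volume.
    from collections import defaultdict

    history = defaultdict(list)

    def record_score(person, score):
        history[person].append(score)

    def trend(scores, window=3):
        # Average change per period over the most recent window.
        recent = scores[-window:]
        if len(recent) < 2:
            return 0.0
        return (recent[-1] - recent[0]) / (len(recent) - 1)

    def prioritize():
        # Rank people by current score plus recent trend.
        ranked = [(p, s[-1], trend(s)) for p, s in history.items()]
        return sorted(ranked, key=lambda r: r[1] + r[2], reverse=True)

    # A steady high scorer can matter less than a lower but rising one.
    for s in (0.70, 0.70, 0.70):
        record_score("user_a", s)
    for s in (0.30, 0.50, 0.72):
        record_score("user_b", s)
    for person, score, delta in prioritize():
        print(f"{person}: score={score:.2f} trend={delta:+.2f}")
    # user_b: score=0.72 trend=+0.21
    # user_a: score=0.70 trend=+0.00

The design point is that a trajectory carries information an isolated alert does not: the user whose score is climbing surfaces first, even before their absolute score is the highest.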
Tom Read is Vice President for Security Analytics at Haystax Technology.