By Allison Lee
In Part 1 of this post we recommended using context from diverse data sources to glean additional insights into the adverse behavior of Chelsea Manning, the former U.S. Army private who disclosed a trove of classified material to Wikileaks in 2010.
Here are two additional recommendations that would lower the risk of another Manning-style crisis happening in the future.
Policies and procedures within the military rely on leaders to discern risk and, when warranted, dig deeper to assess the situation and take steps to mitigate the risk. Moreover, they are tasked with instilling resiliency so that their troops are ready when called upon to deploy – even when the timing is inconvenient or the people under their command are in crisis or don’t look like them.
Contrary to those ideals, most of Manning’s incidents resulted in a slap on the wrist, or in leaders simply looking the other way. In reality, she should have been removed from the environment, given help or discharged – or even had her clearance revoked. Instead, she was discredited within the service and left to fend for herself.
Since everyone has biases, the only way to make consistent decisions – and to reassess those decisions systematically – is to introduce an analytic tool that augments human reasoning. It shouldn’t replace a decision-maker, but rather offer an unbiased ‘opinion’ for everyone to consider, based on dispassionate assumptions arrived at well in advance of any single incident or situation. This leads us to our third and final recommendation.
Embrace the Algorithm
A system that the organization as a whole can trust – especially one that adds critical context using available data – will give decision-makers a far less biased perspective on how troubled someone really may be. They get the whole picture, plus a chain of reasoning regarding how the risk was assessed and what their response should be, without personal feelings getting in the way.
But don’t take our word for it. Nobel Prize-winning behavioral economist Daniel Kahneman claimed in 2016 that algorithm-based decision-making is the only way to overcome the costs associated with inconsistent decisions. And yet almost four years later, we appear no closer to adopting his recommendations. ‘Algorithm aversion’ is still prevalent, diminishing our chances of removing the dangerous biases that, as we have seen, opened the door to multiple incidents involving Manning and eventually a major breach in secrecy.
One common question is: “Why should we trust a black box to make our decisions?” The answer is that any analytic tool must be developed with the inputs of a multidisciplinary group of subject-matter experts, and its individual analytical components must be based on established policies and guidelines. Moreover, any algorithm used to aid humans in making decisions must transparently reveal the chain of reasoning as to how its results – say, a risk score – were derived. It should also allow for tuning if biases are discovered during testing.
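The transparency requirement can be made concrete with a deliberately simplified sketch. The rule names, weights, and policy references below are hypothetical stand-ins for policy-derived analytic components, not Haystax’s actual scoring logic; the point is only that the score and its chain of reasoning are returned together, so reviewers can audit (and tune) every contribution:

```python
# Illustrative sketch only: a transparent risk scorer that returns its
# chain of reasoning alongside the score. Rule names, weights and policy
# citations are hypothetical assumptions for this example.

RULES = [
    ("unreported_foreign_contact", 25, "Policy 3.2: foreign contacts must be reported"),
    ("repeated_disciplinary_action", 30, "Guideline E: personal conduct"),
    ("financial_distress", 20, "Guideline F: financial considerations"),
]

def score_with_rationale(observed_flags):
    """Return (risk_score, chain_of_reasoning) so the result is auditable."""
    score, rationale = 0, []
    for flag, weight, policy in RULES:
        if flag in observed_flags:
            score += weight
            rationale.append(f"+{weight}: {flag} ({policy})")
    return score, rationale

score, why = score_with_rationale({"repeated_disciplinary_action", "financial_distress"})
print(score)  # 50
for line in why:
    print(line)
```

Because each rule carries an explicit weight and policy citation, a bias discovered during testing can be corrected by adjusting one entry rather than retraining an opaque model.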
One well-known example, the FICO personal credit scoring system, faced similar skepticism when it was first launched but now is the de facto standard for deciding who gets a car loan or a mortgage, and at what interest rate.
Even the U.S. Department of Defense recognizes the fundamental need for algorithmic hygiene and best practices. Its five new criteria for adopting artificial intelligence systems within the department spell out the need for the systems (and the personnel who manage them) to be responsible, equitable, traceable, reliable and governable.
How would an AI-based system have handled Chelsea Manning’s unique circumstances and life events? And what should such an analytical system have flagged for the user? Let’s run known data about Manning through Haystax’s probabilistic model-driven Insider Threat Mitigation Suite and see what it reveals.
From the results above, we can see that the incorporation of data and personal assessments into the model has a large impact on Manning’s risk score. Looking at the model results timeline, we get to see Manning’s incidents as a whole – which puts each of her actions into context – rather than evaluating each event separately.
But how does new data impact Manning’s risk score? Our use of Bayesian inference networks allows us to be transparent about her overall risk score (green line), showing it rising and falling with each addition of new data. Correspondingly, Manning’s clearance-worthiness indicator (blue line) moves inversely to the risk score, and sits in a low clearance-worthiness zone from about 2006 onwards.
Inference networks encode a wide variety of human behaviors and attitudes, with many serving as risk indicators for individuals who should not receive (or keep) their national security clearances. Each data source is processed and applied as evidence to the corresponding nodes within our model. Each node is linked to and influences other nodes, which ultimately impact the top-level [Clearanceworthy] node.
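The mechanics of evidence propagation can be sketched with a toy model. The structure below is a naive-Bayes simplification – indicator nodes assumed conditionally independent given the top-level clearance-worthiness node – and the node names, prior, and likelihoods are all illustrative assumptions, not the actual Haystax network:

```python
# Toy sketch of Bayesian evidence accumulation toward a top-level
# "clearanceworthy" node. All probabilities and node names are assumed
# for illustration; a real inference network has many linked nodes.

PRIOR_CLEARANCEWORTHY = 0.95  # assumed base rate in the cleared population

# (P(indicator | clearanceworthy), P(indicator | not clearanceworthy))
LIKELIHOODS = {
    "emotional_distress":     (0.10, 0.50),
    "workplace_altercation":  (0.02, 0.30),
    "disciplinary_action":    (0.05, 0.40),
}

def posterior_clearanceworthy(evidence):
    """Apply observed indicators as evidence and return P(clearanceworthy)."""
    p_cw, p_not = PRIOR_CLEARANCEWORTHY, 1 - PRIOR_CLEARANCEWORTHY
    for indicator in evidence:
        like_cw, like_not = LIKELIHOODS[indicator]
        p_cw *= like_cw
        p_not *= like_not
    return p_cw / (p_cw + p_not)  # normalize via Bayes' rule

# Evidence arriving over time drives the clearance-worthiness score down
# step by step, mirroring the timeline behavior described above.
timeline, observed = [], []
for event in ["emotional_distress", "workplace_altercation", "disciplinary_action"]:
    observed.append(event)
    timeline.append(round(posterior_clearanceworthy(observed), 4))
print(timeline)  # each new incident lowers the score further
```

Each piece of evidence multiplies into the posterior, so the score falls monotonically as adverse indicators accumulate – and rises again if exculpatory evidence (likelihoods favoring the clearanceworthy state) is applied instead.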
The model’s results change continually as new data is ingested for analysis. From the timeline above, we can see that each life event, as it occurs, has a negative impact on Manning’s eligibility for a clearance.
An algorithm-based analytic approach would have presented an unbiased view of Manning’s situation, giving leaders enough evidence to justify a variety of interventions and remediation actions. The Haystax Insider Threat Mitigation Suite would have flagged her as high-risk as her world deteriorated – as far back as 2006. She would then have had her clearance revoked almost as quickly as she received it.
Such risk-based analysis can be conducted in any organization. In the private sector, security clearances are not the issue, but risks from IP theft, fraud and other commercially adverse behaviors are. For these security scenarios Haystax employs different inference networks that focus on malicious and inadvertent insider threats using data readily available to companies.
Although the data sources and model nodes might differ, the goal for companies is the same: proactively pinpoint the individuals with the highest risk of damaging the organization’s finances, personnel, data or systems in time to avert a major crisis.
# # #
Allison Lee is a Data Analyst at Haystax, a Fishtech Group business unit.
Note: After suffering a series of personal and professional setbacks, a former high-flying corporate executive gradually devolves into an insider threat. Find out how Haystax would have used probabilistic analysis and corporate data to discover him prior to his massive theft of intellectual property, in To Catch an IP Thief.