Security Analytics and the GDPR’s ‘Right to Explanation’

Is the European Union’s General Data Protection Regulation (GDPR) a game-changer for security analytics? The regulation takes effect on May 25, 2018, and its 99 articles and 173 recitals raise as many questions as they answer. Specifically, how are multinational companies that rely heavily on analytic software in their enterprise security and insider-threat mitigation programs ensuring they will be in compliance? The answer is that many are — or should be — making major adjustments to the types of software solutions they use to analyze personal data.

The GDPR is designed to strengthen security and privacy protections for data on the citizens of all 28 EU member states, including data held outside the EU by companies that count its citizens among their employees or customers. (Several non-EU countries are also adopting the GDPR.) This is the EU’s first significant regulatory refresh since its 1995 data protection directive, and the implications are profound.

Of particular relevance to the corporate security community is a new ‘Right to Explanation’ accorded to all EU citizens who are subject to “automated decision-making” — that is, decisions made solely by software algorithms. (Other GDPR requirements relating to data processing and storage, data mapping and access, data breaches, cross-border data transfers and the like are beyond the scope of this post.)

More than one GDPR provision bears on the right to explanation, so a brief summary is in order:

- Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them.
- Articles 13, 14 and 15 require that data subjects be given “meaningful information about the logic involved” in such automated decision-making, along with its significance and envisaged consequences.
- Recital 71, the non-binding interpretive text accompanying the articles, goes further and refers to the right “to obtain an explanation of the decision reached” after such processing.

I have argued for years that users of purely data-driven security analytics solutions are ill-served by those systems’ utter inability to explain why a particular decision was made. I took this position not as a response to the pending arrival of the GDPR, but because it’s simply good practice for company units that are engaged in something as consequential as security to take all possible measures to ensure their decision-making approach is analytically sound, transparent, traceable and legally and technically defensible.

In September 2016, for example, I wrote that companies seeking to build a world-class insider threat program should “avoid black boxes” like pure machine-learning solutions and deep neural networks, since their underlying analytic processes and algorithms remain unknown to the user. “Insider threat cases are sensitive personnel and corporate security issues,” I noted, “and any deployed system must provide transparency into what factors raised an individual’s risk profile, and when.” In other words, when a company censures or terminates an individual for malicious, negligent or inadvertent insider behavior, it had better be able to prove its case to company leadership, or in response to an employee appeal or wrongful termination lawsuit.

To be clear, the GDPR does not apply in certain national security and law enforcement scenarios, but that, too, accords with common practice. After all, employees in sensitive national security positions at US government agencies voluntarily waive their rights to personal privacy; company employees are under no such obligation to do so — nor should they be.

Some legal scholars contend that the GDPR’s right-to-explanation provisions have no teeth, noting for example that the words ‘right to explanation’ appear only in an unenforceable recital rather than a binding article. Others argue that the right will apply very narrowly in practice — to “significant” decisions made “solely” by automated means.

Regardless of how these provisions are applied or enforced, the EU’s underlying intent in offering citizens the means to know why they were not hired for a job, or denied a loan or fired for posing a security risk, is more than reasonable. And with fines for non-compliance reaching up to 4 percent of a company’s annual global turnover or up to €20 million (whichever is higher), what corporate leader is going to risk not complying with applicable provisions of the GDPR?
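For a sense of scale, here is a trivial worked example; the turnover figure is hypothetical, and the function captures only the headline cap in Article 83(5), not how supervisory authorities actually calculate fines.

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound on a higher-tier GDPR fine (Article 83(5)):
    the greater of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# Hypothetical multinational with EUR 10 billion in annual turnover:
# 4% of turnover (EUR 400 million) far exceeds the EUR 20 million floor.
print(f"Maximum exposure: EUR {max_gdpr_fine(10_000_000_000):,.0f}")
```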

There are other artificial intelligence-based approaches, beyond machine learning and neural nets, that companies can adopt to provide the necessary transparency, not just for GDPR compliance but for any security analytics realm where a right to explanation is the norm. For example, building probabilistic models (particularly Bayesian belief networks) to represent complex problems like insider threat detection forces the domain experts whose wisdom and judgments are elicited to explain their reasoning up front, in full detail, before any personally identifiable information is applied. Decisions produced by the model-based analytics can then be peeled back, layer by layer, to show the entire chain of reasoning and the influence of each new piece of data on the results.
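To make that concrete, here is a minimal sketch of how such a model exposes its reasoning. It treats insider risk as a small two-state Bayesian model updated one indicator at a time; the indicator names, prior and probabilities are hypothetical stand-ins for expert-elicited judgments, the sketch assumes the indicators are conditionally independent given the hypothesis, and it illustrates the general technique rather than any particular vendor’s model.

```python
# Minimal sketch of a transparent Bayesian risk model for insider-threat
# scoring. All indicator names and probabilities are hypothetical, chosen
# only to show how expert-elicited judgments make every score explainable.

PRIOR_RISK = 0.05  # expert-elicited prior probability that an employee poses a risk

# P(indicator | risk) and P(indicator | benign), elicited from domain
# experts *before* any personally identifiable information is processed.
INDICATORS = {
    "after_hours_access":  {"p_given_risk": 0.60, "p_given_benign": 0.15},
    "large_data_transfer": {"p_given_risk": 0.50, "p_given_benign": 0.05},
    "policy_violation":    {"p_given_risk": 0.40, "p_given_benign": 0.10},
}

def update(prior, p_e_given_risk, p_e_given_benign):
    """One Bayes-rule update: fold a single observed indicator into the posterior."""
    numerator = p_e_given_risk * prior
    denominator = numerator + p_e_given_benign * (1.0 - prior)
    return numerator / denominator

def explain_score(observed):
    """Compute the risk posterior and print the full chain of reasoning."""
    posterior = PRIOR_RISK
    print(f"prior risk: {posterior:.3f}")
    for name in observed:
        cpt = INDICATORS[name]
        before = posterior
        posterior = update(posterior, cpt["p_given_risk"], cpt["p_given_benign"])
        ratio = cpt["p_given_risk"] / cpt["p_given_benign"]
        print(f"  observed {name!r} (likelihood ratio {ratio:.1f}): "
              f"{before:.3f} -> {posterior:.3f}")
    print(f"final risk score: {posterior:.3f}")
    return posterior

if __name__ == "__main__":
    # Each printed line answers 'what raised this individual's risk, and when'.
    explain_score(["after_hours_access", "large_data_transfer"])
```

Every line of output ties a specific observation to a specific change in the score, which is exactly the layer-by-layer audit trail a right to explanation demands.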

More broadly, as AI continues its unrelenting march into more products and services across more sectors of the global economy, protections relating to personal and data privacy have to keep pace. Or maybe it’s the other way around. Which could be one explanation for the recent increase in development activity surrounding so-called Explainable Artificial Intelligence (XAI) systems, which the US Defense Advanced Research Projects Agency claims should “have the ability to explain their rationale.” What citizen, or company, wouldn’t embrace that?

#   #   #

Note: A version of this article first appeared in CSO Online on January 29, 2018.
