Will GDPR Make Machine Learning Illegal?

Does GDPR require Machine Learning algorithms to explain their output? Probably not, but experts disagree and there is enough ambiguity to keep lawyers busy.



The EU General Data Protection Regulation, known as GDPR, is the most important change in data privacy regulation in the 21st century, and it takes effect very soon, on May 25, 2018.

It will have a significant impact on many aspects of collecting and processing the data of EU citizens, and will affect not only EU companies but also multinationals that operate in the EU.

One possible and significant effect of GDPR on Machine Learning is the "right to explanation".

Some of the articles of GDPR can be interpreted as requiring an explanation of a decision made by a machine learning algorithm when it is applied to a human subject.

UW Prof. Pedro Domingos, a leading AI researcher, started a firestorm with his tweet.

Does GDPR really require an explanation of Machine Learning algorithms?

I note that we should distinguish between
  1. Global explanation: how a Machine Learning algorithm works (which may be very hard for complicated methods like Deep Learning), and
  2. Local explanation: what factors contributed to a particular decision impacting a specific person (easier). There are already algorithms like LIME (Local Interpretable Model-Agnostic Explanations) which can explain the predictions of any machine learning classifier; see the sketch after this list.
    If, for example, a person is declined a mortgage, should she know which factors contributed to the decision? On the one hand, if an algorithm denies you something, you want to know why and have a chance to appeal. On the other hand, enough such explanations could allow the decision boundary to be reverse-engineered, letting potential evildoers game the system. This is very undesirable in many cases (e.g. security applications).
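
For illustration, here is a minimal sketch of a LIME-style local explanation, assuming scikit-learn and the lime package are available; the dataset and classifier are stand-ins for whatever black-box model is making the decision.

```python
# Minimal sketch of a local explanation with LIME (illustrative data/model).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train any black-box classifier; LIME is model-agnostic.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain a single prediction: which features pushed this decision, and by how much.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short list of feature conditions with signed weights, i.e. the factors that pushed this particular prediction up or down, which is the "local explanation" sense above.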


I asked Dr. Sandra Wachter, an EU lawyer and a Research Fellow in Law and Ethics of Big Data, AI & Robotics at Oxford (@SandraWachter5 on Twitter).

She said that GDPR requires data controllers to implement suitable measures to safeguard data subjects' rights, freedoms, and legitimate interests. Such measures should include a way for the data subject to obtain human intervention, express their point of view, and contest the decision.

Her opinion was also that Article 15 implies a more general form of oversight, rather than a right to an explanation of a particular decision.

So a right to explanation in GDPR is not legally binding, but it can be offered voluntarily.

Here are excerpts from Sandra's blog post Towards accountable AI in Europe?, reposted with permission.
AI and its challenges

AI-based systems are often opaque 'black boxes' and are difficult to scrutinise. As more and more of our economic, social and civic interactions - from credit markets and health insurance applications, to recruitment and criminal justice systems - are carried out by algorithms, concerns have been raised about the lack of transparency behind the technology, which leaves individuals with little understanding of how decisions are made about them. We need proper safeguards in place to make sure that the decisions being made about us are actually fair and accurate.

AI and the EU's General Data Protection Regulation

In 2016 the EU General Data Protection Regulation (GDPR), Europe's new data protection framework, was approved. The new regulation will come into force across Europe - and the UK - in 2018. It has been widely and repeatedly claimed that a 'right to explanation' of all decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the new regulation. This 'right to explanation' is viewed as an ideal mechanism to enhance the accountability and transparency of automated algorithmic decision-making.

Such a right would enable people to ask how a specific decision (e.g. being declined insurance or being denied a promotion) was reached.

An explanation can be offered in various ways. There are at least two possible kinds of algorithmic explanation: an explanation of "system functionality" and an explanation of the "rationale" of an individual decision. Explaining the algorithmic methods used to assess creditworthiness or to set interest rates (system functionality) does not have the same quality as an explanation of "how" a certain rate was set or "why" a credit card application was declined.

Together with Turing researchers Dr. Brent Mittelstadt and Prof. Luciano Floridi, we examined this claim. Unfortunately, contrary to what was hoped, our research revealed that the GDPR is likely to grant individuals only information about the existence of automated decision-making and about "system functionality", but no explanation of the rationale of a decision. In fact, the "right to explanation" is mentioned only once in the whole GDPR, in Recital 71, which lacks the legal power to establish stand-alone rights. The purpose of a Recital is to provide guidance on how to interpret the operational part of a regulatory framework if there is ambiguity. But in our research we found no ambiguity regarding the minimum requirements that would require further clarification.

Placing the "right to explanation" in a Recital, together with the fact that the European Parliament's recommendation to make this right legally binding was not adopted, suggests that European legislators did not want to grant this idea the same legal status as the other safeguards in the legally binding text of Art 22 GDPR. Of course, that does not mean that data controllers could not voluntarily decide to offer explanations, or that future jurisprudence or law built on this Recital could not create such a right in the future.


For more details, listen to a podcast with Sandra Wachter on Algorithms, Explanations, and the GDPR.

One possible approach to explanation is counterfactuals, for example:
You were denied a loan because your annual income was £30,000. If your income had been £45,000, you would have been offered a loan.


The paper Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, by Sandra Wachter, Brent Mittelstadt, and Chris Russell, shows how we can give meaningful explanations to people even when highly complex systems are used, without the need to understand the internal logic of the algorithm. The use of counterfactuals is also less likely to infringe trade secrets. See also an interview about counterfactuals, and the sketch below.
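
To make the idea concrete, here is a hedged one-feature sketch in Python: a brute-force search for the smallest income increase that flips a toy loan model's decision. The training data, the model, and the counterfactual_income helper are illustrative assumptions, not the paper's actual method, which minimizes a distance-penalized loss over all features.

```python
# Toy counterfactual search: smallest change to one feature (income)
# that flips the model's decision. All data and thresholds are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [annual income in £1000s], label 1 = loan approved.
X = np.array([[20], [25], [30], [35], [40], [45], [50], [55]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model = LogisticRegression().fit(X, y)

def counterfactual_income(income, step=0.5, max_income=200):
    """Grid-search the smallest income at which the loan would be approved."""
    x = float(income)
    while x <= max_income:
        if model.predict([[x]])[0] == 1:
            return x
        x += step
    return None  # no counterfactual found within the search range

income = 30
flip = counterfactual_income(income)
print(f"Denied at £{income}k; approved if income were £{flip}k "
      f"(change of £{flip - income}k).")
```

Run on the example above, this produces a statement of the same form as the quote: denied at £30k, approved at roughly £38k under this toy model, with no need to expose the model's internals.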

However, Sandra's opinion that there is no right to explanation in GDPR is not fully shared by other experts.

Andrew D. Selbst and Julia Powles write in Meaningful information and the right to explanation:
There is no single, neat statutory provision labelled the 'right to explanation' in Europe's new General Data Protection Regulation (GDPR). But nor is such a right illusory.

Articles 13-15 provide rights to 'meaningful information about the logic involved' in automated decisions. This is a right to explanation, whether one uses the phrase or not.


Andrew Burt, in Is there a 'right to explanation' for machine learning in the GDPR?, writes:
As in other areas, the GDPR is less than clear. And as a result, the idea that the GDPR mandates a "right to explanation" from machine learning models - meaning that those significantly affected by such models are due an accounting of how the model made a particular decision - has become a controversial subject. Some scholars, for example, have spoken out vehemently against the mere possibility that such a right exists. Others, such as the UK's own Information Commissioner's Office, seem to think the right is pretty clearly self-evident.

Ultimately, I have some good news for lawyers and privacy professionals . . . and some potentially bad news for data scientists.



It seems that there is sufficient ambiguity in GDPR to keep lawyers very busy.

Stay tuned!
