How Not to Regulate the Data Economy
By Pedro Domingos, University of Washington.
On May 25, the European Union will usher in what it believes will be a new era in the digital economy. Europe’s new legislation, known as the General Data Protection Regulation (or GDPR for short), aims to end today’s digital Wild West with a set of severe restrictions on the use of data. The GDPR will affect not just tech companies but any company that handles customer data — in other words, every company. And it will affect the use of data throughout the world, not just in Europe, because disentangling the two will often not be practical. The GDPR is also being hailed as an example for America and others to follow.
One of the GDPR’s main provisions stipulates that data can only be used for the purpose it was originally gathered for, and with users’ explicit permission. This will destroy much, if not most, of the data’s value, because the most important breakthroughs, from X-rays to penicillin, often come from unforeseen uses of data. The GDPR has exceptions for scientific research, but it still risks becoming a serious obstacle to innovation, precluding many uses of data that benefit customers without affecting their privacy. For instance, tech companies carry out thousands of experiments every day to improve their websites. Requiring users’ explicit permission for each one would render such experiments impossible, unless the permission is given for categories so general as to be mostly vacuous.
The GDPR creates a new right, known as the “right to explanation”, by requiring that every algorithm that makes consequential decisions be able to justify them. This seems reasonable: who doesn’t want an explanation for, say, a cancer diagnosis? But it’s potentially disastrous, because there’s often a tradeoff between accuracy and explainability. I would rather be diagnosed by an algorithm that is 90 percent accurate and gives no explanations than by one that is 80 percent accurate and does give them. Different people will make different choices in different situations. Why should the government impose the same one on everyone?
Another right enshrined in the GDPR is the so-called “right to be forgotten”. Users can ask Google, for example, to suppress information about them from search results. But one person’s right to be forgotten infringes everyone else’s right to remember. Under Europe’s model, it’s Google that decides what should be forgotten. Is that really what we want? And if it’s governments that decide, will they resist the temptation to use it as a pretext for censorship?
Some of the GDPR’s provisions are less controversial. For example, portability — making it easy to move data from one platform to another — is important for the digital economy to prosper. But it’s not clear it needs to be imposed by law at this early stage, and the GDPR fails to distinguish between user data and company data. It’s one thing for me to be able to move my photo collection, or music I’ve bought, from (say) Apple’s cloud to Google’s. It’s another to require that the entire record of an individual’s interactions with a company be movable or erasable on demand. This record is in the first instance the property of the company, and is often where much of its value resides. Better to make it easy for users to keep their own record of the same interactions, which they can use as they see fit.
Underlying all this is the fact that the GDPR is a solution to a largely non-existent problem. When people are asked how much they care about privacy in abstract terms, they naturally say that they care a lot. But when we look at how they make specific tradeoffs between, for example, privacy and personalization, the implied value of privacy is very low. How many of us have been harmed by data sharing, as opposed to data theft? It’s easy to get worked up about Cambridge Analytica’s use of Facebook data, but there’s no evidence that it had any success in influencing voters. The GDPR assumes that companies can’t self-police, but they do, because they’re afraid of customer backlash. Facebook, for one, learned some lessons the hard way long before Cambridge Analytica.
And perversely, given the growing concerns about the concentration of power in the hands of a few large tech companies, the GDPR will be disproportionately burdensome for small ones, making it harder for startups to survive and grow until they can challenge the Googles and Facebooks. Europe, for one, could really use more of them.
Europeans like to think of Americans as cowboys who shoot first and ask questions later. But when it comes to the digital economy, European regulators are the trigger-happy ones, and with the GDPR they have shot themselves in the foot. America should offer its condolences and give GDPR-like regulation a wide berth.
Bio: Pedro Domingos, @pmddomingos, a leading researcher in machine learning, is a Professor of Computer Science at the University of Washington and the author of “The Master Algorithm”, a best-selling book on machine learning.
Original. Reposted with permission.