The Unintended Consequences of Machine Learning

By Frank Kane, Sundog Education.

These are heady times in machine learning and artificial intelligence: new algorithms, TensorFlow, and clusters of powerful GPUs are combining to produce systems that can do things like beat the world’s best Go player.

But with great power comes great responsibility. Let me tell you a story about the unintended consequences of well-meaning machine learning research.

The year was 2010. I had been working on Amazon’s personalization technology for the past seven years. You know, the recommender systems that sell you stuff you never knew existed based on your past interests and purchases, and that generate a sizable percentage of Amazon’s revenue.

This was the year Eli Pariser first coined the term “filter bubble.” He warned us that too much personalization could leave people in a bubble that just keeps reinforcing the same interests and beliefs. His warning proved prophetic; today there’s much debate about the role filter bubbles in social media have played in polarizing our modern society, and in politics.

In 2010, I completely dismissed Eli’s warnings. There was no way, I thought, that algorithms intended to help you discover new books and music could have that sort of impact on society. But I was wrong. I still manage to sleep at night because Eli’s concern was personalization at Facebook and Google, not at Amazon – but I still feel a little bit complicit, as Amazon was a pioneer in this field.

Fast-forward seven years to 2017, and you see the same hand-wringing over artificial intelligence that we once had over personalization. There’s a lot of excitement as we build self-driving cars that actually work, or outsmart humans at games that experts said machines could never master. But these technologies also have unintended consequences. Set aside the speculation about the “singularity” – today, what would happen if a cyber-terrorist got their hands on the system you’re building? Could your neural network be trained to, say, break into weapons control systems or power grids? Just as I never foresaw algorithms built to find new science-fiction books for me influencing world politics, you might not foresee the technology you’re creating playing a role in a civilization-threatening attack.

Does this mean we should all just go become web developers instead, for the greater good? Well, no. It would also be wrong to stop the advancement of technology. But there are decisions we can make to try to keep the technology we develop in the right hands. Even though my business today is inexpensive online training in machine learning, I’ve considered not producing courses on AI at all, because perhaps that sort of knowledge shouldn’t be easy for bad guys to obtain. Maybe you should think twice about how much information you give out at conferences, or in open source form. That cool graphical tool you’re building that lets anyone set up a neural network on a cluster for any purpose they want? Maybe that’s really not such a great thing to unleash upon the public.

There is a rationalization along the lines of “AI doesn’t kill people, people with AI kill people.” But do you really want to end up second-guessing the role you played in the next big AI-powered cyber-attack in a few years? I’ve been there, in a way, and it’s not fun.

Bio: Frank Kane is the founder of Sundog Education, which has enrolled 100,000 students worldwide in its inexpensive, online video training courses for big data, machine learning, and data science. Prior to Sundog, Frank spent 9 years at Amazon and IMDb in engineering and managerial roles focusing on personalization and recommender system technology.