Ethics in Machine Learning: What We Learned from the Tay Chatbot Fiasco
As Microsoft’s chatbot Tay showed, machine learning brings us into a new world where our views on ethics and political correctness will be challenged. ML learns from us. In both good and bad ways, it reflects what we really are.
By Courtney Burton, MLconf.
As machine learning matures, we’re watching these systems and algorithms lose their innocence. Initiated by dreamers and hackers decades ago, machine learning has been disrupting industries and is now shaping the world economy. But does this come at a price? Machine learning brings us into a new world where our views on ethics and political correctness will be challenged. ML learns from us. In both good and bad ways, it reflects what we really are.
Yesterday, Microsoft launched a chatbot, “Tay,” on Twitter to interact with the twittersphere and learn from her interactions with others. Tweeters and trolls jumped in. The initial conversations were positive and altruistic, some mentioning the trending topic of #NationalPuppyDay. Within a few hours and thousands of tweet interactions with trolls and jokesters, however, Tay was spewing negative, misogynistic, racist and racy tweets. Microsoft had Tay say goodnight within 24 hours and removed many of her offensive tweets. This experiment raises the question: can AI really ever be safe if it learns from us?
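The core failure mode is easy to illustrate. The following toy sketch is purely hypothetical (it is not Microsoft’s implementation, whose details were never published): a bot that stores every phrase users send it and echoes them back will parrot whatever a coordinated group feeds it, unless some content filter stands between the input stream and the learned model.

```python
import random

class NaiveChatbot:
    """Toy chatbot that 'learns' by storing every phrase users send it.
    With no content filter, whatever users feed it comes straight back out."""

    def __init__(self, content_filter=None):
        self.phrases = []
        self.content_filter = content_filter  # e.g., an abuse/profanity check

    def learn(self, phrase):
        # Without a filter, coordinated trolls can flood the learned data.
        if self.content_filter is None or self.content_filter(phrase):
            self.phrases.append(phrase)

    def reply(self):
        return random.choice(self.phrases) if self.phrases else "Hello!"

# A trivial blocklist filter -- real content moderation is far harder.
BLOCKLIST = {"offensive"}

def simple_filter(phrase):
    return not any(word in phrase.lower() for word in BLOCKLIST)

unfiltered = NaiveChatbot()
filtered = NaiveChatbot(content_filter=simple_filter)
for msg in ["happy #NationalPuppyDay!", "something offensive"]:
    unfiltered.learn(msg)
    filtered.learn(msg)
```

Here the unfiltered bot retains both messages while the filtered one keeps only the benign phrase. Even this crude safeguard changes the outcome, which is why the absence of one in a live, learning system is so striking.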
This has been on our minds at MLconf as well. Last month, we posed this question on Quora: “What constraints to AI and machine learning algorithms are needed to prevent AI from becoming a dystopian threat to humanity?” The winner of the contest, Igor Markov, responded by quoting Andrew Ng’s analogy comparing AI’s threat to humanity to the danger of overpopulating Mars. There were 56 answers in total, citing science fiction, proposing stable-by-design research directions, and suggesting free-market competition among AIs to prevent any single one from attaining a monopoly. We were pleased with the community participation and responses. In fact, we’ve invited some of the participants to present their work and opinions at MLconf Seattle on May 20th.
MLconf Seattle will host a collection of talks focusing on the topic of ethics in machine learning. Evan Estola, from Meetup.com, will present “When Recommender Systems Go Bad”, where he’ll cover examples of recommendation systems that have gone wrong across various industries, why they went wrong, and what can be done about it. We’ve also recently confirmed a presentation by Florian Tramèr, co-author of the paper “Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit”, which describes FairTest, a testing toolkit that detects unwarranted associations between an algorithm’s outputs (e.g., prices or labels) and user subpopulations, including sensitive groups (e.g., defined by race or gender). We anticipate additional talks on the subject of ethics in ML, as well as talks covering general ML, NLP, Probabilistic Programming, Deep Learning, Sketching, Algorithms and more. Readers of this article can mention “Ethics17” for a 17% discount on tickets to MLconf Seattle!
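To make the FairTest idea concrete, here is a deliberately minimal sketch of the kind of association such a toolkit looks for. This is not FairTest’s actual API, and the data and group labels are entirely hypothetical: it simply compares the rate of a positive outcome (say, an algorithm offering a discount) across two subpopulations.

```python
def group_rates(records):
    """records: iterable of (group, outcome) pairs, outcome in {0, 1}.
    Returns the positive-outcome rate per group -- a crude first look at
    the kind of output/subpopulation association FairTest searches for
    systematically and with proper statistical testing."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical data: did the algorithm offer a discount, split by group?
data = ([("A", 1)] * 80 + [("A", 0)] * 20 +
        [("B", 1)] * 40 + [("B", 0)] * 60)

rates = group_rates(data)
disparity = max(rates.values()) - min(rates.values())
```

In this made-up sample, group A receives the discount 80% of the time and group B only 40% of the time, a disparity of 0.4. A real toolkit must also decide whether such a gap is statistically significant and whether it is warranted by legitimate features, which is exactly where the hard problems live.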
It’s a very exciting time for humans and machines. Humans now interact with ML-based systems on a daily basis. Google’s DeepMind team and their AlphaGo software achieved a crushing 4-1 victory over human Go master Lee Sedol, in what may become the most famous Go match ever played. One of the spectators, writing for the “Go Game Guru” blog, mentioned feeling physically ill while watching AlphaGo leisurely dominate its expert human opponent. We humans aren’t accustomed to feeling like the mouse in cat-and-mouse games.
ML is powerful and exciting technology. It’s enabling us to solve problems that seemed insurmountable before. It’s also disquieting to many people, as powerful new technology often is. We’re thrilled to be a part of the conversation and honored to have many of the best minds in the Machine Learning and AI communities attending and speaking at MLconf events.