Interacting with Machine Learning – Here is Why You Should Care
The issue of designing new interactive interfaces with machine learning systems that best serve our needs and help us build and maintain trust is a central issue in AI. Read one researcher's take on this topic.
By Elena Ikonomovska, Nuntio Labs Inc.
For lay readers and experts alike, the topic of machine learning is one that more often than not sparks lengthy, heated discussions, with eyes rolling and heads shaking in disagreement. No wonder why... Mounds of private information are being collected by giant corporations, stored in private data silos, and exposed to us only through creepy and yet insightful automated recommendations and suggestions.
Like it or not, machine learning has entered our lives boldly and is here to stay. In the voice of Siri, in our search engines, in systems that protect us from fraud and intrusions, in applications that understand our emotions, and the list goes on and on… These days, my phone auto-completes almost all information about my new contacts and meetings. I can almost feel a growing discomfort at that thought, and I know I'm not alone.
Sure, we all love that magic, it makes our lives easier, but for some reason there is that feeling of discomfort that we can't get rid of. As a researcher and practitioner, I struggle to pin down why.
I had a conversation with my sister the other day. I listened to her as she raged against AI-driven bots.
“These bots, I don’t like how they treat me.”
“What do you mean, how they treat you?”
My sister went on.
“They simply don’t care whether I want to talk or not. So rude! I can’t get them to stop.”
She is right. These are interfaces built by human developers and designers, and yet we have somehow failed to account for the basic human needs in any interaction: to understand, to be understood, and to know we can trust the other party. Without the help of emotions, without proper explanations, and without socially acceptable patterns of behavior, it is difficult to trust even the most benevolent bot.
The problem is not simple. There are many aspects of it and today I will touch upon only one topic: The issue of designing new interactive interfaces with machine learning systems that best serve our needs and help us build and maintain trust.
This question is addressed by researchers who work at the intersection of human-computer interaction and machine learning. Last week I attended the Human-Computer Interaction conference (CHI 2016) in San Jose and had the chance to discuss this topic with some of the speakers. Here are my main takeaways and favorite ideas.
With the ever-growing presence of sensors and IoT devices in our lives, there is a growing need for citizens to attain a level of data literacy. In the case of smart-city applications, there is the maker or stakeholder, who has good insight into their own problems but lacks an understanding of what machine learning is and how it can help. There is the consumer, who wants their data to be protected and not misused. And there is the expert, who is able to apply machine learning to other people's needs but doesn't understand their goals. As daily interactions with data become ever more commonplace, data literacy becomes a life skill we all need to learn.
Data literacy calls for placing greater importance on building the right interfaces and tools to help us become more comfortable with data: tools that let us easily understand our environment through data and control it in the ways that suit us best. From governmental to environmental data, the list extends through thousands of important examples.
An inspiring approach to this problem is the Physikit system, designed to let users explore and engage with environmental data through physical ambient visualizations. The physical cubes that make up the kit show values and changes in environmental data in four different ways: through light, vibration, movement, and air. This is visual and clear, yet ambient and unobtrusive.
Combined with a sensor platform such as SmartCitizen, Physikit can show how sensor readings for air quality, temperature, and noise levels change over time, and lets families control the environment in their homes in a natural and playful way. For example, one family chose to program the light cube to glow intensely when noise levels in the living room ran high because the kids were playing video games.
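The kind of rule that family programmed can be sketched in a few lines. This is purely illustrative, not Physikit's actual API: the function name and the decibel thresholds are assumptions, and the real system maps sensor channels to cube behaviors through its own configuration interface.

```python
# Hypothetical sketch of a noise-to-light rule like the one in the example.
# The thresholds and function name are illustrative assumptions, not part
# of the actual Physikit or SmartCitizen software.

def light_intensity(noise_db, quiet_db=40.0, loud_db=85.0):
    """Map a noise reading (in dB) to a light intensity in [0.0, 1.0].

    Below quiet_db the cube stays dark; above loud_db it glows at full
    intensity; in between, intensity rises linearly with the noise level.
    """
    if noise_db <= quiet_db:
        return 0.0
    if noise_db >= loud_db:
        return 1.0
    return (noise_db - quiet_db) / (loud_db - quiet_db)
```

The point of the design is that the mapping stays this simple for the end user: one ambient output channel per data stream, with thresholds they can tune themselves.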
Professor Mark Riedl of the Georgia Institute of Technology proposes an interesting idea: machine enculturation. Let's teach machines acceptable social behaviors through storytelling, and let us tell them what is truly important to us.
Now, this is a compelling idea. We all remember the stories our grandmas told us while holding us tight in their embrace. Storytelling is the greatest invention of the human mind for the purpose of gaining understanding. It is how we teach, learn, and communicate.
“Stories have the ability to break down walls, to get us to care, to make us think differently, and in so doing, to ignite the fires of change. Good stories mean something to us: They ground us in truth.” — Isaac
By telling stories we can convey even the most complex ideas to machines. We can teach them which behaviors are socially acceptable and interact with them in a more natural way. The flip side of understanding stories is creating them. What if we go one step further and teach machines how to tell stories? Is that even possible?
This is a problem that falls under research in computational narrative intelligence. Narrative intelligence is the ability to craft, tell, understand, and respond to stories. The goal of this research is to create computational intelligence that can answer questions about stories, generate fictional stories or news articles, respond to stories, and represent the knowledge contained in natural-language narratives. It's an old problem that people have been working on for nearly three decades.
Meet IBM Watson, the computer that can answer questions on just about any topic. Answering questions is considered a way of verifying that the computer has learned something. However, answering questions about a narrative or a story is much more challenging than answering questions about a topic, partly because of the many possible causal and temporal relationships between the events in a story.
Despite the importance of storytelling in the whole human-computer experience, machines still can’t reliably create new stories or understand stories created by humans.
“What does it matter?” you might ask.
I can think of at least a dozen reasons why it matters. For one, interfaces equipped with narrative intelligence would be much more effective at communicating with us and understanding our needs and desires. Understanding how a human might respond to a narrative can help us build machines that won't make us upset or anxious. One example application is a social robot that teaches language to young children.
My favorite application of narrative intelligence is helping us understand the inner workings of artificial intelligence systems. If a procedure can be told as a narrative or a story, the AI can explain or describe in simple words how it came to a particular conclusion or derived a particular result. This would help both researchers and non-experts understand what is going on and feel more comfortable with the results.
Personal Learning Systems
How would you feel if I told you that these days you can have your own personal machine learning system, without having to know much about machine learning, and still be able to command it and fine-tune it by yourself? At the interactive demo session at CHI 2016 I had the chance to play with the GaussBox.
The GaussBox, created by Jules Françoise of Simon Fraser University, is an interactive interface to a learning system that models human gestures by building a set of Hidden Markov Models, each responsible for recognizing one specific gesture.
The GaussBox is designed as a pedagogical tool. For that purpose it shows in detail the inner workings of the machine learning system and lets us fine-tune these models according to our preferences. In particular, the GaussBox shows not only the likelihood of recognizing a gesture in real time, but also the activation of each state of the Hidden Markov Model. For a machine learning expert, or anyone familiar with the HMM methodology, this is very insightful and can serve as a beautiful teaching tool.
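To make "state activations" concrete: what a display like this shows at each time step is, in essence, the forward probabilities of the HMM, i.e. a distribution over hidden states given the observations so far. Here is a minimal sketch of that computation for a toy discrete HMM. This is not GaussBox's actual code (it works with continuous gesture data); the model and its numbers below are invented for illustration.

```python
# Minimal forward-algorithm sketch (not GaussBox's actual implementation):
# at each time step, compute a normalized distribution over hidden states
# given the observations so far -- the "state activations" an interface
# like GaussBox could visualize in real time.

def forward_activations(obs, start, trans, emit):
    """Return one normalized state distribution per observation.

    obs   : list of observation symbols (ints)
    start : start[i]    = P(state i at t=0)
    trans : trans[i][j] = P(next state j | current state i)
    emit  : emit[i][o]  = P(observation o | state i)
    """
    n = len(start)
    # Initialization: weight the start distribution by the first emission.
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    total = sum(alpha)
    activations = [[a / total for a in alpha]]
    for o in obs[1:]:
        # Recursion: propagate through transitions, then apply the emission.
        alpha = [
            emit[j][o] * sum(alpha[i] * trans[i][j] for i in range(n))
            for j in range(n)
        ]
        total = sum(alpha)
        activations.append([a / total for a in alpha])
    return activations
```

Watching these per-state distributions shift as a gesture unfolds is exactly the kind of feedback that makes the model's behavior legible to a learner, which is what makes the GaussBox effective as a teaching tool.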
What I found truly interesting, however, is that the interface enables a non-expert user to train the system with her own choice of gestures and a preferred level of sensitivity, in a very intuitive and natural way. This idea came out of a collaboration with musicians who wanted a gesture-controlled music system that lets them play and compose music on the fly.
While the non-expert doesn't necessarily understand the details of the machine learning model, she can still interact with it and observe changes in its behavior by changing the parameters of the model. This, in my opinion, gives the end user comfort and a sense of control over the software, and as such it is a great example of how we should think about designing interfaces to machine learning systems.
Would you be interested in a follow-up post? We could dig deeper into the issues that come up in different social situations and the still-uncrossed barriers for AI software in our homes, our offices, and our lives.
Bio: Elena Ikonomovska is a data scientist, co-founder & CTO of Nuntio Labs Inc., providing machine learning expertise for business success. She holds a PhD in machine learning and is currently working on a new interactive ML platform. Ask her about it!
Original. Reposted with permission.
- Deep Feelings On Deep Learning
- Beyond the Fence, and the Advent of the Creative Machines
- Let Me Hear Your Voice and I’ll Tell You How You Feel