Explaining Explainable AI for Conversations

Something is missing in artificial intelligence – trust.



Within the space of just two or three decades, artificial intelligence (AI) has left the pages of science fiction novels and become one of the cornerstone technologies of modern-day society. Success in machine learning (ML) has led to a torrent of new AI applications that are almost too numerous to count, from autonomous machines and biometrics to predictive analytics and chatbots.   

One emerging application of AI in recent years has been conversational intelligence (CI). While automated chatbots and virtual assistants are concerned with human-to-computer interaction, CI aims to explore human-to-human interaction in greater detail. The ability to monitor and extract data from human conversations, including tone, sentiment and context, has seemingly limitless potential.  

For instance, data from call center interactions could be captured and logged automatically, with everything from speaker ratio and customer satisfaction to call summaries and action points filed without human effort. This would dramatically cut down the bureaucracy involved in call handling and give agents more time to speak with customers. What’s more, the data generated could be used to shape staff training programs, and even to recognize and reward outstanding work.   
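As a rough illustration of the kind of record such a system might file after every call, here is a minimal sketch in Python. All field names and values are hypothetical, not taken from any real product:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CallRecord:
    """Hypothetical shape of the data a CI system might log per call."""
    call_id: str
    agent_id: str
    talk_to_listen_ratio: float   # share of talk time between agent and customer
    satisfaction_score: float     # e.g. a sentiment-derived score between 0.0 and 1.0
    summary: str                  # auto-generated call summary
    action_items: List[str] = field(default_factory=list)

# One record of this kind could be filed automatically after each interaction.
record = CallRecord(
    call_id="call-0192",
    agent_id="agent-7",
    talk_to_listen_ratio=0.45,
    satisfaction_score=0.82,
    summary="Customer reported a duplicate charge; agent issued a credit.",
    action_items=["Confirm the credit was applied", "Follow up within 48 hours"],
)
print(record.summary)
```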

But there’s something missing – trust. Deploying AI in this way is incredibly useful, but at the moment it still requires a leap of faith on the part of the businesses using it.   

 

In Artificial Intelligence We Trust?  

 

As businesses, and as a society at large, we place a great deal of trust in AI-based systems. Social media companies like Twitter now employ AI-based algorithms to clamp down on hate speech and keep users safe online. Healthcare providers around the world are increasingly leveraging AI, from chatbots that can triage patients to algorithms that can help pathologists with more accurate diagnoses. The UK government has recently adopted an AI tool known as “Connect” to help parse tax records and detect fraudulent activity. There are even examples of AI being used to improve law enforcement outcomes, using tools such as facial recognition, crowd surveillance and gait analysis to identify suspects.   

We make this leap of faith in exchange for a more efficient, connected and seamless world. That world is built on “big data”, and we need AI to help us manage the flow of that data and put it to good use. That’s as true in a macro sense as it is for individual businesses. But despite our increasing dependence on AI as a technology, we know precious little about what goes on under the hood. As data volumes increase, and the paths AI takes to reach a determination become more elaborate, we as humans lose the ability to comprehend and retrace those paths. What we’re left with is a “black box” that’s next to impossible to interpret.   

It raises an obvious question: how can we trust AI-based decisions if we can’t understand how those decisions are made? It’s an increasing source of frustration for businesses that want to ensure their systems are working correctly, meeting the relevant regulatory standards, and operating at maximum efficiency. Consider the recruitment team at Amazon, who had to scrap their secret AI recruiting tool after they realized it was showing bias against women. They thought they had the “holy grail” of recruiting – a tool that could scan hundreds of resumes and pick out the top candidates for review, saving them countless hours of work. Instead, through repetition and reinforcement, the AI managed to convince itself that male candidates were somehow preferable to female ones. Had the team trusted blindly in the AI – which they did for a very short period – the consequences for the company could have been devastating.   

When it comes to business frustration and the fear of putting too much trust in AI, the emerging field of CI is an ideal case in point.   

 

How can Conversational Intelligence be Trusted?  

 

The world of human interaction has been a hive of AI innovation for years. It’s one thing to use natural language processing (NLP) to create chatbots or transcribe speech to text, but it’s another thing entirely to derive meaning and understanding from conversations. That’s what conversational intelligence does. It goes beyond deterministic “A to B” outcomes and aims to analyze less tangible aspects of conversations such as tone, sentiment and meaning.   

If CI is employed in a call center, for instance, it might be used to gauge the effectiveness of the call handler, assess the emotional state of the customer, or produce an automatic call summary with action points. These are sophisticated and subjective interpretations that don’t necessarily have right or wrong answers. If a call center is going to use CI to streamline interactions, train agents and update customer records, it needs to have confidence that the underlying AI is doing its job effectively. That’s where explainable AI or “XAI” comes into play.   

Every business is different, and each has its own definition of what its conversational intelligence stack is supposed to learn and predict. It’s therefore essential that the solution gives the people using the system a complete view of its predictions, so that they can continuously approve or reject what it produces. Rather than adopting a black-box deep learning system to perform every task, what’s critical is a modularized system with full transparency and control over each aspect of its predictions. For example, a deterministic, programmable system can use separate components for tracking the sentiment of a call, finding topics, generating a summary, and detecting specific aspects such as the type of issue in a support call or the requests in a customer feedback call, instead of a single deep learning model doing all of these things. With such a modular architecture, the overall conversational intelligence solution is traceable and deterministic by design.
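Here is a minimal sketch of that modular approach in Python. Every module name, signature and heuristic below is invented for illustration rather than taken from a real conversational intelligence API:

```python
from typing import Callable, Dict, List

# Each aspect of the analysis lives in its own module, so every prediction can be
# traced back to the component that produced it. All names and heuristics here
# are placeholders, not a real API.

def track_sentiment(transcript: str) -> str:
    # Placeholder: a real module might score sentiment per utterance.
    return "negative" if "refund" in transcript.lower() else "neutral"

def find_topics(transcript: str) -> List[str]:
    # Placeholder: a real module might use keyword or phrase extraction.
    return [t for t in ("billing", "refund", "outage") if t in transcript.lower()]

def generate_summary(transcript: str) -> str:
    # Placeholder: a real module might compress the dialogue into a few sentences.
    return transcript[:80]

def detect_issue_type(transcript: str) -> str:
    # Placeholder aspect detector, e.g. the type of issue raised in a support call.
    return "billing_issue" if "billing" in transcript.lower() else "general_inquiry"

# Modules are registered separately, so any single output can be inspected,
# approved or overridden without touching the rest of the pipeline.
PIPELINE: Dict[str, Callable[[str], object]] = {
    "sentiment": track_sentiment,
    "topics": find_topics,
    "summary": generate_summary,
    "issue_type": detect_issue_type,
}

def analyze_call(transcript: str) -> Dict[str, object]:
    """Run every module independently and label each result with its source."""
    return {name: module(transcript) for name, module in PIPELINE.items()}

if __name__ == "__main__":
    transcript = "Hi, I was billed twice last month and I would like a refund."
    for name, result in analyze_call(transcript).items():
        print(f"{name}: {result}")
```

Because each module is independent, a wrong summary or a misread sentiment can be traced to, and corrected in, a single component rather than buried inside one monolithic model.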

 

Pulling Back the Curtain  

 

When AI processes were simple and deterministic, trust in those processes was never an issue. Now that those processes have become more complex and less transparent, as in the example of CI above, trust has become essential for businesses that want to invest in AI. In her still-relevant decade-old paper, Mariarosaria Taddeo referred to this as “e-trust” – how humans trust computer-based processes, and the extent to which we allow artificial agents to be involved in that relationship.   

Explainable AI (XAI) is an emerging field in machine learning that aims to make those artificial agents fully transparent and easier to interpret. The Defense Advanced Research Projects Agency (DARPA) in the US is one of the leading organizations pursuing XAI solutions. DARPA argues that the potential of AI systems is being severely hampered by their inability to explain their actions to human users. In other words, a lack of trust from organizations is preventing them from exploring the full gamut of what AI and ML could offer.   

The goal is to create a suite of machine learning techniques that produce explainable models, allowing human users to understand and manage the next generation of artificially intelligent solutions. These ML systems will be able to explain their rationale, recognize their own strengths and shortcomings, and convey how they will “learn” from the data they are being fed. For DARPA, it’s part of a push toward what it refers to as the third wave of AI systems, where machines will understand the context and environment in which they are operating.   
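To give a toy sense of what “explaining their rationale” can look like in practice, consider a simple scorer that reports how much each input feature contributed to its decision. This is an invented illustration, not drawn from DARPA’s program; the feature names and weights are made up:

```python
# A toy scorer that returns per-feature contributions alongside its prediction.
WEIGHTS = {
    "negative_sentiment": -0.6,
    "long_silences": -0.3,
    "issue_resolved": 0.8,
}

def predict_with_explanation(features: dict) -> dict:
    """Score a call outcome and expose how each feature pushed the decision."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "prediction": "satisfied" if score > 0 else "dissatisfied",
        "score": round(score, 2),
        # The rationale: which features pushed the decision, and by how much.
        "contributions": {k: round(v, 2) for k, v in contributions.items()},
    }

print(predict_with_explanation(
    {"negative_sentiment": 1.0, "long_silences": 0.5, "issue_resolved": 1.0}
))
```

Real XAI techniques are far richer than this, but the principle is the same: the system’s answer arrives packaged with the evidence behind it.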

For the potential of AI to be fully realized, we need to move on from ones and zeroes and introduce more subjective analysis. The technology is there; we just need more reason to trust it. 
 
 
Surbhi Rathore is the CEO and co-founder of Symbl.ai. Symbl is bringing to life her vision for a programmable platform that empowers developers and businesses to monitor, act on, and comply with voice and video conversations at scale in their products and workflows, without building in-house data science expertise.