Closing the Gap Between Human Understanding and Machine Learning: Explainable AI as a Solution

This article explains why Explainable AI (XAI) matters, outlines the challenges in building interpretable AI models, and offers practical guidelines for companies looking to build XAI into their products.




 

Introduction

 

Have you ever opened your favorite shopping app and the first thing you see is a recommendation for a product you didn’t even know you needed, but end up buying thanks to the timely suggestion? Or opened your go-to music app and been delighted to see a forgotten gem by your favorite artist recommended right at the top as something “you might like”? Knowingly or unknowingly, all of us encounter decisions, actions, or experiences generated by Artificial Intelligence (AI) every day. While some of these experiences are fairly innocuous (spot-on music recommendations, anyone?), others can cause unease (“How did this app know that I have been thinking of doing a weight loss program?”). That unease escalates to worry and distrust when it comes to matters of privacy about oneself and one’s loved ones. Knowing how or why something was recommended to you, however, can go a long way toward easing it.

This is where Explainable AI, or XAI, comes in. As AI-enabled systems become ever more ubiquitous, the need to understand how they make decisions is growing. In this article, we will explore XAI, discuss the challenges in building interpretable AI models, review advancements in making these models more interpretable, and provide guidelines for companies and individuals to implement XAI in their products to foster user trust in AI.

 

What is Explainable AI?

 

Explainable AI (XAI) is the ability of AI systems to provide explanations for their decisions or actions. XAI bridges the important gap between an AI system making a decision and the end user understanding why that decision was made. Before the advent of AI, systems were most often rule-based (e.g., if a customer buys pants, recommend belts; or if a person switches on their “Smart TV”, keep rotating the #1 recommendation among three fixed options). These experiences provided a sense of predictability. As AI became mainstream, however, connecting the dots backward from what a product shows or decides to why it did so is no longer straightforward. Explainable AI can help in these instances.

Explainable AI (XAI) allows users to understand why an AI system decided something and what factors went into that decision. For example, when you open your music app, you might see a widget called “Because you like Taylor Swift” followed by recommendations of pop music similar to Taylor Swift’s songs. Or you might open a shopping app and see “Recommendations based on your recent shopping history” followed by baby product recommendations because you bought baby toys and clothes in the past few days.

XAI is particularly important in areas where AI makes high-stakes decisions, such as algorithmic trading and other financial recommendations, healthcare, autonomous vehicles, and more. Being able to provide an explanation for decisions helps users understand the rationale, identify biases introduced into the model’s decision-making by the data on which it was trained, correct errors in decisions, and build trust between humans and AI. Additionally, with regulatory guidelines and legal requirements emerging, the importance of XAI is only set to grow.

 

Challenges in XAI

 

If XAI provides transparency to users, then why not make all AI models interpretable? There are several challenges that prevent this from happening. 

Advanced AI models like deep neural networks have multiple hidden layers between the inputs and the output. Each layer takes the output of the previous layer, performs computations on it, and passes the result on as input to the next layer. The complex interactions between layers make it hard to trace the decision-making process and therefore to explain it. This is why these models are often referred to as black boxes.
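
To make the layer-by-layer idea concrete, here is a minimal sketch in Python (NumPy only, with random weights rather than a trained model, purely for illustration): even in a tiny network, the output is the result of several nested transformations, so there is no single number that tells you how much any one input feature mattered.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 1 output.
# The weights are random here, purely to illustrate the flow of data.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(8), np.zeros(1)]

def forward(x):
    """Each layer transforms the previous layer's output before passing it on."""
    activation = x
    for w, b in zip(weights, biases):
        activation = np.tanh(activation @ w + b)
    return activation

x = rng.normal(size=(1, 4))   # one input example with 4 features
print(forward(x))             # the final score, with no obvious trace of how
                              # each individual input feature shaped it
```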

These models also process high-dimensional data like images, audio, text, and more. Interpreting the influence of every feature to determine which contributed most to a decision is challenging. Simplifying these models to make them more interpretable typically reduces their performance: simpler, more “understandable” models like decision trees might sacrifice predictive accuracy. As a result, trading away performance and accuracy for the sake of explainability is often not acceptable either.
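
As a rough illustration of that trade-off, the following sketch (using scikit-learn and a small bundled dataset chosen purely for convenience; exact scores will vary) compares a shallow decision tree, whose rules can be printed and read, with a random forest that usually scores a little higher but offers no comparably simple readout.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A shallow tree is easy to read but may give up some accuracy.
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
# An ensemble of many trees is usually stronger but much harder to explain.
forest = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

print("decision tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```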

 

Advancements in XAI

 

With the growing need for XAI to keep building human trust in AI, there have been notable strides in this area in recent times. For example, some models, like decision trees and linear models, are fairly obviously interpretable. There are also symbolic or rule-based AI models that focus on the explicit representation of information and knowledge; these models often need humans to define rules and feed domain information to them. With the active development happening in this field, there are also hybrid models that combine deep learning with interpretability, minimizing the sacrifice in performance.
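
As a small example of why linear models are often called inherently interpretable, here is a sketch (scikit-learn, with one of its bundled example datasets standing in for real data): the fitted coefficients directly state how much each feature pushes the prediction up or down.

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

# With a linear model, the learned coefficients *are* the explanation:
# each one says how much the prediction moves per unit change in a feature.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Print the features ranked by the size of their effect on the prediction.
for name, coef in sorted(zip(X.columns, model.coef_), key=lambda p: -abs(p[1])):
    print(f"{name:>6}: {coef:+.1f}")
```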

 

Guidelines to Implement XAI in Products

 

Empowering users to understand why AI models decide what they decide helps foster trust in and transparency about the models. It can lead to improved, symbiotic collaboration between humans and machines, where the AI model helps humans make decisions transparently and humans help tune the AI model to remove biases, inaccuracies, and errors.

Below are some ways in which companies and individuals can implement XAI in their products:

  1. Select an Interpretable Model Where You Can – Where they suffice and serve well, interpretable AI models should be preferred over those that are not easily interpretable. For example, in healthcare, simpler models like decision trees can help doctors understand why an AI model recommended a certain diagnosis, which helps foster trust between the doctor and the AI model. Feature engineering techniques that support interpretability, such as one-hot encoding or feature scaling, should also be used (a small sketch of this follows the list). 
  2. Use Post-hoc Explanations – Use techniques like feature importance and attention mechanisms to generate post-hoc explanations. For example, LIME (Local Interpretable Model-agnostic Explanations) is a technique that explains individual predictions of a model by generating feature importance scores that highlight each feature’s contribution to that decision. If you end up “liking” a particular playlist recommendation, LIME would perturb the playlist by adding and removing certain songs, predict how likely you are to like each variant, and conclude, say, that the artists featured in the playlist play the biggest role in whether you like it (see the LIME sketch after this list). 
  3. Communication with Users – Techniques like LIME or SHAP (SHapley Additive exPlanations) can be used to provide useful explanations of specific local decisions or predictions without having to explain all the complexities of the overall model (see the SHAP sketch after this list). Visual cues like activation maps or attention maps can also be leveraged to highlight which inputs are most relevant to the output generated by a model. Recent technologies like ChatGPT can be used to translate complex explanations into plain language that users can understand. Finally, giving users some control to interact with the model can help build trust; for example, users could try tweaking inputs in different ways to see how the output changes. 
  4. Continuous Monitoring – Companies should implement mechanisms to monitor the performance of models and automatically detect and raise alerts when biases or drift are detected (a simple drift check is sketched after this list). Models should be regularly updated and fine-tuned, with audits and evaluations to ensure they comply with regulatory requirements and meet ethical standards. Finally, even if sparingly, there should be humans in the loop to provide feedback and corrections as needed.
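
For guideline 1, here is a minimal sketch of pairing an interpretable model with interpretability-friendly feature engineering. It uses scikit-learn, and the tiny table of made-up patient records is purely hypothetical, standing in for real clinical data.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data standing in for a clinical dataset.
df = pd.DataFrame({
    "age": [34, 61, 47, 55, 29, 68],
    "smoker": ["no", "yes", "no", "yes", "no", "yes"],
    "diagnosis": [0, 1, 0, 1, 0, 1],
})

# One-hot encode the categorical column so each tree split refers to a
# single, human-readable condition (e.g. a split on the "smoker = yes" flag).
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(), ["smoker"])], remainder="passthrough"
)
model = Pipeline([
    ("prep", preprocess),
    ("tree", DecisionTreeClassifier(max_depth=2, random_state=0)),
]).fit(df[["age", "smoker"]], df["diagnosis"])

print(model.predict(pd.DataFrame({"age": [50], "smoker": ["yes"]})))
```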
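
For guideline 2, the following sketch shows LIME generating a local explanation for a single prediction. It assumes the lime package is installed (pip install lime), and it uses a bundled scikit-learn dataset and a random forest purely as stand-ins for your own data and model.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

# Train any "black box" model; the wine dataset is just a placeholder here.
data = load_wine()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs one instance, watches how the predictions change, and fits a
# simple local surrogate whose weights act as feature-importance scores.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # the top features behind this one prediction
```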
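
For guideline 3, here is a sketch of producing a SHAP explanation for one prediction that could then be summarized for a user. It assumes the shap package is installed (pip install shap); the SHAP API varies somewhat across versions, so treat this as an outline rather than a definitive recipe.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Again, the dataset and model are only placeholders for your own.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP assigns each feature an additive contribution to a single prediction,
# which is exactly the kind of local explanation you can surface to users.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# Show the five features that pushed this prediction hardest, in either direction.
for name, value in sorted(
    zip(X.columns, shap_values[0]), key=lambda p: -abs(p[1])
)[:5]:
    print(f"{name}: {value:+.3f}")
```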
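
For guideline 4, a very simple drift check is sketched below using a two-sample Kolmogorov–Smirnov test from SciPy. The training and production feature values here are synthetic and hypothetical; a real pipeline would pull them from logs and run this check on a schedule, alerting a human when it fires.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: what the model saw at training time versus the
# values arriving in production after user behavior shifted slightly.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)

# The KS test flags when the two distributions diverge significantly.
statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Possible data drift detected (KS statistic {statistic:.3f}).")
else:
    print("No significant drift detected.")
```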

 

Conclusion 

 

In summary, as AI continues to grow, it becomes imperative to build XAI in order to maintain user trust in AI. By adopting the guidelines articulated above, companies and individuals can build AI that is more transparent, understandable, and simple. The more companies adopt XAI, the better the communication between users and AI systems will be, and the more confident users will feel about letting AI make their lives better.
 
 
Ashlesha Kadam leads a global product team at Amazon Music that builds music experiences on Alexa and Amazon Music apps (web, iOS, Android) for millions of customers across 45+ countries. She is also a passionate advocate for women in tech, serving as co-chair for the Human Computer Interaction (HCI) track for Grace Hopper Celebration (biggest tech conference for women in tech with 30K+ participants across 115 countries). In her free time, Ashlesha loves reading fiction, listening to biz-tech podcasts (current favorite - Acquired), hiking in the beautiful Pacific Northwest and spending time with her husband, son and 5yo Golden Retriever.