
KDnuggets Home » News » 2021 » May » Opinions » What Makes AI Trustworthy? ( 21:n18 )

What Makes AI Trustworthy?


This blog discusses why AI needs to be trustworthy and what makes it so. AI predictions and suggestions should not be taken at face value, but examined at a deeper level: we need to understand how an AI system makes its predictions before we put our trust in it. Trust should not be built on prediction accuracy alone.



By Ronel Sylvester, ML Engineer at Predmatic



Image by Gerd Altmann from Pixabay

 

How do you decide whether a decision you or someone else made was a good one? Do you simply check whether the outcome was favorable? I would hope not.

By that logic, if I wanted more energy, eating a handful of candy would be a great way to get it, since it is cheap and quick (not to mention tasty). But while the initial outcome is the one I was looking for, the overall outcome is a burst of energy lasting less than an hour, an energy crash, a possible upset stomach, and extra fat to work off that, ironically, requires energy to burn. A favorable, easy outcome, then, but actually a bad decision.



Image by Rudy and Peter Skitterians from Pixabay

 

In this example, I didn’t account for all the relevant variables when making my decision. Instead, I focused on a single variable (quick energy) that doesn’t tell the whole story of the final outcome. The point of this example is to show how important it is to be able to explain the reasoning behind a decision, to ourselves or to others. Without that, our decisions rest on naive and incomplete pictures and processes.


Why shouldn’t we expect the same from AI?


 

Explaining AI Allows us to Trust AI

 



Image by Thomas B. from Pixabay

 

Trust is an important aspect of how we humans interact with one another. According to Psychology Today, here are a few key facets that can define trust:


1) Trust is a set of behaviors, such as acting in ways that depend on another.

2) Trust is a belief in a probability that a person will behave in certain ways.

3) Trust is an abstract mental attitude toward a proposition that someone is dependable.

Thagard, Paul. “What Is Trust?” Psychology Today, Sussex Publishers, 9 Oct. 2018, www.psychologytoday.com/us/blog/hot-thought/201810/what-is-trust.


If we took only points 1 and 3, it would be easy to conclude that trust is nothing more than dependability. Point 2, however, completes the definition of trust because it focuses on how a person arrives at their dependable decisions.

As AI systems become more widespread in today’s business and society, it is important that we build them in a trustworthy manner. We can apply these three facets of trust to how we assess the trustworthiness of an AI model. Two keywords in this definition stand out: probability and depend. Let’s restate the definition in terms of implementing AI for enterprise use cases:

  1. Can we depend on our AI’s performance?
  2. Can we explain/understand the probabilistic ways an AI will behave?

 

Evaluating AI Performance

 



Image by Manfred Richter from Pixabay

 

A simple yet unfortunately popular approach to understanding the performance of an AI model today goes as follows: “Has the AI system’s accuracy been good since implementation?”

Although this should certainly be one of the approaches for understanding the performance of an AI model, it misses the why/how aspect. This is why the two questions above should always go hand in hand when evaluating performance.

Question 1 (Can we depend on our AI’s performance?) focuses on how well the AI has performed over a period of time, and it opens the discussion of why the AI performs poorly on certain data. That leads directly into question 2 (Can we explain/understand the probabilistic ways an AI will behave?). We should not only investigate why the model struggles with certain data; we must also understand why it performs well on the rest.
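One concrete way to act on both questions is to track accuracy not just in aggregate, but per slice of the data, so that poorly served segments surface instead of hiding inside a single overall number. The sketch below is purely illustrative; the function name and the record format are my own assumptions, not part of any particular library:

```python
from collections import defaultdict

def accuracy_by_segment(records):
    """Group prediction records by a segment key and compute accuracy per group.

    Each record is a (segment, prediction, actual) tuple; the segment key is
    whatever slice of the data we want to inspect (e.g. region, product line).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for segment, prediction, actual in records:
        totals[segment] += 1
        if prediction == actual:
            hits[segment] += 1
    return {seg: hits[seg] / totals[seg] for seg in totals}

# Toy prediction log: overall accuracy looks passable (62.5%),
# but the model is failing badly on segment "B".
log = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 1),
]
print(accuracy_by_segment(log))  # {'A': 1.0, 'B': 0.25}
```

A breakdown like this turns the vague question “is the model performing well?” into the answerable question “where is it performing poorly, and why?”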

 

Why is this Important?

 



Image by succo from Pixabay

 

At the end of the day, AI is becoming more popular than ever. The use cases are broad and adaptive, and the benefits are enormous. As a byproduct, however, we are also trusting AI more, whether we know it or not. Unfortunately, we have not been as rigorous about extending trust to AI as we tend to be with our fellow humans. We must be, however, if we want AI systems that can truly be trusted and depended on.

The worst-case scenario for any business using AI is that new data arrives that the AI cannot predict well. If we trust our AI blindly, it will keep predicting as it pleases until someone notices the wildly incorrect outputs. For a small enterprise, the financial and social damage from incorrect predictions may be modest; for a larger enterprise, the effects could be devastating.
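To avoid that blind-trust failure mode, a business can watch the model's recent accuracy and raise an alarm as soon as it degrades, rather than waiting for someone to notice. A minimal, purely illustrative sketch (the function name, window size, and threshold are hypothetical choices, not a standard API):

```python
def rolling_accuracy_alert(outcomes, window=50, threshold=0.8):
    """Scan a stream of (prediction, actual) pairs and return the index of the
    first point where accuracy over the trailing window drops below threshold.

    Returns None if accuracy never degrades that far.
    """
    correct = []
    for i, (pred, actual) in enumerate(outcomes):
        correct.append(pred == actual)
        recent = correct[-window:]
        if len(recent) == window and sum(recent) / window < threshold:
            return i  # index at which the alert fires
    return None

# 50 correct predictions, then the data shifts and the model starts failing.
stream = [(1, 1)] * 50 + [(1, 0)] * 20
print(rolling_accuracy_alert(stream, window=10, threshold=0.8))  # 52
```

In practice such a check would feed a dashboard or paging system, but even this toy version shows the principle: trust is conditional on continuously verified performance, not granted once at deployment.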

Therefore, it is important that we constantly check our AI systems for biases and understand how our models actually work on the inside. This is where Explainable AI (XAI) and libraries such as LIME and SHAP come in, which we will discuss in a future blog.
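As a small taste of what such libraries do, the core idea behind many XAI feature-importance methods can be sketched in a few lines: perturb one input feature at a time and measure how much the model's accuracy suffers. The sketch below is a toy stand-in, not the LIME or SHAP API; it uses a deterministic cyclic shift instead of random shuffling so the result is reproducible:

```python
def perturbation_importance(model, X, y):
    """Estimate each feature's importance by perturbing that feature's column
    (here, a cyclic shift) and measuring how much accuracy drops.

    `model` is any callable mapping a feature tuple to a predicted label.
    """
    def accuracy(rows):
        return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        column = [row[j] for row in X]
        column = column[1:] + column[:1]  # break the feature-target link
        perturbed = [row[:j] + (column[i],) + row[j + 1:]
                     for i, row in enumerate(X)]
        importances.append(baseline - accuracy(perturbed))
    return importances

# Toy model that only looks at feature 0; feature 1 is ignored, so perturbing
# it should not hurt accuracy, while perturbing feature 0 should.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [(0.9, 0.1), (0.8, 0.7), (0.2, 0.9), (0.1, 0.3)]
y = [1, 1, 0, 0]
print(perturbation_importance(model, X, y))  # [0.5, 0.0]
```

Real XAI tools are far more sophisticated (SHAP, for example, is grounded in Shapley values from game theory), but the underlying question is the same: which inputs is the model actually depending on?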

Please do share your thoughts on this whole idea of putting our trust in AI; I would love to hear what the community thinks.

 

Further Readings

 
When Do We Trust AI's Recommendations More Than People's?
More and more companies are leveraging technological advances in machine learning, natural language processing, and…

 
If Your Company Uses AI, It Needs an Institutional Review Board
Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now - prompted…

 

AI Can Outperform Doctors. So Why Don't Patients Trust It?
Our recent research indicates that patients are reluctant to use health care provided by medical artificial…

 

 
Bio: Ronel Sylvester is an ML engineer at Predmatic, with experience in audio classification, deep learning, computer vision/image processing, and forecasting. Predmatic is a data science and artificial-intelligence consulting firm that provides high-impact, scalable business solutions.

Original. Reposted with permission.
