- An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM) - Jun 16, 2021.
Understanding why your AI-based models make the decisions they do is crucial for deploying practical solutions in the real world. Here, we review some techniques in the field of Explainable AI (XAI), discuss why explainability is important, walk through examples of explaining models with LIME and SHAP, and demonstrate how Explainable Boosting Machines (EBMs) can make explainability even easier.
AI, Deep Learning, Explainability, Gradient Boosting, Interpretability, LIME, Machine Learning, SHAP
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP - Jan 21, 2020.
This article demonstrates explainability for the decisions made by LightGBM and Keras models in classifying a transaction as fraudulent, using two state-of-the-art open-source explainability techniques, LIME and SHAP.
Deep Learning, Ensemble Methods, Explainability, LIME, SHAP
- Interpretability part 3: opening the black box with LIME and SHAP - Dec 19, 2019.
The third part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers methods that try to explain each prediction instead of establishing a global explanation.
Explainability, Interpretability, LIME, SHAP
- Python Libraries for Interpretable Machine Learning - Sep 4, 2019.
In the following post, I am going to give a brief guide to four of the most established packages for interpreting and explaining machine learning models.
Bias, Interpretability, LIME, Machine Learning, Python, SHAP
- Opening Black Boxes: How to leverage Explainable Machine Learning - Aug 1, 2019.
A machine learning model that predicts some outcome provides value. One that explains why it made the prediction creates even more value for your stakeholders. Learn how interpretable and explainable ML techniques can help you while developing your model.
Explainable AI, Feature Selection, LIME, Machine Learning, SHAP, XAI
- “Please, explain.” Interpretability of machine learning models - May 9, 2019.
Unveiling the secrets of black-box models is no longer a novelty but a business requirement, and we explain why, using several different use cases.
Bias, Explainable AI, Interpretability, LIME, Machine Learning, SHAP, XAI
- An introduction to explainable AI, and why we need it - Apr 15, 2019.
We introduce explainable AI, explain why it is needed, and present the Reverse Time Attention (RETAIN) model, Local Interpretable Model-Agnostic Explanations (LIME), and Layer-wise Relevance Propagation.
AI, Explainable AI, LIME, Machine Learning, XAI
- Explainable AI or Halting Faulty Models ahead of Disaster - Mar 27, 2019.
We give a brief overview of a new method for explainable AI (XAI), called anchors, introduce its open-source implementation, and show how to use it to explain models predicting the survival of Titanic passengers.
AI, Explainable AI, Kaggle, LIME, Titanic, XAI
- How to solve 90% of NLP problems: a step-by-step guide - Jan 14, 2019.
Read this insightful, step-by-step article on how to use machine learning to understand and leverage text.
LIME, NLP, Text Analytics, Text Classification, Word Embeddings, word2vec
- Explainable Artificial Intelligence - Jan 10, 2019.
We outline the necessity of explainable AI, discuss some of the methods in academia, take a look at explainability vs accuracy, investigate use cases, and more.
AI, Explainable AI, LIME, XAI
- Four Approaches to Explaining AI and Machine Learning - Dec 12, 2018.
We discuss several explainability techniques being championed today, including LOCO (leave one column out), permutation importance, and LIME (local interpretable model-agnostic explanations).
AI, Explainable AI, Interpretability, LIME, Machine Learning
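The permutation-based approach mentioned above can be sketched in a few lines of plain Python: shuffle one feature's values across the dataset and measure how much the model's accuracy drops. The toy model and data here are illustrative assumptions, not taken from the article.

```python
import random

random.seed(0)

# Toy black-box model: predicts 1 when feature 0 exceeds a threshold.
# Feature 1 is never used, so permuting it should not hurt accuracy.
def model(row):
    return 1 if row[0] > 0.5 else 0

# Small synthetic dataset: rows of [feature0, feature1], labels from feature0.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def accuracy(X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy after shuffling one feature's column."""
    baseline = accuracy(X, y)
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return baseline - accuracy(X_perm, y)

imp0 = permutation_importance(X, y, 0)  # informative feature: large drop
imp1 = permutation_importance(X, y, 1)  # unused feature: no drop
print(imp0, imp1)
```

Because the toy model ignores feature 1 entirely, its importance comes out as zero, while shuffling feature 0 destroys most of the accuracy.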
- Explainable Artificial Intelligence (Part 2) – Model Interpretation Strategies - Dec 6, 2018.
The aim of this article is to give you a good understanding of existing, traditional model interpretation methods, their limitations and challenges. We will also cover the classic model accuracy vs. model interpretability trade-off and finally take a look at the major strategies for model interpretation.
Explainable AI, Interpretability, LIME, Machine Learning, SHAP
- Holy Grail of AI for Enterprise — Explainable AI - Oct 19, 2018.
Explainable AI (XAI) is an emerging branch of AI in which systems are built to explain the reasoning behind every decision they make. We investigate some of its key benefits and design principles.
AI, Enterprise, Explainable AI, LIME
- Introduction to Local Interpretable Model-Agnostic Explanations (LIME) - Aug 25, 2016.
Learn about LIME, a technique to explain the predictions of any machine learning classifier.
Algorithms, Classifier, Explanation, Interpretability, LIME, Machine Learning, Prediction
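The core idea behind LIME can be sketched in plain Python (all names and the toy model below are illustrative assumptions, not the lime library's API): sample perturbations around one instance, weight them by proximity to that instance, and fit a weighted linear surrogate whose slope serves as the local explanation.

```python
import math
import random

random.seed(0)

# Toy black-box model: nonlinear in its single input.
def black_box(x):
    return x * x

def lime_1d(x0, n_samples=500, width=0.5, kernel_width=0.25):
    """Fit a locally weighted linear surrogate to black_box around x0.

    Returns (intercept, slope); the slope is the local explanation.
    """
    # Perturb the instance and query the black box on each sample.
    xs = [x0 + random.uniform(-width, width) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Proximity weights: an exponential kernel on distance to x0.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]

    # Closed-form weighted least squares for y ~ a + b * x.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = cov / var
    return my - slope * mx, slope

# Near x0 = 1 the derivative of x^2 is 2, so the local slope should be
# close to 2 even though the black box is globally nonlinear.
intercept, slope = lime_1d(1.0)
print(round(slope, 2))
```

The real lime package applies the same recipe to tabular, text, and image inputs with interpretable feature representations, but the locally weighted surrogate fit is the heart of the method.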