- KDnuggets™ News 21:n23, Jun 23: Pandas vs SQL: When Data Scientists Should Use Each Tool; How to Land a Data Analytics Job in 6 Months - Jun 23, 2021.
Pandas vs SQL: When Data Scientists Should Use Each Tool; How to Land a Data Analytics Job in 6 Months; A Graph-based Text Similarity Method with Named Entity Information in NLP; The Best Way to Learn Practical NLP?; An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM)
Analytics, Career Advice, Data Scientist, Explainability, NLP, Pandas, Python, SQL
- An introduction to Explainable AI (XAI) and Explainable Boosting Machines (EBM) - Jun 16, 2021.
Understanding why your AI-based models make the decisions they do is crucial for deploying practical solutions in the real world. Here, we review some techniques in the field of Explainable AI (XAI), explain why explainability is important, walk through example explainable models using LIME and SHAP, and demonstrate how Explainable Boosting Machines (EBMs) can make explainability even easier.
AI, Deep Learning, Explainability, Gradient Boosting, Interpretability, LIME, Machine Learning, SHAP
- Machine Learning Model Interpretation - Jun 2, 2021.
Read this overview of using Skater to interpret machine learning models and build visualizations of their behavior; a minimal usage sketch follows below.
Explainability, Interpretability, Machine Learning, Python
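As context for the Skater overview above, here is a minimal, hedged sketch of a typical Skater workflow; the fitted scikit-learn classifier `clf`, the frames `X_train`/`X_test`, and the feature name `'age'` are placeholder assumptions, not details from the article.

```python
# Hedged sketch of typical Skater usage (clf, X_train, X_test, and the
# feature name 'age' are placeholder assumptions, not from the article).
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

interpreter = Interpretation(X_test, feature_names=X_test.columns)
model = InMemoryModel(clf.predict_proba, examples=X_train)

# Global view: permutation-style feature importance
interpreter.feature_importance.plot_feature_importance(model, ascending=False)

# Per-feature view: partial dependence of the prediction on one feature
interpreter.partial_dependence.plot_partial_dependence(
    ['age'], model, n_samples=100, grid_resolution=30)
```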
- The Explainable Boosting Machine - May 13, 2021.
As accurate as gradient boosting, as interpretable as linear regression. A minimal training sketch follows below.
Decision Trees, Explainability, Gradient Boosting, Interpretability, Machine Learning
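For readers who want to try an EBM directly, the following is a minimal sketch using the interpret package from InterpretML; the feature matrix `X` and labels `y` are assumed to already exist.

```python
# Minimal EBM training sketch with the interpret package
# (X and y are assumed to be an existing feature matrix and label vector).
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Global explanation: per-feature shape functions and interaction terms
show(ebm.explain_global())

# Local explanation: contribution of each feature to individual predictions
show(ebm.explain_local(X[:5], y[:5]))
```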
- Interpretable Machine Learning: The Free eBook - Apr 9, 2021.
Interested in learning more about interpretability in machine learning? Check out this free eBook to learn about the basics, simple interpretable models, and strategies for interpreting more complex black box models.
AI, Explainability, Explainable AI, Free ebook, Interpretability
- Shapash: Making Machine Learning Models Understandable - Apr 2, 2021.
Establishing trust in AI technologies may soon become one of the most important skills Data Scientists provide. Significant research investments are underway in this area, and new tools are being developed, such as Shapash, an open-source Python library that helps Data Scientists make machine learning models more transparent and understandable; a brief usage sketch follows below.
Explainability, Machine Learning, Python, SHAP
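A brief, hedged sketch of the Shapash workflow: the fitted `model`, the test frame `X_test`, and the exact constructor signature depend on your setup and Shapash version, and are assumptions here.

```python
# Hedged Shapash sketch (model and X_test are assumed to exist; newer
# Shapash versions pass the model to the constructor instead of compile()).
from shapash.explainer.smart_explainer import SmartExplainer

xpl = SmartExplainer()
xpl.compile(x=X_test, model=model)

# Launch the interactive web app that summarizes feature contributions
app = xpl.run_app()

# Or export a compact table of the top local contributions per prediction
summary_df = xpl.to_pandas(max_contrib=3)
```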
- Introduction to the White-Box AI: the Concept of Interpretability - Mar 31, 2021.
ML model interpretability can be seen as “the ability to explain or to present in understandable terms to a human.” Read this article to learn how to go beyond the black box of AI, where algorithms make predictions but the underlying explanation remains unknown and untraceable.
AI, Explainability, Explainable AI, Sciforce
- Explainable Visual Reasoning: How MIT Builds Neural Networks that can Explain Themselves - Mar 30, 2021.
New MIT research attempts to close the gap between state-of-the-art performance and interpretable models in computer vision tasks.
Explainability, Explainable AI, MIT, Neural Networks
- KDnuggets™ News 21:n06, Feb 10: The Best Data Science Project to Have in Your Portfolio; Deep learning doesn’t need to be a black box - Feb 10, 2021.
The Best Data Science Project to Have in Your Portfolio; Deep learning doesn’t need to be a black box; Build Your First Data Science Application; How to create stunning visualizations using python from scratch; How to Get Your First Job in Data Science without Any Work Experience
Career Advice, Data Science, Data Visualization, Deep Learning, Explainability, Portfolio, Python
- Adversarial Attacks on Explainable AI - Feb 9, 2021.
Are explainability methods black-box themselves?
Adversarial, AI, Explainability, Explainable AI
- Deep learning doesn’t need to be a black box - Feb 5, 2021.
The cultural perception of AI is often one of suspicion because of the challenge of knowing why a deep neural network makes its predictions. So, researchers try to crack open this "black box" after a network is trained to correlate results with inputs. But, what if the goal of explainability could be designed into the network's architecture -- before the model is trained and without reducing its predictive power? Maybe the box could stay open from the beginning.
Convolutional Neural Networks, Deep Learning, Explainability, Explainable AI, Image Recognition
- AI registers: finally, a tool to increase transparency in AI/ML - Dec 9, 2020.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why they are important and what you need to do, no tools have existed until now.
AI, Bias, Ethics, Explainability, Helsinki, Machine Learning, Trust
- tensorflow + dalex = :) , or how to explain a TensorFlow model - Nov 13, 2020.
Having a machine learning model that generates interesting predictions is one thing. Understanding why it makes these predictions is another. For a TensorFlow predictive model, it can be straightforward and convenient to develop explainable AI by leveraging the dalex Python package; a short sketch follows below.
Dalex, Explainability, Explainable AI, Machine Learning, Python, TensorFlow
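As a quick illustration of the dalex workflow mentioned above, here is a minimal sketch; the trained Keras/TensorFlow `model` and the pandas data `X`, `y` are assumptions, not code from the article.

```python
# Minimal dalex sketch (model, X, and y are assumed; X is a pandas DataFrame).
import dalex as dx

# Wrap the trained TensorFlow/Keras model together with its data
explainer = dx.Explainer(model, X, y, label="tensorflow model")

# Dataset-level explanation: permutation-based variable importance
explainer.model_parts().plot()

# Instance-level explanation: break-down of a single prediction
explainer.predict_parts(X.iloc[[0]]).plot()
```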
- Interpretability, Explainability, and Machine Learning – What Data Scientists Need to Know - Nov 4, 2020.
The terms “interpretability,” “explainability” and “black box” are tossed about a lot in the context of machine learning, but what do they really mean, and why do they matter?
Explainability, Explainable AI, Interpretability, Machine Learning
- Explaining the Explainable AI: A 2-Stage Approach - Oct 29, 2020.
Understanding how to build AI models is one thing. Understanding why AI models provide the results they provide is another. Even more so, conveying that understanding of AI models to humans is yet another challenging layer that must be addressed if we are to develop a complete approach to Explainable AI.
AI, Explainability, Explainable AI, XAI
- Explainable and Reproducible Machine Learning Model Development with DALEX and Neptune - Aug 27, 2020.
With ML models serving real people, misclassified cases (which are a natural consequence of using ML) affect people's lives and sometimes treat them very unfairly. This makes the ability to explain your models' predictions a requirement rather than just a nice-to-have.
Dalex, Explainability, Explainable AI, Interpretability, Python, SHAP
- modelStudio and The Grammar of Interactive Explanatory Model Analysis - Jun 19, 2020.
modelStudio is an R package that automates the exploration of ML models and allows for interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks.
Analysis, Explainability, Interpretability, Machine Learning, R
- Nitpicking Machine Learning Technical Debt - Jun 8, 2020.
Technical Debt in software development is pervasive. With machine learning engineering maturing, this classic trouble is unsurprisingly rearing its ugly head. These 25 best practices, first described in 2015 and promptly overshadowed by shiny new ML techniques, are updated for 2020 and ready for you to follow -- and lead the way to better ML code and processes in your organization.
Best Practices, DevOps, Explainability, Interpretability, Machine Learning, Monitoring, Pipeline, Technical Debt, Version Control
- Evidence Counterfactuals for explaining predictive models on Big Data - May 18, 2020.
Big Data generated by people -- such as social media posts, mobile phone GPS locations, and browsing history -- provides enormous prediction value for AI systems. However, explaining how these models predict with the data remains challenging. This interesting explanation approach considers how a model would behave if it didn't have the original set of data to work with.
Big Data, Explainability, Predictive Modeling, Predictive Models, Statistics
- KDnuggets™ News 20:n19, May 13: Start Your Machine Learning Career in Quarantine; Will Machine Learning Engineers Exist in 10 Years? - May 13, 2020.
Also: The Elements of Statistical Learning: The Free eBook; Explaining "Blackbox" Machine Learning Models: Practical Application of SHAP; What You Need to Know About Deep Reinforcement Learning; 5 Concepts You Should Know About Gradient Descent and Cost Function; Hyperparameter Optimization for Machine Learning Models
Book, Career, Deep Learning, Explainability, Free ebook, Machine Learning, Machine Learning Engineer, Reinforcement Learning, SHAP
- Explaining “Blackbox” Machine Learning Models: Practical Application of SHAP - May 6, 2020.
Train a "blackbox" GBM model on a real dataset and make it explainable with SHAP; a minimal sketch of the workflow follows below.
Explainability, Interpretability, Python, SHAP
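A minimal, hedged sketch of the SHAP workflow for a tree-based "blackbox" model; the fitted `gbm` model and the frame `X_test` are placeholder assumptions, not details from the article.

```python
# Hedged SHAP sketch for a tree-based model (gbm and X_test are assumed;
# for multiclass models, shap_values comes back as a list, one array per class).
import shap

explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive predictions across the whole dataset
shap.summary_plot(shap_values, X_test)

# Local view: force plot for the first row (assumes shap_values is a 2-D array)
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0])
```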

- 20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 2) - Mar 2, 2020.
We explain important AI, ML, Data Science terms you should know in 2020, including Double Descent, Ethics in AI, Explainability (Explainable AI), Full Stack Data Science, Geospatial, GPT-2, NLG (Natural Language Generation), PyTorch, Reinforcement Learning, and Transformer Architecture.
AI, Data Science, Explainability, Geospatial, GPT-2, Key Terms, Machine Learning, Natural Language Generation, Reinforcement Learning, Transformer
- Observability for Data Engineering - Feb 10, 2020.
Going beyond traditional monitoring techniques and goals, understanding whether a system is working as intended requires a new concept in DevOps, called Observability. Learn more about this essential approach to bringing more context to your system metrics.
Data Engineering, DevOps, Explainability, KPI, Monitoring, Time Series
- Do You Trust and Understand Your Predictive Models? - Feb 4, 2020.
To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition”. Download this report now.
ebook, Explainability, H2O, Interpretability, O'Reilly, Prediction, Trust
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP - Jan 21, 2020.
This article demonstrates explainability of the decisions made by LightGBM and Keras models in classifying a transaction as fraudulent, using two state-of-the-art open-source explainability techniques, LIME and SHAP; a brief LIME sketch follows below.
Deep Learning, Ensemble Methods, Explainability, LIME, SHAP
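As a brief illustration of the LIME half of that workflow, here is a hedged sketch of a tabular explanation; the classifier `clf` (which must expose `predict_proba`), the frames `X_train`/`X_test`, and the class labels are placeholder assumptions.

```python
# Hedged LIME sketch (clf, X_train, X_test, and the class labels are
# placeholder assumptions; clf must expose predict_proba, e.g. a LightGBM
# model trained through the scikit-learn API).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    np.asarray(X_train),
    feature_names=list(X_train.columns),
    class_names=["legit", "fraud"],
    mode="classification",
)

# Explain why one transaction was scored as fraudulent
exp = explainer.explain_instance(
    np.asarray(X_test)[0], clf.predict_proba, num_features=5)
exp.show_in_notebook()
```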
- Introducing Generalized Integrated Gradients (GIG): A Practical Method for Explaining Diverse Ensemble Machine Learning Models - Jan 7, 2020.
There is a need for a new way to explain complex, ensembled ML models for high-stakes applications such as credit and lending. This is why we invented GIG.
Ensemble Methods, Explainability, Machine Learning
- Google’s New Explainable AI Service - Dec 20, 2019.
Google has started offering a new service for “explainable AI” or XAI, as it is fashionably called. The tools presently offered are modest, but the intent is a step in the right direction.
AI, Explainability, Explainable AI, Google
- Interpretability part 3: opening the black box with LIME and SHAP - Dec 19, 2019.
The third part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers methods that try to explain each prediction instead of establishing a global explanation.
Explainability, Interpretability, LIME, SHAP
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
Explainability, Explainable AI, Feature Selection, Interpretability, Python
- 10 Free Top Notch Machine Learning Courses - Dec 6, 2019.
Are you interested in studying machine learning over the holidays? This collection of 10 free top notch courses will allow you to do just that, with something for every approach to improving your machine learning skills.
Books, Computer Vision, Courses, Deep Learning, Explainability, Graph Analytics, Interpretability, Machine Learning, NLP, Python
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
Explainability, Explainable AI, Interpretability, XAI
- Why the ‘why way’ is the right way to restoring trust in AI - Oct 8, 2019.
As so many more organizations now rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.
AI, Explainability, GDPR, Trust, XAI
- Beyond Explainability: A Practical Guide to Managing Risks in Machine Learning Models - Sep 20, 2019.
This white paper provides the first-ever standard for managing risk in AI and ML, focusing on both practical processes and technical best practices “beyond explainability” alone. Download now.
Explainability, Immuta, Machine Learning, Privacy, Risks, White Paper
- Introducing AI Explainability 360: A New Toolkit to Help You Understand what Machine Learning Models are Doing - Aug 27, 2019.
Recently, AI researchers from IBM open sourced AI Explainability 360, a new toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
AI, Explainability, Machine Learning, Modeling
- Two Major Difficulties in AI and One Applied Solution - Feb 22, 2019.
Some of AI’s biggest problems can be solved by focusing on modelling our own human abilities instead of admiring NN and ML “intelligence”. We present an example that takes us in that direction in the form of chess.
AI, Chess, Explainability