- Interpretable Machine Learning: The Free eBook - Apr 9, 2021.
Interested in learning more about interpretability in machine learning? Check out this free eBook to learn about the basics, simple interpretable models, and strategies for interpreting more complex black box models.
- Shapash: Making Machine Learning Models Understandable - Apr 2, 2021.
Establishing an expectation for trust around AI technologies may soon become one of the most important skills provided by Data Scientists. Significant research investments are underway in this area, and new tools are being developed, such as Shapash, an open-source Python library that helps Data Scientists make machine learning models more transparent and understandable.
- Introduction to the White-Box AI: the Concept of Interpretability - Mar 31, 2021.
Interpretability of ML models can be defined as “the ability to explain or to present in understandable terms to a human.” Read this article and learn to go beyond the black box of AI, where algorithms make predictions but the underlying explanation remains unknown and untraceable.
- Explainable Visual Reasoning: How MIT Builds Neural Networks that can Explain Themselves - Mar 30, 2021.
New MIT research attempts to close the gap between state-of-the-art performance and interpretable models in computer vision tasks.
- KDnuggets™ News 21:n06, Feb 10: The Best Data Science Project to Have in Your Portfolio; Deep learning doesn’t need to be a black box - Feb 10, 2021.
The Best Data Science Project to Have in Your Portfolio; Deep learning doesn’t need to be a black box; Build Your First Data Science Application; How to create stunning visualizations using python from scratch; How to Get Your First Job in Data Science without Any Work Experience
- Adversarial Attacks on Explainable AI - Feb 9, 2021.
Are explainability methods black-box themselves?
- Deep learning doesn’t need to be a black box - Feb 5, 2021.
The cultural perception of AI is often suspect because of the described challenges in knowing why a deep neural network makes its predictions. So, researchers try to crack open this "black box" after a network is trained to correlate results with inputs. But, what if the goal of explainability could be designed into the network's architecture -- before the model is trained and without reducing its predictive power? Maybe the box could stay open from the beginning.
- AI registers: finally, a tool to increase transparency in AI/ML - Dec 9, 2020.
Transparency, explainability, and trust are pressing topics in AI/ML today. While much has been written about why they are important and what you need to do, no tools have existed until now.
- tensorflow + dalex = :) , or how to explain a TensorFlow model - Nov 13, 2020.
Having a machine learning model that generates interesting predictions is one thing. Understanding why it makes these predictions is another. For a TensorFlow predictive model, it can be straightforward and convenient to develop explainable AI by leveraging the dalex Python package.
- Interpretability, Explainability, and Machine Learning – What Data Scientists Need to Know - Nov 4, 2020.
The terms “interpretability,” “explainability” and “black box” are tossed about a lot in the context of machine learning, but what do they really mean, and why do they matter?
- Top Python Libraries for Data Science, Data Visualization & Machine Learning - Nov 2, 2020.
This article compiles the 38 top Python libraries for data science, data visualization & machine learning, as best determined by KDnuggets staff.
- Explaining the Explainable AI: A 2-Stage Approach - Oct 29, 2020.
Understanding how to build AI models is one thing. Understanding why AI models provide the results they provide is another. Even more so, explaining any type of understanding of AI models to humans is yet another challenging layer that must be addressed if we are to develop a complete approach to Explainable AI.
- Explainable and Reproducible Machine Learning Model Development with DALEX and Neptune - Aug 27, 2020.
With ML models serving real people, misclassified cases (a natural consequence of using ML) affect people's lives and sometimes treat them very unfairly. This makes the ability to explain your models' predictions a requirement rather than just a nice-to-have.
- modelStudio and The Grammar of Interactive Explanatory Model Analysis - Jun 19, 2020.
modelStudio is an R package that automates the exploration of ML models and allows for interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks.
- Nitpicking Machine Learning Technical Debt - Jun 8, 2020.
Technical Debt in software development is pervasive. With machine learning engineering maturing, this classic trouble is unsurprisingly rearing its ugly head. These 25 best practices, first described in 2015 and promptly overshadowed by shiny new ML techniques, are updated for 2020 and ready for you to follow -- and lead the way to better ML code and processes in your organization.
- Evidence Counterfactuals for explaining predictive models on Big Data - May 18, 2020.
Big Data generated by people -- such as social media posts, mobile phone GPS locations, and browsing history -- provides enormous predictive value for AI systems. However, explaining how these models use that data to make predictions remains challenging. This interesting explanation approach considers how a model would behave if it didn't have the original set of data to work with.
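The counterfactual idea described above can be sketched with a toy greedy search: zero out the features that most support the predicted class until the prediction flips. This is an illustrative simplification under assumed names (`evidence_counterfactual` is hypothetical), not the paper's exact algorithm, and it uses a linear model so that "evidence" has an obvious per-feature score.

```python
# Hedged sketch of an evidence-counterfactual search (illustrative, not the
# paper's algorithm). Assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def evidence_counterfactual(model, x, max_removed=10):
    """Greedily zero out features supporting the prediction until it flips."""
    x = x.copy()
    original = model.predict([x])[0]
    removed = []
    for _ in range(max_removed):
        # Per-feature evidence for the predicted class (sign-adjusted coef * value).
        sign = 1 if original == 1 else -1
        scores = sign * model.coef_[0] * x
        candidates = [i for i in np.argsort(scores)[::-1] if x[i] != 0]
        if not candidates:
            break
        i = candidates[0]
        x[i] = 0.0                  # "remove" the strongest piece of evidence
        removed.append(int(i))
        if model.predict([x])[0] != original:
            return removed          # a small evidence set that flips the prediction
    return None                     # no flip found within the budget

print(evidence_counterfactual(model, X[0]))
```

The returned index set is the explanation: "had the model not seen this evidence, it would have decided differently."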
- KDnuggets™ News 20:n19, May 13: Start Your Machine Learning Career in Quarantine; Will Machine Learning Engineers Exist in 10 Years? - May 13, 2020.
Also: The Elements of Statistical Learning: The Free eBook; Explaining "Blackbox" Machine Learning Models: Practical Application of SHAP; What You Need to Know About Deep Reinforcement Learning; 5 Concepts You Should Know About Gradient Descent and Cost Function; Hyperparameter Optimization for Machine Learning Models
- Explaining “Blackbox” Machine Learning Models: Practical Application of SHAP - May 6, 2020.
Train a "blackbox" GBM model on a real dataset and make it explainable with SHAP.
- 20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 2) - Mar 2, 2020.
We explain important AI, ML, Data Science terms you should know in 2020, including Double Descent, Ethics in AI, Explainability (Explainable AI), Full Stack Data Science, Geospatial, GPT-2, NLG (Natural Language Generation), PyTorch, Reinforcement Learning, and Transformer Architecture.
- Observability for Data Engineering - Feb 10, 2020.
Going beyond traditional monitoring techniques and goals, understanding whether a system is working as intended requires a new DevOps concept called Observability. Learn more about this essential approach to bring more context to your system metrics.
- Do You Trust and Understand Your Predictive Models? - Feb 4, 2020.
To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read “An Introduction to Machine Learning Interpretability, Second Edition.” Download this report now.
- Explaining Black Box Models: Ensemble and Deep Learning Using LIME and SHAP - Jan 21, 2020.
This article demonstrates explainability of the decisions made by LightGBM and Keras models in classifying a transaction as fraudulent, using two state-of-the-art open-source explainability techniques, LIME and SHAP.
- Introducing Generalized Integrated Gradients (GIG): A Practical Method for Explaining Diverse Ensemble Machine Learning Models - Jan 7, 2020.
There is a need for a new way to explain complex, ensembled ML models for high-stakes applications such as credit and lending. This is why we invented GIG.
- Google’s New Explainable AI Service - Dec 20, 2019.
Google has started offering a new service for “explainable AI,” or XAI, as it is fashionably called. The tools presently offered are modest, but the intent is in the right direction.
- Interpretability part 3: opening the black box with LIME and SHAP - Dec 19, 2019.
The third part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers methods that try to explain each prediction instead of establishing a global explanation.
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
- 10 Free Top Notch Machine Learning Courses - Dec 6, 2019.
Are you interested in studying machine learning over the holidays? This collection of 10 free top notch courses will allow you to do just that, with something for every approach to improving your machine learning skills.
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
- Why the ‘why way’ is the right way to restoring trust in AI - Oct 8, 2019.
As many more organizations now rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.
- Beyond Explainability: A Practical Guide to Managing Risks in Machine Learning Models - Sep 20, 2019.
This white paper provides the first-ever standard for managing risk in AI and ML, focusing on both practical processes and technical best practices “beyond explainability” alone. Download now.
- Introducing AI Explainability 360: A New Toolkit to Help You Understand what Machine Learning Models are Doing - Aug 27, 2019.
Recently, AI researchers from IBM open-sourced AI Explainability 360, a new toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
- Two Major Difficulties in AI and One Applied Solution - Feb 22, 2019.
Some of AI’s biggest problems can be solved by focusing on modelling our own human abilities instead of admiring NN and ML “intelligence”. We present an example that takes us in that direction in the form of chess.