- Explainable Forecasting and Nowcasting with State-of-the-art Deep Neural Networks and Dynamic Factor Model - Dec 27, 2021.
Review this detailed tutorial with code, and revisit the decades-old problem of how precisely we can anticipate the future and understand its causal factors, using a democratized and interpretable AI framework.
- 11 Most Practical Data Science Skills for 2022 - Oct 19, 2021.
While the field of data science continues to evolve with exciting new progress in analytical approaches and machine learning, there remain a core set of skills that are foundational for all general practitioners and specialists, especially those who want to be employable with full-stack capabilities.
- What Makes AI Trustworthy? - May 11, 2021.
This blog covers why AI needs to be trustworthy and what makes it so. AI predictions and suggestions should not be taken at face value, but delved into at a deeper level. We need to understand how an AI system makes its predictions before putting our trust in it; trust should not be built on prediction accuracy alone.
- Interpretable Machine Learning: The Free eBook - Apr 9, 2021.
Interested in learning more about interpretability in machine learning? Check out this free eBook to learn about the basics, simple interpretable models, and strategies for interpreting more complex black box models.
- KDnuggets™ News 21:n13, Apr 7: Top 10 Python Libraries Data Scientists should know in 2021; KDnuggets Top Blogs Reward Program; Making Machine Learning Models Understandable - Apr 7, 2021.
Top 10 Python Libraries Data Scientists should know in 2021; KDnuggets Top Blogs Reward Program; Shapash: Making Machine Learning Models Understandable; Easy AutoML in Python; The 8 Most Common Data Scientists; A/B Testing: 7 Common Questions and Answers in Data Science Interviews, Part 1
- Introduction to the White-Box AI: the Concept of Interpretability - Mar 31, 2021.
ML model interpretability can be seen as “the ability to explain or to present in understandable terms to a human.” Read this article and learn to go beyond the black box of AI, where algorithms make predictions while the underlying explanation remains unknown and untraceable.
- Explainable Visual Reasoning: How MIT Builds Neural Networks that can Explain Themselves - Mar 30, 2021.
New MIT research attempts to close the gap between state-of-the-art performance and interpretable models in computer vision tasks.
- Adversarial Attacks on Explainable AI - Feb 9, 2021.
Are explainability methods black-box themselves?
- Deep learning doesn’t need to be a black box - Feb 5, 2021.
The cultural perception of AI is often suspect because of the described challenges in knowing why a deep neural network makes its predictions. So, researchers try to crack open this "black box" after a network is trained to correlate results with inputs. But, what if the goal of explainability could be designed into the network's architecture -- before the model is trained and without reducing its predictive power? Maybe the box could stay open from the beginning.
- Production Machine Learning Monitoring: Outliers, Drift, Explainers & Statistical Performance - Dec 21, 2020.
A practical deep dive on production monitoring architectures for machine learning at scale using real-time metrics, outlier detectors, drift detectors, metrics servers and explainers.
- tensorflow + dalex = :) , or how to explain a TensorFlow model - Nov 13, 2020.
Having a machine learning model that generates interesting predictions is one thing. Understanding why it makes these predictions is another. For a TensorFlow predictive model, it can be straightforward and convenient to develop explainable AI by leveraging the dalex Python package.
- Interpretability, Explainability, and Machine Learning – What Data Scientists Need to Know - Nov 4, 2020.
The terms “interpretability,” “explainability” and “black box” are tossed about a lot in the context of machine learning, but what do they really mean, and why do they matter?
- KDnuggets™ News 20:n42, Nov 4: Top Python Libraries for Data Science, Data Visualization & Machine Learning; Mastering Time Series Analysis - Nov 4, 2020.
Top Python Libraries for Data Science, Data Visualization, Machine Learning; Mastering Time Series Analysis with Help From the Experts; Explaining the Explainable AI: A 2-Stage Approach; The Missing Teams For Data Scientists; and more.
- Explaining the Explainable AI: A 2-Stage Approach - Oct 29, 2020.
Understanding how to build AI models is one thing. Understanding why AI models provide the results they provide is another. Even more so, explaining any type of understanding of AI models to humans is yet another challenging layer that must be addressed if we are to develop a complete approach to Explainable AI.
- Explainable and Reproducible Machine Learning Model Development with DALEX and Neptune - Aug 27, 2020.
With ML models serving real people, misclassified cases (a natural consequence of using ML) affect people’s lives and sometimes treat them very unfairly. This makes the ability to explain your models’ predictions a requirement rather than just a nice-to-have.
- KDnuggets™ News 19:n49, Dec 27: What is a Data Scientist Worth? New Explainable AI from Google - Dec 27, 2019.
What is a Data Scientist Worth?; Google's New Explainable AI Service; The Most In Demand Tech Skills for Data Scientists; The 4 fastest ways NOT to get hired as a data scientist; and KDnuggets Cartoon which was included in a surprising textbook.
- Google’s New Explainable AI Service - Dec 20, 2019.
Google has started offering a new service for “explainable AI,” or XAI, as it is fashionably called. The tools presently offered are modest, but the intent is in the right direction.
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
- Opening Black Boxes: How to leverage Explainable Machine Learning - Aug 1, 2019.
A machine learning model that predicts some outcome provides value. One that explains why it made the prediction creates even more value for your stakeholders. Learn how Interpretable and Explainable ML technologies can help while developing your model.
- A Data Science Playbook for explainable ML/xAI - Jul 30, 2019.
This technical webinar on Aug 14 discusses traditional and modern approaches for interpreting black box models. Additionally, we will review cutting edge research coming out of UCSF, CMU, and industry.
- “Please, explain.” Interpretability of machine learning models - May 9, 2019.
Unveiling the secrets of black box models is no longer a novelty but a new business requirement, and we explain why using several different use cases.
- An introduction to explainable AI, and why we need it - Apr 15, 2019.
We introduce explainable AI, why it is needed, and present the Reversed Time Attention Model, Local Interpretable Model-Agnostic Explanation and Layer-wise Relevance Propagation.
- XAI – A Data Scientist’s Mouthpiece - Apr 1, 2019.
We outline the usefulness of Explainable AI, which allows you to explain the results of a multidimensional model - including having a multimodal decision boundary - to a business user.
- Explainable AI or Halting Faulty Models ahead of Disaster - Mar 27, 2019.
We give a brief overview of a new method for explainable AI (XAI) called anchors, introduce its open-source implementation, and show how to use it to explain models predicting the survival of Titanic passengers.
- The AI Black Box Explanation Problem - Mar 25, 2019.
Introducing Black Box AI, a system for automated decision making, often based on machine learning over big data, which maps a user’s features into a class predicting the behavioural traits of individuals.
- Reinforce AI Conference, March 20-22, Budapest - Feb 12, 2019.
Reinforce is the perfect place to meet other professionals, network with the leaders of the AI industry and have a beer in Budapest, in the heart of Europe. Use code KDNuggets for 20% off.
- Explainable Artificial Intelligence - Jan 10, 2019.
We outline the necessity of explainable AI, discuss some of the methods in academia, take a look at explainability vs accuracy, investigate use cases, and more.
- A Case For Explainable AI & Machine Learning - Dec 27, 2018.
In support of the explainable AI cause, we present a variety of use cases covering operational needs, regulatory compliance and public trust and social acceptance.
- Machine Learning Explainability vs Interpretability: Two concepts that could help restore trust in AI - Dec 20, 2018.
We explain the key differences between explainability and interpretability and why they're so important for machine learning and AI, before taking a look at several techniques and methods for improving machine learning interpretability.
- Four Approaches to Explaining AI and Machine Learning - Dec 12, 2018.
We discuss several explainability techniques being championed today, including LOCO (leave one column out), permutation impact, and LIME (local interpretable model-agnostic explanations).
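Of the techniques named above, permutation importance is simple enough to sketch directly: shuffle one feature column at a time and measure how much the model’s accuracy drops. The toy model and data below are hypothetical stand-ins, not taken from the article:

```python
import random

# Hypothetical stand-in model: "predicts" 1 when feature 0 exceeds feature 1.
def model(row):
    return 1 if row[0] > row[1] else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, n_features, seed=0):
    """Importance of feature j = drop in accuracy when column j is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    importances = []
    for j in range(n_features):
        column = [r[j] for r in rows]
        rng.shuffle(column)
        permuted = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, column)]
        importances.append(baseline - accuracy(permuted, labels))
    return importances

# Toy data: the label depends only on features 0 and 1; feature 2 is noise,
# so its importance comes out as zero while the first two are positive.
rows = [[i % 7, (i * 3) % 5, i % 2] for i in range(200)]
labels = [1 if r[0] > r[1] else 0 for r in rows]
print(permutation_importance(rows, labels, n_features=3))
```

The same idea underlies LOCO, except that LOCO refits the model with the column removed rather than shuffled.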
- Explainable Artificial Intelligence (Part 2) – Model Interpretation Strategies - Dec 6, 2018.
The aim of this article is to give you a good understanding of existing, traditional model interpretation methods, their limitations and challenges. We will also cover the classic model accuracy vs. model interpretability trade-off and finally take a look at the major strategies for model interpretation.
- Interpretability is crucial for trusting AI and machine learning - Nov 30, 2018.
We explain what exactly interpretability is and why it is so important, focusing on its use for data scientists, end users and regulators.
- How Important is that Machine Learning Model be Understandable? We analyze poll results - Nov 19, 2018.
About 85% of respondents said it was always or frequently important that a Machine Learning model be understandable. This was especially important for academic researchers and, surprisingly, more so in US/Canada than in Europe or Asia.
- Using Uncertainty to Interpret your Model - Nov 16, 2018.
We outline why you should care about uncertainty and discuss the different types, including model, data and measurement uncertainty and what different purposes these all serve.
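A minimal way to see model (epistemic) uncertainty, one of the types the article distinguishes, is ensemble disagreement: train several models with different seeds and treat the spread of their predictions as the uncertainty estimate. The sketch below uses a toy ensemble of noisy linear models as a hypothetical stand-in for networks trained from different initializations:

```python
import random
import statistics

# Hypothetical ensemble: each "model" is a noisy linear fit y ≈ 2x,
# standing in for networks trained with different random seeds.
def make_model(seed):
    rng = random.Random(seed)
    slope = 2.0 + rng.gauss(0, 0.1)
    return lambda x: slope * x

ensemble = [make_model(s) for s in range(20)]

def predict_with_uncertainty(x):
    """Mean prediction plus model uncertainty measured as ensemble spread."""
    preds = [m(x) for m in ensemble]
    return statistics.mean(preds), statistics.stdev(preds)

# The spread grows with |x|: the models agree near the data they have seen
# (here, near 0) and disagree where they must extrapolate.
mean_near, sd_near = predict_with_uncertainty(1.0)
mean_far, sd_far = predict_with_uncertainty(100.0)
print(sd_near, sd_far)
```

Data (aleatoric) uncertainty is different: it would not shrink by adding more ensemble members, since it reflects noise in the observations themselves.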
- New Poll: How Important is Understanding Machine Learning Models? - Oct 30, 2018.
The new KDnuggets poll asks: when building Machine Learning / Data Science models in 2018, how often was it important that the model be humanly understandable/explainable? Please vote.
- Holy Grail of AI for Enterprise — Explainable AI - Oct 19, 2018.
Explainable AI (XAI) is an emerging branch of AI where AI systems are made to explain the reasoning behind every decision made by them. We investigate some of its key benefits and design principles.
- The Definitive Guide to AI’s “Black Box” Problem - Oct 17, 2018.
The Amazing, Anti-Jargon, Insight Filled, and Totally Free Handbook to Integrating AI in Highly Regulated Industries - get it now.
- Four Big Data Trends for 2018 - Jan 25, 2018.
Curious about the future of Big Data and AI? Here’s what the 2018 trends hold for innovation.
- O’Reilly NYC AI Conference Highlights: Explainable AI, Vector Representation, Bias, and Future - Aug 21, 2017.
The answer to questions of trust and bias in AI is largely seen in the focus on Explainable AI. Although traditionally viewed as "black boxes", AI and machine learning systems are not ontologically inscrutable.