- A Deep Learning Dream: Accuracy and Interpretability in a Single Model - Sep 7, 2020.
IBM Research believes that you can improve the accuracy of interpretable models with knowledge learned in pre-trained models.
- Explainable and Reproducible Machine Learning Model Development with DALEX and Neptune - Aug 27, 2020.
With ML models serving real people, misclassified cases (a natural consequence of using ML) affect people's lives and sometimes treat them very unfairly. This makes the ability to explain your models' predictions a requirement rather than just a nice-to-have.
- Understanding How Neural Networks Think - Jul 16, 2020.
A couple of years ago, Google published one of the seminal papers in machine learning interpretability.
- modelStudio and The Grammar of Interactive Explanatory Model Analysis - Jun 19, 2020.
modelStudio is an R package that automates the exploration of ML models and allows for interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks.
- Nitpicking Machine Learning Technical Debt - Jun 8, 2020.
Technical Debt in software development is pervasive. With machine learning engineering maturing, this classic trouble is unsurprisingly rearing its ugly head. These 25 best practices, first described in 2015 and promptly overshadowed by shiny new ML techniques, are updated for 2020 and ready for you to follow -- and lead the way to better ML code and processes in your organization.
- Explaining “Blackbox” Machine Learning Models: Practical Application of SHAP - May 6, 2020.
Train a "blackbox" GBM model on a real dataset and make it explainable with SHAP.
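SHAP's core idea is the Shapley value: a feature's contribution averaged over all coalitions of the other features. As an illustration only (not the article's code, and without the `shap` library's optimizations), the exact computation can be sketched in pure Python for a toy model, replacing "absent" features with baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x; features outside
    the coalition are replaced by their baseline values."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

# Toy linear model: attributions are exactly coef * (x - baseline)
f = lambda z: 3 * z[0] + 2 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # → [3.0, 2.0]
```

The brute force is exponential in the number of features; the `shap` package the article uses exists precisely to compute these values efficiently for tree ensembles such as GBMs.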
- A simple and interpretable performance measure for a binary classifier - Mar 4, 2020.
Binary classification tasks are the bread and butter of machine learning. However, the standard statistic for their performance is a mathematical tool that is difficult to interpret -- the ROC-AUC. Here, a performance measure is introduced that simply considers the probability of making a correct binary classification.
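The contrast the blurb draws can be made concrete. Below is a small sketch (illustrative, not the article's exact measure): ROC-AUC is the probability that a random positive outscores a random negative, while the simpler quantity is just the probability of a correct classification at a chosen threshold:

```python
import numpy as np

def roc_auc(y, s):
    """ROC-AUC: P(random positive scores above random negative),
    counting ties as half."""
    pos, neg = s[y == 1], s[y == 0]
    diffs = pos[:, None] - neg[None, :]
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

def p_correct(y, s, thresh=0.5):
    """The plain probability of a correct classification at a threshold."""
    return np.mean((s >= thresh) == (y == 1))

y = np.array([0, 0, 1, 1, 1, 0])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.6])
print(roc_auc(y, s), p_correct(y, s))  # → 0.777... 0.666...
```

The second number answers a question a stakeholder actually asks ("how often is it right?"), which is the interpretability argument the article makes against ROC-AUC.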
- Do You Trust and Understand Your Predictive Models? - Feb 4, 2020.
To help practitioners make the most of recent and disruptive breakthroughs in debugging, explainability, fairness, and interpretability techniques for machine learning, read "An Introduction to Machine Learning Interpretability, Second Edition". Download this report now.
- A bird’s-eye view of modern AI from NeurIPS 2019 - Jan 28, 2020.
With the explosion of the field of AI/ML impacting so many applications and industries, there is great value coming out of recent progress. This review highlights many research areas covered at the NeurIPS 2019 conference recently held in Vancouver, Canada, and features many important areas of progress we expect to see in the coming year.
- Uber Has Been Quietly Assembling One of the Most Impressive Open Source Deep Learning Stacks in the Market - Jan 27, 2020.
Many of the technologies used by Uber teams have been open sourced and received accolades from the machine learning community. Let’s look at some of my favorites.
- Interpretability part 3: opening the black box with LIME and SHAP - Dec 19, 2019.
The third part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers methods that try to explain each prediction instead of establishing a global explanation.
- Interpretability: Cracking open the black box, Part 2 - Dec 11, 2019.
The second part in a series on leveraging techniques to take a look inside the black box of AI, this guide considers post-hoc interpretation that is useful when the model is not transparent.
- 10 Free Top Notch Machine Learning Courses - Dec 6, 2019.
Are you interested in studying machine learning over the holidays? This collection of 10 free top notch courses will allow you to do just that, with something for every approach to improving your machine learning skills.
- Explainability: Cracking open the black box, Part 1 - Dec 4, 2019.
What is Explainability in AI and how can we leverage different techniques to open the black box of AI and peek inside? This practical guide offers a review and critique of the various techniques of interpretability.
- Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead - Nov 20, 2019.
The two main takeaways from this paper: firstly, a sharpening of my understanding of the difference between explainability and interpretability, and why the former may be problematic; and secondly some great pointers to techniques for creating truly interpretable models.
- Choosing a Machine Learning Model - Oct 14, 2019.
Selecting the perfect machine learning model is part art and part science. Learn how to review multiple models and pick the best in both competitive and real-world applications.
- KDnuggets™ News 19:n34, Sep 11: I wasn’t getting hired as a Data Scientist. So I sought data on who is - Sep 11, 2019.
How one person overcame rejections applying to Data Scientist positions by getting actual data on who is getting hired; Advice from Andrew Ng on building ML career and reading research papers; 10 Great Python resources for Data Scientists; Python Libraries for Interpretable ML.
- Python Libraries for Interpretable Machine Learning - Sep 4, 2019.
In the following post, I am going to give a brief guide to four of the most established packages for interpreting and explaining machine learning models.
- A Data Science Playbook for explainable ML/xAI - Jul 30, 2019.
This technical webinar on Aug 14 discusses traditional and modern approaches for interpreting black box models. Additionally, we will review cutting edge research coming out of UCSF, CMU, and industry.
- This New Google Technique Helps Us Understand How Neural Networks are Thinking - Jul 24, 2019.
Recently, researchers from the Google Brain team published a paper proposing a new method called Concept Activation Vectors (CAVs) that takes a new angle on the interpretability of deep learning models.
- KDnuggets™ News 19:n19, May 15: Data Scientist – Best Job of the Year!; How (not) to use Machine Learning for time series forecasting - May 15, 2019.
"Please, explain." Interpretability of machine learning models; How to fix an Unbalanced Dataset; Data Science Poem; Customer Churn Prediction Using Machine Learning; A Complete Exploratory Data Analysis and Visualization for Text
- “Please, explain.” Interpretability of machine learning models - May 9, 2019.
Unveiling the secrets of black box models is no longer a novelty but a new business requirement, and we explain why using several different use cases.
- Are BERT Features InterBERTible? - Feb 19, 2019.
This is a short analysis of the interpretability of BERT contextual word representations. Does BERT learn a semantic vector representation like Word2Vec?
- Artificial Intelligence and Data Science Advances in 2018 and Trends for 2019 - Feb 18, 2019.
We recap some of the major highlights in data science and AI throughout 2018, before looking at the some of the potential newest trends and technological advances for the year ahead.
- The year in AI/Machine Learning advances: Xavier Amatriain 2018 Roundup - Jan 11, 2019.
A summary of the main machine learning advances from 2018, including AI hype cooling down, interpretability, deep learning, NLP, and more.
- A Case For Explainable AI & Machine Learning - Dec 27, 2018.
In support of the explainable AI cause, we present a variety of use cases covering operational needs, regulatory compliance and public trust and social acceptance.
- Machine Learning Explainability vs Interpretability: Two concepts that could help restore trust in AI - Dec 20, 2018.
We explain the key differences between explainability and interpretability and why they're so important for machine learning and AI, before taking a look at several techniques and methods for improving machine learning interpretability.
- Four Approaches to Explaining AI and Machine Learning - Dec 12, 2018.
We discuss several explainability techniques being championed today, including LOCO (leave one column out), permutation impact, and LIME (local interpretable model-agnostic explanations).
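Of the techniques named, permutation impact is the easiest to sketch from first principles: shuffle one column and measure how much the score drops. The snippet below is a minimal illustration under toy assumptions (a hypothetical `predict` function and accuracy metric), not the article's implementation:

```python
import numpy as np

def permutation_importance(predict, X, y, metric, seed=0):
    """Score drop when each column is shuffled = that feature's impact."""
    rng = np.random.default_rng(seed)
    base = metric(y, predict(X))
    imps = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's relationship to y
        imps.append(base - metric(y, predict(Xp)))
    return np.array(imps)

# Toy setup: only the first of three features matters
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
acc = lambda y, p: np.mean(y == p)
print(permutation_importance(predict, X, y, acc))
```

Shuffling the informative column costs roughly half the accuracy, while the irrelevant columns cost nothing. LOCO follows the same logic but retrains with the column removed, which is costlier but avoids evaluating the model on unrealistic shuffled inputs.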
- Explainable Artificial Intelligence (Part 2) – Model Interpretation Strategies - Dec 6, 2018.
The aim of this article is to give you a good understanding of existing, traditional model interpretation methods, their limitations and challenges. We will also cover the classic model accuracy vs. model interpretability trade-off and finally take a look at the major strategies for model interpretation.
- Interpretability is crucial for trusting AI and machine learning - Nov 30, 2018.
We explain what exactly interpretability is and why it is so important, focusing on its use for data scientists, end users and regulators.
- KDnuggets™ News 18:n44, Nov 21: What is the Best Python IDE for Data Science?; Anticipating the next move in data science - Nov 21, 2018.
Also: Mastering The New Generation of Gradient Boosting; Top 10 Python Data Science Libraries; Predictive Analytics in 2018: Salaries & Industry Shifts; Sorry I didn't get that! How to understand what your users want; Best Deals in Deep Learning Cloud Providers: From CPU to GPU to TPU
- Using Uncertainty to Interpret your Model - Nov 16, 2018.
We outline why you should care about uncertainty and discuss the different types, including model, data and measurement uncertainty and what different purposes these all serve.
- Key Takeaways from the Strata San Jose 2018 - Jul 16, 2018.
By dropping 'Hadoop' from its name, the @strataconf 2018 in San Jose signaled the emphasis on machine learning, cloud, streaming and real-time applications.
- 5 Machine Learning Projects You Should Not Overlook, June 2018 - Jun 12, 2018.
Here is a new installment of 5 more machine learning or machine learning-related projects you may not yet have heard of, but may want to consider checking out!
- Human Interpretable Machine Learning (Part 1) — The Need and Importance of Model Interpretation - Jun 6, 2018.
A brief introduction into machine learning model interpretation.
- Interpreting Machine Learning Models: An Overview - Nov 7, 2017.
This post summarizes the contents of a recent O'Reilly article outlining a number of methods for interpreting machine learning models, beyond the usual go-to measures.
- DataScience.com Releases Python Package for Interpreting the Decision-Making Processes of Predictive Models - May 24, 2017.
DataScience.com's new Python library, Skater, uses a combination of model interpretation algorithms to identify how models leverage data to make predictions.
- Simplifying Decision Tree Interpretability with Python & Scikit-learn - May 19, 2017.
This post will look at a few different ways of attempting to simplify decision tree representation and, ultimately, interpretability. All code is in Python, with Scikit-learn being used for the decision tree modeling.
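One of the simplest routes the post's toolkit offers is keeping the tree shallow and printing its rules as text. A minimal sketch with scikit-learn (using the standard Iris dataset for illustration; the post may use different data and depth settings):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Limiting depth is the bluntest interpretability lever: fewer rules to read
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned splits as nested if/else thresholds
rules = export_text(tree, feature_names=list(iris.feature_names))
print(rules)
```

The printed rules read like a short decision procedure a domain expert can check by hand, which is the whole point of simplifying the representation.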
- Big Data Desperately Needs Transparency - Mar 6, 2017.
If Big Data is to realize its potential, people need to understand what it is capable of, what information is out there and where every piece of data comes from. Without such transparency and understanding, it will be difficult to persuade people to rely on the findings.
- Measuring Topic Interpretability with Crowdsourcing - Nov 30, 2016.
Topic modelling is an important statistical modelling technique for discovering abstract topics in a collection of documents. This article talks about a new measure for assessing the semantic properties of statistical topics and how to use it.
- The Deception of Supervised Learning - Sep 13, 2016.
Do models or offline datasets ever really tell us what to do? Most applications of supervised learning are predicated on this deception.
- Interpretability over Accuracy - Aug 25, 2016.
If researchers can’t understand a provided answer, it is not viable. They can’t write about techniques they don’t understand beyond “Here are the numbers. Look how pretty my model is.” Good research, that ain’t.
- Introduction to Local Interpretable Model-Agnostic Explanations (LIME) - Aug 25, 2016.
Learn about LIME, a technique to explain the predictions of any machine learning classifier.
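LIME's idea fits in a few lines: sample points around the instance, weight them by proximity, and fit a linear surrogate whose coefficients are the local explanation. The sketch below is a from-scratch illustration of that idea (not the `lime` package's API; the kernel width and sampling scale are arbitrary choices):

```python
import numpy as np

def lime_explain(predict, x, n_samples=5000, width=0.75, seed=0):
    """Fit a proximity-weighted linear surrogate to a black-box predict()
    around point x; the coefficients are the local explanation."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.5, size=(n_samples, len(x)))  # perturb around x
    y = predict(X)
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)              # proximity kernel
    Xb = np.column_stack([np.ones(n_samples), X])   # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[1:]                                  # local feature weights

# Black box: globally nonlinear, but near x = (2, 0) feature 0 dominates
blackbox = lambda X: X[:, 0] ** 2 + 0.1 * X[:, 1]
print(lime_explain(blackbox, np.array([2.0, 0.0])))
```

Near (2, 0) the surrogate recovers slopes close to the local gradient (about 4 for the first feature, 0.1 for the second), which is exactly the kind of "why this prediction" answer LIME provides for classifiers as well.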
- The Myth of Model Interpretability - Apr 27, 2015.
Deep networks are widely regarded as black boxes. But are they truly uninterpretable in any way that logistic regression is not?