modelStudio and The Grammar of Interactive Explanatory Model Analysis

modelStudio is an R package that automates the exploration of ML models and allows for their interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks.



By Przemyslaw Biecek, Model Oriented

The new version of modelStudio has recently been released on CRAN.

modelStudio is an R package that automates the exploration of ML models and allows for their interactive examination. It works in a model-agnostic fashion and is therefore compatible with most ML frameworks (e.g. mlr/mlr3, xgboost, caret, h2o, scikit-learn, lightGBM, keras/tensorflow).
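To make the model-agnostic workflow concrete, here is a minimal sketch of how a dashboard is typically generated: a trained model is wrapped in a DALEX explainer and then passed to the modelStudio() function. The ranger model and the apartments data shipped with DALEX are just one possible choice, used here for illustration.

```r
library(DALEX)        # provides explain() and the apartments data
library(ranger)       # any supported ML framework could be used instead
library(modelStudio)

# train a model with your framework of choice
model <- ranger(m2.price ~ ., data = apartments)

# wrap the model, its data and the target into a model-agnostic explainer
explainer <- explain(model,
                     data  = apartments[, -1],
                     y     = apartments$m2.price,
                     label = "ranger")

# compute the explanations and open the interactive dashboard
modelStudio(explainer)
```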

We have recently uploaded to arXiv an article presenting the main principles behind this tool: The Grammar of Interactive Explanatory Model Analysis. Here are the highlights.

Figure: The first generation of model explanations aims at exploring individual aspects of a model's behaviour. The second generation aims at integrating these individual aspects into a vibrant, multi-threaded, customisable story about the model that addresses the needs of different stakeholders.

 

Local- and global-level model explanations complement each other. A growing number of voices argue that a single method of model exploration cannot meet the needs of all stakeholders (see e.g. Arya et al. 2019 or Sokol et al. 2020). In this article we show how common XAI methods can be combined into larger blocks that complement each other. Such constructs address a wider range of user needs. The picture below shows how such a juxtaposition of aspects presents the model from different perspectives and allows us to better understand how it behaves.

 

As in the story of the blind men and the elephant, we cannot sufficiently explain a complex model using a single method that gives only one perspective.

Isolated explanations are prone to misunderstanding, which inevitably leads to wrong reasoning. Without multi-faceted, interactive explanations, there will be neither understanding of nor trust in models.

Figure: The Grammar of Interactive Explanatory Model Analysis. It shows how the various methods of model exploration enrich each other. The names of popular techniques are listed in the cells; the columns and rows span the taxonomy. Edges in the graph indicate which method can be complemented by which.

 

The explanation of predictive models is a process, not a chart. We also argue that each explanation raises new questions, so a good XAI system should allow for interactive exploration of different aspects of the model. To make this possible, we introduce a taxonomy of explanations and propose a grammar that generates the process of exploring a complex model.

modelStudio implements the principles of IEMA. The modelStudio framework was created to allow this kind of iterative exploration with a quick feedback loop, as model debugging is often demanding and laborious.
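As a rough illustration of how this feedback loop can be kept short (the parameter names below follow the modelStudio() documentation on CRAN at the time of writing, so treat them as an assumption): reducing the number of sampled observations N and permutation rounds B makes the dashboard recompute faster while debugging, and new_observation selects the instances to be explained locally.

```r
library(DALEX)
library(modelStudio)

# a simple model on the apartments data shipped with DALEX, for illustration only
model <- lm(m2.price ~ ., data = apartments)
explainer <- explain(model,
                     data  = apartments[, -1],
                     y     = apartments$m2.price,
                     label = "lm")

modelStudio(explainer,
            new_observation = apartments[1:3, -1],  # instances to debug locally
            N = 200,  # observations sampled for the global explanations
            B = 5)    # permutation rounds for local attributions and importance
```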

Figure: Example exploration of a gbm model that predicts a player's worth based on the FIFA dataset. Find more at https://pbiecek.github.io/explainFIFA20/
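A rough sketch of how such a dashboard could be reproduced is given below. It assumes the fifa data shipped with recent versions of DALEX (numeric player statistics with value_eur as the target) and uses a small gbm model purely for illustration, so the exact columns and hyperparameters may differ from the original analysis linked above.

```r
library(DALEX)        # ships the fifa data (FIFA 20 players)
library(gbm)
library(modelStudio)

# log-transform the player's value, a common choice for this skewed target
fifa_data <- fifa
fifa_data$value_eur <- log10(fifa_data$value_eur)

model <- gbm(value_eur ~ ., data = fifa_data,
             distribution = "gaussian", n.trees = 250, interaction.depth = 3)

explainer <- explain(model,
                     data = fifa_data[, colnames(fifa_data) != "value_eur"],
                     y    = fifa_data$value_eur,
                     # gbm's predict() needs the number of trees to use
                     predict_function = function(m, d) predict(m, d, n.trees = 250),
                     label = "gbm")

modelStudio(explainer,
            new_observation = fifa_data[1:5, colnames(fifa_data) != "value_eur"])
```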

 

The topic of eXplainable Artificial Intelligence has attracted much attention recently. However, the literature is dominated either by works focused on lists of requirements for its better adoption or by contributions that take a very technical approach to explaining only a single aspect of a model. In the paper, we propose a third way. First, we argue that explaining a single aspect of a model is incomplete. Second, we propose a taxonomy of explanation methods that focuses on the needs of the different stakeholders apparent in the lifecycle of Machine Learning models. Third, we describe interactive XAI as a process in which explanations are linked through a sequence of analyses of complementary model aspects.

This is certainly only a single step towards a better understanding of the model exploration process. If you have any suggestions or comments about this process, we will be happy to hear them.

Thanks to Hubert Baniecki.

 
Bio: Przemyslaw Biecek is interested in innovations in predictive modeling. He posts about eXplainable AI, IML, AutoML, AutoEDA and Evidence-Based Machine Learning. Part of r-bloggers.com.

Original. Reposted with permission.
