- Big Data Desperately Needs Transparency - Mar 6, 2017.
If Big Data is to realize its potential, people need to understand what it is capable of, what information is out there and where every piece of data comes from. Without such transparency and understanding, it will be difficult to persuade people to rely on the findings.
- Measuring Topic Interpretability with Crowdsourcing - Nov 30, 2016.
Topic modelling is an important statistical modelling technique for discovering the abstract topics that occur in a collection of documents. This article introduces a new measure for assessing the semantic properties of statistical topics and explains how to use it.
- The Deception of Supervised Learning - Sep 13, 2016.
Do models or offline datasets ever really tell us what to do? Most applications of supervised learning are predicated on this deception.
- Interpretability over Accuracy - Aug 25, 2016.
If researchers can’t understand a provided answer, it is not viable. They can’t write about techniques they don’t understand beyond “Here are the numbers. Look how pretty my model is.” Good research, that ain’t.
- Introduction to Local Interpretable Model-Agnostic Explanations (LIME) - Aug 25, 2016.
Learn about LIME, a technique to explain the predictions of any machine learning classifier.
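To make the idea behind LIME concrete, here is a minimal from-scratch sketch of its core loop for one tabular instance: perturb around the instance, query the black box, weight samples by proximity, and fit a weighted linear surrogate. The black-box function, noise scale, and kernel width are all illustrative assumptions; the real `lime` package handles these details for you.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier (stand-in for any model):
# returns the probability of the positive class.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.0 * X[:, 1])))

def lime_explain(instance, predict_fn, num_samples=5000, kernel_width=0.75):
    """LIME-style local explanation for one tabular instance."""
    d = instance.shape[0]
    # 1. Perturb the instance with Gaussian noise.
    Z = instance + rng.normal(scale=1.0, size=(num_samples, d))
    # 2. Query the black box on the perturbed samples.
    y = predict_fn(Z)
    # 3. Proximity weights: exponential kernel on distance to the instance.
    dist = np.linalg.norm(Z - instance, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))
    # 4. Weighted least squares via sqrt-weight row scaling (intercept added).
    A = np.hstack([Z, np.ones((num_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature local importances (intercept dropped)

x0 = np.array([0.5, -0.5])
weights = lime_explain(x0, black_box)
```

For this toy model the surrogate should recover a positive weight on the first feature and a negative weight on the second, mirroring the black box's local behaviour.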
- The Myth of Model Interpretability - Apr 27, 2015.
Deep networks are widely regarded as black boxes. But are they truly uninterpretable in any way that logistic regression is not?