-
Vega-Lite: A grammar of interactive graphics
Vega and Vega-Lite follow in a long line of work that traces its roots back to Wilkinson’s ‘The Grammar of Graphics.’ Vega-Lite builds on Vega, bringing high-level specification of interactive visualisations to the ecosystem.
-
Task-based effectiveness of basic visualizations
This is a summary of a recent paper on an age-old topic: which visualisation should I use? No prizes for guessing “it depends!” Could this be the paper that finally settles the debate surrounding pie charts?
-
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
There are two main takeaways from this paper: firstly, a sharpening of my understanding of the difference between explainability and interpretability, and why the former may be problematic; and secondly, some great pointers to techniques for creating truly interpretable models.
-
Beyond news contents: the role of social context for fake news detection
This is a summary of a recent paper on a general fake news problem: detecting fake news as it spreads on a social network. The paper demonstrates why, beyond the content itself, we should also look at the social context: the publishers and the users spreading the information.
-
TensorFlow.js: Machine learning for the web and beyond
TensorFlow.js brings TensorFlow and Keras to the JavaScript ecosystem, supporting both Node.js and browser-based applications. Read a summary of the paper describing the design, API, and implementation of TensorFlow.js.
-
A comprehensive survey on graph neural networks
This article summarizes a paper which presents us with a broad sweep of the graph neural network landscape. It’s a survey paper, so you’ll find details on the key approaches and representative papers, as well as information on commonly used datasets and benchmark performance on them.
-
Deep learning scaling is predictable, empirically
This study starts with a simple question: “how can we improve the state of the art in deep learning?”
-
Understanding Deep Learning Requires Re-thinking Generalization
What is it that distinguishes neural networks that generalize well from those that don’t? A satisfying answer to this question would not only help to make neural networks more interpretable, but it might also lead to more principled and reliable model architecture design.
-
Learning to Learn by Gradient Descent by Gradient Descent
What if, instead of hand-designing an optimisation algorithm (a function), we learned it? By training on the class of problems we’re interested in solving, we can learn an optimal optimiser for that class!
-
Artificial Intelligence and Life in 2030
Read this engaging overview of a report from the Stanford University 100 year study of Artificial Intelligence, “a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.”