-
The Hard Problems AI Can’t (Yet) Touch
It's tempting to regard the progress of AI as though it were a single monolithic entity, advancing toward human intelligence on all fronts. But today's machine learning addresses only problems with simple, easily quantified objectives.
-
Are Deep Neural Networks Creative?
Deep neural networks routinely generate images and synthesize text. But does this amount to creativity? Can we reasonably claim that deep learning produces art?
-
The ICLR Experiment: Deep Learning Pioneers Take on Scientific Publishing
Deep learning pioneers Yann LeCun and Yoshua Bengio have undertaken a grand experiment in academic publishing. Embracing a radical level of transparency and unprecedented public participation, they've created an opportunity not only to find and vet the best papers, but also to gather data about the publication process itself.
-
Deep Learning Transcends the Bag of Words
Generative RNNs are now widely popular, many modeling text at the character level and typically trained with an unsupervised approach. Here we show how to generate contextually relevant sentences and describe recent work that does it successfully.
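The generative setup behind character-level RNNs can be sketched in a few lines: at each step the network updates a hidden state, emits a distribution over characters, samples one, and feeds it back in as the next input. The sketch below uses untrained random weights and a toy vocabulary (both assumptions for illustration), so its output is gibberish; a trained model would learn these weight matrices from a text corpus.

```python
import numpy as np

# Minimal character-level RNN sampler with random (untrained) weights.
rng = np.random.default_rng(0)
vocab = list("abcdefgh ")       # toy character vocabulary (assumption)
V, H = len(vocab), 16           # vocabulary size, hidden-state size

Wxh = rng.normal(0, 0.1, (H, V))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden (recurrent) weights
Why = rng.normal(0, 0.1, (V, H))   # hidden-to-output weights

def sample(seed_char, n):
    """Generate n characters, feeding each sample back as the next input."""
    h = np.zeros(H)
    x = np.zeros(V)
    x[vocab.index(seed_char)] = 1.0
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h)                   # recurrent update
        logits = Why @ h
        p = np.exp(logits - logits.max())                # softmax over chars
        p /= p.sum()
        idx = rng.choice(V, p=p)                         # sample a character
        out.append(vocab[idx])
        x = np.zeros(V)
        x[idx] = 1.0                                     # feed the sample back
    return "".join(out)

text = sample("a", 20)
```

Conditioning the hidden state on extra context (a topic, an image, a preceding sentence) is what turns this unconditional sampler into a generator of contextually relevant text.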
-
MetaMind Mastermind Richard Socher: Uncut Interview
In a wide-ranging interview, Richard Socher opens up about MetaMind, deep learning, the nature of corporate research, and the future of machine learning.
-
Does Deep Learning Come from the Devil?
Deep learning has revolutionized computer vision and natural language processing. Yet the mathematics explaining its success remains elusive. At the Yandex conference on machine learning prospects and applications, Vladimir Vapnik offered a critical perspective.
-
Recycling Deep Learning Models with Transfer Learning
Deep learning exploits gigantic datasets to produce powerful models. But what can we do when our datasets are comparatively small? Transfer learning by fine-tuning deep nets offers a way to leverage existing datasets to perform well on new tasks.
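The fine-tuning recipe described above can be sketched as follows: freeze the layers transferred from a network trained on a large source dataset, attach a fresh task-specific head, and update only the new parameters on the small target dataset. The backbone here is a stand-in random network and the sizes are toy assumptions; in practice the frozen layers would come from a model pretrained on something like ImageNet.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" backbone (assumption: in practice these weights
# would be transferred from a model trained on a large source dataset).
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)

for p in backbone.parameters():
    p.requires_grad = False          # freeze the transferred layers

head = nn.Linear(64, 3)              # new classifier head for the target task
model = nn.Sequential(backbone, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head updates
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)               # toy batch from the small target dataset
y = torch.randint(0, 3, (8,))
loss = loss_fn(model(x), y)
loss.backward()                      # gradients flow only into the head
opt.step()
```

When the target dataset is somewhat larger, a common variant is to unfreeze the last few backbone layers and fine-tune them with a smaller learning rate.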
-
Deep Learning and the Triumph of Empiricism
Theoretical guarantees are clearly desirable. And yet many of today's best-performing supervised learning algorithms offer none. What explains the gap between theoretical soundness and empirical success?
-
The Myth of Model Interpretability
Deep networks are widely regarded as black boxes. But are they truly uninterpretable in any way that logistic regression is not?
-
Do We Need More Training Data or More Complex Models?
Do we need more training data? Which models will suffer from performance saturation as data grows large? Do we need larger models or more complicated models, and what is the difference?