- Machine Learning & AI Main Developments in 2018 and Key Trends for 2019 - Dec 11, 2018.
As we bid farewell to one year and look to ring in another, KDnuggets has solicited opinions from numerous Machine Learning and AI experts as to the most important developments of 2018 and their 2019 key trend predictions.
- The Foundations of Algorithmic Bias - Nov 16, 2016.
We might hope that algorithmic decision making would be free of biases. But increasingly, the public is starting to realize that machine learning systems can exhibit these same biases and more. In this post, we look at precisely how that happens.
- The Deception of Supervised Learning - Sep 13, 2016.
Do models or offline datasets ever really tell us what to do? Most applications of supervised learning are predicated on this deception.
- Stop Blaming Terminator for Bad AI Journalism - Aug 11, 2016.
Too often, we blame The Terminator for the public's misconceptions concerning machine learning. But do James Cameron and the Austrian Oak stand wrongfully accused?
- Are Deep Neural Networks Creative? - May 12, 2016.
Deep neural networks routinely generate images and synthesize text. But does this amount to creativity? Can we reasonably claim that deep learning produces art?
- The ICLR Experiment: Deep Learning Pioneers Take on Scientific Publishing - Feb 15, 2016.
Deep learning pioneers Yann LeCun and Yoshua Bengio have undertaken a grand experiment in academic publishing. Embracing a radical level of transparency and unprecedented public participation, they've created an opportunity not only to find and vet the best papers, but also to gather data about the publication process itself.
- TensorFlow is Terrific – A Sober Take on Deep Learning Acceleration - Dec 30, 2015.
TensorFlow does not change the world. But it appears to be the best, most convenient deep learning library out there.
- Deep Learning Transcends the Bag of Words - Dec 7, 2015.
Generative RNNs are now widely popular, many modeling text at the character level and typically trained in an unsupervised fashion. Here we show how to generate contextually relevant sentences and explain recent work that does so successfully.
- MetaMind Mastermind Richard Socher: Uncut Interview - Oct 20, 2015.
In a wide-ranging interview, Richard Socher opens up about MetaMind, deep learning, the nature of corporate research, and the future of machine learning.
- Rich Data Summit Takeaways - Oct 19, 2015.
Data scientists get excited about algorithms. But nearly all time spent working with data involves acquiring, pipelining, annotating and cleaning it. At the Rich Data Summit in SF, data's dirty work took center stage.
- Does Deep Learning Come from the Devil? - Oct 9, 2015.
Deep learning has revolutionized computer vision and natural language processing. Yet the mathematics explaining its success remains elusive. At the Yandex conference on machine learning prospects and applications, Vladimir Vapnik offered a critical perspective.
- Recycling Deep Learning Models with Transfer Learning - Aug 14, 2015.
Deep learning exploits gigantic datasets to produce powerful models. But what can we do when our datasets are comparatively small? Transfer learning by fine-tuning deep nets offers a way to leverage existing datasets to perform well on new tasks.
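The fine-tuning idea the article describes can be sketched as follows: treat a pretrained network's lower layers as a frozen feature extractor and retrain only a small head on the new, smaller dataset. This is a minimal illustration, not the article's code; the random-projection "pretrained" extractor and the toy dataset are invented stand-ins for real frozen layers and real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen lower layers of a pretrained deep net:
# a fixed projection whose weights are never updated.
W_frozen = rng.normal(size=(2, 8))

def extract_features(x):
    # "Frozen" layers: no gradient updates flow here.
    return np.tanh(x @ W_frozen)

# Small labeled dataset for the new task.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune only a fresh linear head via logistic regression.
F = extract_features(X)
w = np.zeros(F.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))    # sigmoid predictions
    w -= 0.1 * F.T @ (p - y) / len(y)     # gradient step on log loss

acc = np.mean(((F @ w) > 0) == (y == 1))
print(f"head-only accuracy: {acc:.2f}")
```

Because only the small head is trained, far fewer labeled examples are needed than training the whole network from scratch would require.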
- arXiv.org and the 24 Hour Research Cycle - Jul 21, 2015.
arXiv.org gives researchers the ability to publish research instantly, free of peer review and the traditional publication cycle. This capability offers both advantages and pitfalls. We should regard the 24-hour news cycle as a cautionary tale for how this could go wrong.
- Deep Learning and the Triumph of Empiricism - Jul 7, 2015.
Theoretical guarantees are clearly desirable. And yet many of today's best-performing supervised learning algorithms offer none. What explains the gap between theoretical soundness and empirical success?
- Not So Fast: Questioning Deep Learning IQ Results - Jun 15, 2015.
Did deep learning just leap towards human intelligence? Not so fast.
- Will the Real Data Scientists Please Stand Up? - May 18, 2015.
Job postings for data scientists are everywhere. But what is a data scientist? I present a few archetypes.
- Cloud Machine Learning’s Ostrich Mania & Uncanny Valley - May 14, 2015.
Cloud machine learning services are popping up by the tens, providing automated data science solutions. What will the anticipated customers want? They may follow a peculiar distribution reminiscent of the uncanny valley.
- The Myth of Model Interpretability - Apr 27, 2015.
Deep networks are widely regarded as black boxes. But are they truly uninterpretable in any way that logistic regression is not?
- Cloud Machine Learning Wars: Amazon vs IBM Watson vs Microsoft Azure - Apr 16, 2015.
Amazon recently announced Amazon Machine Learning, a cloud machine learning solution for Amazon Web Services. Able to pull data effortlessly from RDS, S3 and Redshift, the product could pose a significant threat to Microsoft Azure ML and IBM Watson Analytics.
- Gold Mine or Blind Alley? Functional Programming for Big Data & Machine Learning - Apr 1, 2015.
Functional programming is touted as a solution for big data problems. Why is it advantageous? Why might it not be? And who is using it now?
- Do We Need More Training Data or More Complex Models? - Mar 23, 2015.
Do we need more training data? Which models will suffer from performance saturation as data grows large? Do we need larger models or more complicated models, and what is the difference?
- Failing Optimally – Data Science’s Measurement Problem - Mar 4, 2015.
Data science has a measurement problem. Simple metrics may not address complex situations. But complex metrics present myriad problems.
- Data Science’s Most Used, Confused, and Abused Jargon - Feb 10, 2015.
As data science has spread through the mainstream, so too has a dense vocabulary of ill-defined jargon. In a split-personality post, we offer several perspectives on many of data science's most confused terms.
- (Deep Learning’s Deep Flaws)’s Deep Flaws - Jan 26, 2015.
Recent press has challenged the hype surrounding deep learning, trumpeting several findings which expose shortcomings of current algorithms. However, many of deep learning's reported flaws are universal, affecting nearly all machine learning algorithms.
- The High Cost of Maintaining Machine Learning Systems - Jan 21, 2015.
Google researchers warn of the massive ongoing costs for maintaining machine learning systems. We examine how to minimize the technical debt.
- MetaMind Competes with IBM Watson Analytics and Microsoft Azure Machine Learning - Jan 14, 2015.
While Microsoft and IBM rush to bring data science and visualization to the masses, MetaMind follows another path, offering deep learning as a service.
- Differential Privacy: How to make Privacy and Data Mining Compatible - Jan 9, 2015.
Can privacy coexist with machine learning and data mining? Differential privacy allows the learning of general characteristics of populations while guaranteeing the privacy of individual records.
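One standard construction behind that guarantee (offered here as illustration, not necessarily the one the article covers) is the Laplace mechanism: add noise calibrated to how much any single record can shift a query's answer, so population statistics survive while individuals are masked.

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon):
    """Release the mean of bounded values with epsilon-differential privacy.

    Changing one record moves the mean by at most (hi - lo) / n,
    so Laplace noise with scale = sensitivity / epsilon suffices.
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon)

ages = [34, 45, 29, 61, 50, 38, 42, 55, 33, 47]
print(private_mean(ages, lo=0, hi=100, epsilon=1.0))
```

Smaller epsilon means stronger privacy but noisier answers; the general population mean is still learnable from repeated or larger queries while no single record is exposed.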
- Stanford’s AI100: Century-Long Study on Effects of Artificial Intelligence on Human Life - Dec 26, 2014.
Stanford unveils a new 100-year study on the impact of artificial intelligence, particularly on democracy, privacy, and the military. Surprisingly, perspectives from outside the AI community are absent from the initial panel.
- IBM Watson Analytics vs. Microsoft Azure Machine Learning (Part 1) - Dec 16, 2014.
The IBM Watson Analytics prototype seeks to abstract away data science, taking ordinary natural language queries and answering them based on the content of uploaded datasets. Microsoft Azure Machine Learning takes the opposite route, streamlining existing data mining methodology for fast results and integration with Microsoft's other cloud services.
- Geoff Hinton AMA: Neural Networks, the Brain, and Machine Learning - Dec 9, 2014.
In a wide-ranging Q&A, Geoff Hinton addresses the future of deep learning, its biological inspirations, and his research philosophy.