- Is Your Model Overtrained? - Apr 14, 2021.
WeightWatcher is based on theoretical research into Why Deep Learning Works, done jointly with UC Berkeley, and builds on our Theory of Heavy-Tailed Self-Regularization (HT-SR). It uses ideas from Random Matrix Theory (RMT), Statistical Mechanics, and Strongly Correlated Systems.
- Is It Too Late to Learn AI? - Mar 9, 2021.
Have you missed the train on learning AI?
- IBM Uses Continual Learning to Avoid The Amnesia Problem in Neural Networks - Feb 15, 2021.
Using continual learning may help neural networks avoid the famous catastrophic forgetting problem.
- Breaking Privacy in Federated Learning - Aug 26, 2020.
Despite the benefits of federated learning, there are still ways of breaching a user’s privacy, even without sharing private data. In this article, we’ll review some research papers that show how federated learning can still expose this vulnerability.
- Learning by Forgetting: Deep Neural Networks and the Jennifer Aniston Neuron - Jun 25, 2020.
DeepMind’s research shows how to understand the role of individual neurons in a neural network.
- Federated Learning: An Introduction - Apr 15, 2020.
Improving machine learning models and making them more secure by training on decentralized data.
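The core idea behind federated learning can be sketched with federated averaging (FedAvg): each client trains on its own data and only model parameters, never the data itself, are sent to a server for aggregation. The following is a minimal illustration using single-feature linear regression; the client datasets, learning rate, and number of local epochs are all illustrative assumptions, not anything from the article.

```python
# Minimal FedAvg sketch on a one-feature linear model (w * x + b).
# All client data below is made up for illustration.

def local_step(w, b, xs, ys, lr=0.05):
    """One full-batch gradient step of mean-squared-error regression on local data."""
    n = len(xs)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * grad_w, b - lr * grad_b

def fedavg_round(global_w, global_b, clients):
    """Each client trains locally from the shared global model; the server then
    averages the returned weights, weighted by local dataset size."""
    total = sum(len(xs) for xs, _ in clients)
    new_w = new_b = 0.0
    for xs, ys in clients:
        w, b = global_w, global_b
        for _ in range(5):  # a few local epochs per round
            w, b = local_step(w, b, xs, ys)
        new_w += len(xs) / total * w  # only weights leave the client
        new_b += len(xs) / total * b
    return new_w, new_b
```

Run over enough rounds on clients whose data all follows y = 2x, the global model converges toward w = 2, b = 0 without the server ever seeing a data point.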
- Few-Shot Image Classification with Meta-Learning - Mar 12, 2020.
Here is how you can teach your model to learn quickly from a few examples.
- Amazon Uses Self-Learning to Teach Alexa to Correct its Own Mistakes - Feb 10, 2020.
The digital assistant incorporates a reformulation engine that can learn to correct responses in real time based on customer interactions.
- The ravages of concept drift in stream learning applications and how to deal with it - Dec 18, 2019.
Stream data processing has gained progressive momentum with the arrival of new stream applications and big data scenarios. These data streams generally evolve over time and may occasionally be affected by a change (concept drift). Handling this change with detection and adaptation mechanisms is crucial in many real-world systems.
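A common way to detect concept drift is to monitor the model's running error rate and flag a change when it rises significantly above its historical minimum. Below is a minimal sketch loosely in the spirit of the Drift Detection Method (DDM) of Gama et al.; the warning and drift thresholds are conventional but the exact class and parameter names are my own assumptions.

```python
import math

class SimpleDriftDetector:
    """Minimal error-rate drift detector (loosely DDM-style, for illustration).

    Feed update() a 1 for each misclassification and 0 for each correct
    prediction; it answers "stable", "warning", or "drift".
    """

    def __init__(self, warn_level=2.0, drift_level=3.0):
        self.warn_level = warn_level
        self.drift_level = drift_level
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 1.0            # running error rate
        self.s = 0.0            # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        self.n += 1
        self.p += (error - self.p) / self.n                  # incremental mean
        self.s = math.sqrt(self.p * (1 - self.p) / self.n)   # binomial std dev
        if self.p + self.s < self.p_min + self.s_min:        # track the best point
            self.p_min, self.s_min = self.p, self.s
        if self.p + self.s > self.p_min + self.drift_level * self.s_min:
            self.reset()   # adaptation: restart statistics on the new concept
            return "drift"
        if self.p + self.s > self.p_min + self.warn_level * self.s_min:
            return "warning"
        return "stable"
```

In a real pipeline, a "drift" signal would typically trigger retraining the model on recent data rather than just resetting the detector.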
- Probability Learning: Naive Bayes - Nov 26, 2019.
This post will describe various simplifications of Bayes' Theorem that make it more practical and applicable to real-world problems: these simplifications are known as Naive Bayes. To clarify everything, we will also see a very illustrative example of how Naive Bayes can be applied for classification.
- Live Webinar: Continual Learning with Human-in-the-loop - Nov 18, 2019.
Join this live webinar from cnvrg, Continual Learning with Human-in-the-loop, Nov 26 @ 12 PM EST, and learn the role of human-in-the-loop in your ML pipeline, how to close the loop in your pipeline, and much more.
- Probability Learning: Maximum Likelihood - Nov 5, 2019.
The maths behind Bayes will be better understood if we first cover the theory and maths underlying another fundamental method of probabilistic machine learning: Maximum Likelihood. This post will be dedicated to explaining it.
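The essence of Maximum Likelihood is to pick the parameter value under which the observed data is most probable. A minimal sketch with a coin-flip (Bernoulli) model, where the MLE works out to the observed frequency of heads; the data and function names are illustrative assumptions:

```python
import math

def bernoulli_log_likelihood(p, flips):
    """Log-likelihood of observing `flips` (1 = heads, 0 = tails)
    under a coin with heads-probability p."""
    return sum(math.log(p if f else 1 - p) for f in flips)

def bernoulli_mle(flips):
    """The maximum-likelihood estimate of a coin's heads-probability
    is simply the observed frequency of heads."""
    return sum(flips) / len(flips)
```

For example, with three heads in four flips the MLE is p = 0.75, and its log-likelihood beats any other candidate value of p on that data.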
- A Concise Explanation of Learning Algorithms with the Mitchell Paradigm - Oct 5, 2018.
A single quote from Tom Mitchell can shed light on both the abstract concept and concrete implementations of machine learning algorithms.