- 8 Best Data Science Courses to Enroll in 2022 For Steep Career Advancement - Feb 3, 2022.
Here is a list of the top 8 data science courses and programs you can consider for upskilling yourself and landing the best data scientist job in 2022.
Learning
- Free 4 Week Data Science Course on AI Quality Management - Feb 2, 2022.
Are you interested in learning more about how to analyze and improve the performance and trustworthiness of your machine learning models? Then this 4-week, live online course is for you.
Learning
- Meta-Learning for Keyphrase Extraction - Dec 3, 2021.
This article explores meta-learning for keyphrase extraction (KPE), the task of extracting phrases or groups of words from a document to best capture and represent its content. It outlines what needs to be done to build a keyphrase extractor that performs well not only on in-domain data, but also in a zero-shot scenario where keyphrases must be extracted from data with a different distribution (either a different domain or a different type of document).
Learning, NLP, Text Analytics
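The task the keyphrase-extraction article tackles can be grounded with a toy baseline: treat contiguous runs of non-stopword tokens as candidate phrases and rank them by frequency. This is only an illustrative sketch (the stopword list and scoring are assumptions, not the article's meta-learning method):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for", "on"}

def extract_keyphrases(text, top_n=3):
    """Toy baseline keyphrase extractor: split the text into runs of
    non-stopword tokens and rank the resulting phrases by frequency."""
    tokens = re.findall(r"[a-z]+", text.lower())
    phrases, current = Counter(), []
    for tok in tokens + ["the"]:          # sentinel stopword flushes the last run
        if tok in STOPWORDS:
            if current:
                phrases[" ".join(current)] += 1
                current = []
        else:
            current.append(tok)
    return [p for p, _ in phrases.most_common(top_n)]
```

Such a frequency baseline is exactly what tends to break in the zero-shot setting the article describes, since phrase statistics shift with the domain.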
- How My Learning Path Changed After Becoming a Data Scientist - Aug 10, 2021.
I keep learning but in a different way.
Career Advice, Data Science, Data Scientist, Learning
- How to Tell if You Have Trained Your Model with Enough Data - Jul 12, 2021.
WeightWatcher is an open-source diagnostic tool for evaluating the performance of (pre-)trained and fine-tuned Deep Neural Networks. It is based on state-of-the-art research into Why Deep Learning Works.
Learning, Neural Networks, Python, Training
- How to Determine if Your Machine Learning Model is Overtrained - May 20, 2021.
WeightWatcher is based on theoretical research (done jointly with UC Berkeley) into Why Deep Learning Works, grounded in our Theory of Heavy-Tailed Self-Regularization (HT-SR). It uses ideas from Random Matrix Theory (RMT), Statistical Mechanics, and Strongly Correlated Systems.
Learning, Modeling, Python, Training
- Is Your Model Overtrained? - Apr 14, 2021.
WeightWatcher is based on theoretical research (done jointly with UC Berkeley) into Why Deep Learning Works, grounded in our Theory of Heavy-Tailed Self-Regularization (HT-SR). It uses ideas from Random Matrix Theory (RMT), Statistical Mechanics, and Strongly Correlated Systems.
Learning, Modeling, Python, Training
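The heavy-tailed intuition behind these posts can be sketched in a few lines of NumPy: estimate a power-law exponent alpha for the tail of a layer's eigenvalue spectrum (the eigenvalues of W^T W). This is a toy illustration using the Hill estimator, not WeightWatcher's actual fitting procedure; per the HT-SR papers, alpha roughly between 2 and 6 tends to indicate a well-trained layer.

```python
import numpy as np

def powerlaw_alpha(weight_matrix, k=10):
    """Toy Hill-estimator fit of a power-law exponent alpha to the
    top-k eigenvalues of W^T W (illustration of the HT-SR idea only,
    not WeightWatcher's real analysis)."""
    # Eigenvalues of W^T W are the squared singular values of W.
    eigs = np.sort(np.linalg.svd(weight_matrix, compute_uv=False) ** 2)[::-1]
    tail = eigs[:k]
    xmin = tail[-1]  # smallest eigenvalue kept in the tail
    return 1.0 + k / np.sum(np.log(tail / xmin))
```

For a real model you would run the library itself (`pip install weightwatcher`) and inspect the per-layer alpha values it reports.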
- Is It Too Late to Learn AI? - Mar 9, 2021.
Have you missed the train on learning AI?
AI, Career Advice, Learning
- IBM Uses Continual Learning to Avoid The Amnesia Problem in Neural Networks - Feb 15, 2021.
Using continual learning might avoid the famous catastrophic forgetting problem in neural networks.
IBM, Learning, Neural Networks, Training
- Breaking Privacy in Federated Learning - Aug 26, 2020.
Despite the benefits of federated learning, there are still ways of breaching a user’s privacy, even without sharing private data. In this article, we’ll review some research papers that discuss how this vulnerability arises in federated learning.
Anonymized, Federated Learning, Learning, Privacy
- Learning by Forgetting: Deep Neural Networks and the Jennifer Aniston Neuron - Jun 25, 2020.
DeepMind’s research shows how to understand the role of individual neurons in a neural network.
Deep Learning, DeepMind, Learning, Neural Networks
- Federated Learning: An Introduction - Apr 15, 2020.
Improving machine learning models and making them more secure by training on decentralized data.
Federated Learning, Learning, Machine Learning, Privacy, Security
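The core of one popular federated-learning scheme, federated averaging (FedAvg), can be sketched in plain Python: clients train on their own data and a server combines their parameters, weighted by how much data each client holds. The function name and flat parameter lists here are illustrative assumptions, not the article's code:

```python
def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg step (toy sketch): average each parameter
    across clients, weighted by the client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

The privacy appeal is that only these parameter vectors, never the raw data, leave each client, although as the "Breaking Privacy in Federated Learning" entry above notes, the updates themselves can still leak information.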
- Few-Shot Image Classification with Meta-Learning - Mar 12, 2020.
Here is how you can teach your model to learn quickly from a few examples.
Image Classification, Learning, Machine Learning
- Amazon Uses Self-Learning to Teach Alexa to Correct its Own Mistakes - Feb 10, 2020.
The digital assistant incorporates a reformulation engine that can learn to correct responses in real time based on customer interactions.
Alexa, Amazon, Learning
- The ravages of concept drift in stream learning applications and how to deal with it - Dec 18, 2019.
Stream data processing has gained progressive momentum with the arrival of new streaming applications and big data scenarios. These data streams generally evolve over time and may occasionally be affected by a change (concept drift). Handling this change with detection and adaptation mechanisms is crucial in many real-world systems.
IoT, Learning, Real-time
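As a minimal illustration of drift detection (a toy sketch, not one of the mechanisms surveyed in the article), one can compare the mean of a recent window of the stream against a reference window and raise an alarm when they diverge:

```python
from collections import deque

def drift_detector(stream, window=50, threshold=1.0):
    """Flag stream indices where the recent-window mean diverges from a
    reference-window mean by more than `threshold` (toy detector)."""
    ref, recent = deque(maxlen=window), deque(maxlen=window)
    alarms = []
    for i, x in enumerate(stream):
        if len(ref) < window:
            ref.append(x)                 # still filling the reference window
        else:
            recent.append(x)
            if len(recent) == window:
                ref_mean = sum(ref) / window
                rec_mean = sum(recent) / window
                if abs(rec_mean - ref_mean) > threshold:
                    alarms.append(i)      # drift detected: adapt by resetting
                    ref = deque(recent, maxlen=window)
                    recent.clear()
    return alarms
```

Production detectors (e.g. ADWIN or DDM, discussed in the drift literature) replace the fixed threshold with statistical tests, but the detect-then-adapt loop is the same.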
- Probability Learning: Naive Bayes - Nov 26, 2019.
This post describes various simplifications of Bayes' Theorem that make it more practical and applicable to real-world problems; these simplifications are known as Naive Bayes. To make everything concrete, we will also walk through an illustrative example of how Naive Bayes can be applied for classification.
Bayes Theorem, Learning, Naive Bayes, Probability
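The simplification can be sketched as a minimal multinomial Naive Bayes in plain Python: assume word occurrences are conditionally independent given the class, so the posterior factors into a log-prior plus per-word log-likelihoods (with Laplace smoothing). The function names and the tiny spam/ham data are illustrative assumptions, not the post's own example:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Fit multinomial Naive Bayes: class log-priors plus per-class word
    log-likelihoods with Laplace (add-one) smoothing."""
    vocab = {w for d in docs for w in d}
    counts = defaultdict(Counter)   # class -> word counts
    class_n = Counter(labels)       # class -> number of documents
    for d, y in zip(docs, labels):
        counts[y].update(d)
    model = {}
    for y in class_n:
        total = sum(counts[y].values())
        model[y] = (
            math.log(class_n[y] / len(docs)),
            {w: math.log((counts[y][w] + 1) / (total + len(vocab)))
             for w in vocab},
        )
    return model

def predict_nb(model, doc):
    """Pick the class maximizing log P(class) + sum of log P(word | class)."""
    def score(y):
        prior, loglik = model[y]
        return prior + sum(loglik.get(w, 0.0) for w in doc)
    return max(model, key=score)
```

The "naive" independence assumption is what turns an intractable joint likelihood into this simple sum of per-word terms.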
- Live Webinar: Continual Learning with Human-in-the-loop - Nov 18, 2019.
Join this live webinar from cnvrg.io, Continual Learning with Human-in-the-loop, Nov 26 @ 12 PM EST, to learn the role of human-in-the-loop in your ML pipeline, how to close the loop, and much more.
cnvrg.io, Humans, Learning, Machine Intelligence
- Probability Learning: Maximum Likelihood - Nov 5, 2019.
The maths behind Bayes will be better understood if we first cover the theory and maths underlying another fundamental method of probabilistic machine learning: Maximum Likelihood. This post will be dedicated to explaining it.
Learning, Probability, Statistics
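For a concrete instance of maximum likelihood, the Gaussian case has a closed form: the MLE mean is the sample mean, and the MLE variance is normalized by n rather than n-1. A hedged sketch (not tied to the post's exact examples):

```python
def mle_gaussian(data):
    """Closed-form maximum-likelihood estimates for a Gaussian:
    mean = sample mean; variance uses 1/n (the MLE), not the
    unbiased 1/(n-1) estimator."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return mu, var
```

These are exactly the values that maximize the Gaussian log-likelihood of the data, which is the bridge the post builds toward the Bayesian view.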
- A Concise Explanation of Learning Algorithms with the Mitchell Paradigm - Oct 5, 2018.
A single quote from Tom Mitchell can shed light on both the abstract concept and concrete implementations of machine learning algorithms.
Algorithms, Learning, Machine Learning, Tom Mitchell