Another 10 Free Must-See Courses for Machine Learning and Data Science

Check out another follow-up collection of free machine learning and data science courses to give you some spring study ideas.



Our previous collections of free machine learning and data science courses were well received... so it's obviously time for another. Here are another 10 courses to help with your spring learning season. Courses range from introductory machine learning to deep learning to natural language processing and beyond.

This collection comes courtesy of Delta Analytics, author and trainer Aurélien Geron, University of Wisconsin–Madison, AI researcher Goku Mohandas, University of California, Berkeley, University of Waterloo, National University of Singapore, and University of British Columbia.

If, after reading this list, you find yourself wanting more free, quality, curated learning materials, check out the related posts at the bottom. Happy learning!

 
1. Foundations of Machine Learning
Delta Analytics

Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.

 
2. Deep Learning with TensorFlow 2 and Keras
Aurélien Geron

This project accompanies my Deep Learning with TensorFlow 2 and Keras trainings. It contains the exercises and their solutions, in the form of Jupyter notebooks.

WARNING: TensorFlow 2.0 preview may contain bugs and may not behave exactly like the final 2.0 release. Hopefully this code will run fine once TF 2 is out. This is extreme bleeding edge stuff people! :)
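
To give a flavor of the tf.keras style those exercises use, here is a minimal sketch (not taken from the notebooks themselves) that trains a small classifier on the Fashion-MNIST data bundled with Keras, assuming TensorFlow 2.x is installed:

    import tensorflow as tf
    from tensorflow import keras

    # Fashion-MNIST ships with Keras: 28x28 grayscale images, 10 classes
    (X_train, y_train), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
    X_train, X_test = X_train / 255.0, X_test / 255.0  # scale pixels to [0, 1]

    # A small fully connected classifier in the Sequential style
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))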

 
3. Deep Learning
University of Wisconsin–Madison

The focus of this course will be on understanding artificial neural networks and deep learning algorithmically (discussing the math behind these methods on a basic level) and implementing network models in code as well as applying these to real-world datasets. Some of the topics that will be covered include convolutional neural networks for image classification and object detection, recurrent neural networks for modeling text, and generative adversarial networks for generating new data.

 
4. Practical AI
Goku Mohandas

A practical approach to learning and using machine learning.

Empowering you to use machine learning to get valuable insights from data.

  • Implement basic ML algorithms and deep neural networks with PyTorch (see the short sketch after this list).
  • Run everything in the browser without any setup using Google Colab.
  • Learn object-oriented ML to code for products, not just tutorials.
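
To give a rough idea of what "basic ML algorithms with PyTorch" looks like in practice, here is a minimal sketch (a generic illustration, not course material) of logistic regression trained with autograd and SGD; it runs as-is in a Colab notebook:

    import torch
    import torch.nn as nn

    # Toy binary classification data: 2 features, roughly linearly separable
    X = torch.randn(200, 2)
    y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)

    model = nn.Linear(2, 1)              # logistic regression = linear layer + sigmoid
    criterion = nn.BCEWithLogitsLoss()   # combines sigmoid and binary cross-entropy
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(100):
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()                  # backpropagation via autograd
        optimizer.step()

    accuracy = ((model(X) > 0).float() == y).float().mean()
    print(f"train accuracy: {accuracy:.2f}")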

 
5. Deep Unsupervised Learning
University of California, Berkeley

This course will cover two areas of deep learning in which labeled data is not required: Deep Generative Models and Self-supervised Learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data such as natural images, audio waveforms and text corpora. Strides in self-supervised learning have started to close the gap between supervised representation learning and unsupervised representation learning in terms of fine-tuning to unseen tasks. This course will cover the theoretical foundations of these topics as well as their newly enabled applications.

 
6. Introduction to Deep Learning
University of California, Berkeley

This class provides a practical introduction to deep learning, including theoretical motivations and how to implement it in practice. As part of the course we will cover multilayer perceptrons, backpropagation, automatic differentiation, and stochastic gradient descent. Moreover, we introduce convolutional networks for image processing, starting from the simple LeNet to more recent architectures such as ResNet for highly accurate models. Secondly, we discuss sequence models and recurrent networks, such as LSTMs, GRU, and the attention mechanism. Throughout the course we emphasize efficient implementation, optimization and scalability, e.g. to multiple GPUs and to multiple machines. The goal of the course is to provide both a good understanding and good ability to build modern nonparametric estimators.

 
7. Reinforcement Learning
University of Waterloo

The course introduces students to the design of algorithms that enable machines to learn based on reinforcements. In contrast to supervised learning where machines learn from examples that include the correct decision and unsupervised learning where machines discover patterns in the data, reinforcement learning allows machines to learn from partial, implicit and delayed feedback. This is particularly useful in sequential decision making tasks where a machine repeatedly interacts with the environment or users. Applications of reinforcement learning include robotic control, autonomous vehicles, game playing, conversational agents, assistive technologies, computational finance, operations research, etc.
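
As a tiny, generic illustration of learning from delayed feedback (not material from the Waterloo course), here is a minimal tabular Q-learning loop on a five-state chain where a reward arrives only at the far right end:

    import numpy as np

    n_states, n_actions = 5, 2           # chain environment: action 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))  # action-value table
    alpha, gamma, epsilon = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)

    def step(s, a):
        """Move along the chain; reward of 1 only at the right end."""
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        return s_next, reward, s_next == n_states - 1

    def epsilon_greedy(s):
        # Explore with probability epsilon, and break ties randomly
        if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
            return int(rng.integers(n_actions))
        return int(Q[s].argmax())

    for episode in range(500):
        s, done = 0, False
        while not done:
            a = epsilon_greedy(s)
            s_next, r, done = step(s, a)
            # Q-learning update: bootstrap from the best action in the next state
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q)  # the learned values favor moving right toward the delayed reward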

 
8. Deep Learning for NLP
National University of Singapore

This course is taken almost verbatim from CS 224N Deep Learning for Natural Language Processing – Richard Socher’s course at Stanford. We are following their course’s formulation and selection of papers, with the permission of Socher.

 
9. Applied Natural Language Processing
University of California, Berkeley

This course examines the use of natural language processing as a set of methods for exploring and reasoning about text as data, focusing especially on the applied side of NLP: using existing NLP methods and libraries in Python in new and creative ways (rather than exploring the core algorithms underlying them).

This is an applied course; each class period will be divided between a short lecture and in-class lab work using Jupyter notebooks (roughly 50% each). Students will be programming extensively during class, and will work in groups with other students and the instructors. Students must prepare for each class and submit preparatory materials before class; attendance in class is required.

 
10. Lectures on Machine Learning
University of British Columbia

This is a collection of course material from various courses that I've taught on machine learning at UBC, including material from over 80 lectures covering a large number of topics related to machine learning. The notation is fairly consistent across the topics which makes it easier to see relationships, and the topics are meant to be gone through in order (with the difficulty slowly increasing and concepts being defined at their first occurrence). I'm putting this in one place in case people find it useful for educational purposes.

 
Related: