Neural Networks, Step 1: Where to Begin with Neural Nets & Deep Learning

This is a short post for beginners learning neural networks, covering several essential neural network concepts.



This is a short supplementary post for beginners learning neural networks. It is not intended to provide a complete learning roadmap, but the contents included should give a brief introduction to several essential neural network concepts.

The first resource defines some key neural network terminology.

Deep Learning Key Terms, Explained

Deep learning is the process of applying deep neural network technologies to solve problems. Deep neural networks are neural networks with a minimum of one hidden layer. Like data mining, deep learning refers to a process; that process employs deep neural network architectures, which are particular types of machine learning algorithms.
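As a quick illustration of that definition (not code from the linked post), here is a minimal Python sketch of what "a network with hidden layers" amounts to as data. The layer sizes, random initialization, and variable names are arbitrary choices made purely for illustration.

```python
import numpy as np

# A "deep" network is characterized here only by its shape: an input layer,
# one or more hidden layers, and an output layer. These sizes are
# illustrative values, not anything prescribed by the linked post.
layer_sizes = [4, 8, 8, 1]   # 4 inputs, two hidden layers, 1 output

rng = np.random.default_rng(0)

# One weight matrix and one bias vector for each pair of adjacent layers.
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

print([w.shape for w in weights])   # [(4, 8), (8, 8), (8, 1)]
```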

[Figure: Gradient descent]

This pair of posts covers a few of the most important foundational concepts of neural networks at a very introductory level, without any of the math. If you can understand the high-level concepts contained within these posts, you should be ready for the resources that follow.

Neural Network Foundations, Explained: Activation Function

Forward propagation is the process of multiplying the various input values of a particular neuron by their associated weights, summing the results, and scaling or "squashing" the values back into a given range before passing these signals on to the next layer of neurons. This, in turn, affects the weighted input sums of the following layer, and so on, which then affects the computation of new weights and their distribution backward through the network. Ultimately, of course, this all affects the final output value(s) of the neural network. The activation function keeps the values passed forward to subsequent layers within an acceptable and useful range, and then forwards its output.
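To make that concrete, here is a rough sketch of forward propagation through a single layer, assuming a sigmoid activation as the "squashing" function; the input values, weights, and bias are made-up numbers for illustration only, not anything from the linked post.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward_layer(inputs, weights, biases):
    # Multiply inputs by their weights, sum them (plus a bias),
    # then squash the result before passing it to the next layer.
    weighted_sum = inputs @ weights + biases
    return sigmoid(weighted_sum)

# Illustrative values only: 3 inputs feeding a layer of 2 neurons.
x = np.array([0.5, -1.0, 2.0])
W = np.array([[ 0.1, -0.2],
              [ 0.4,  0.3],
              [-0.5,  0.6]])
b = np.array([0.0, 0.1])

activations = forward_layer(x, W, b)   # what the next layer receives
print(activations)
```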

Neural Network Foundations, Explained: Updating Weights with Gradient Descent & Backpropagation

Recall that in order for a neural network to learn, the weights associated with neuron connections must be updated after forward passes of data through the network. These weights are adjusted to help reconcile the differences between the actual and predicted outcomes on subsequent forward passes. But how, exactly, do the weights get adjusted?
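As a hint at the answer the post develops, here is a toy single-neuron sketch of a gradient descent update under a squared-error loss. The inputs, starting weights, learning rate, and sigmoid activation are assumptions chosen purely for illustration, not code from the linked post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy single-neuron example with made-up values.
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.1, 0.4, -0.5])   # current weights
target = 1.0                      # the "actual" outcome we want to match
learning_rate = 0.1

for step in range(3):
    # Forward pass: weighted sum, then activation.
    prediction = sigmoid(np.dot(x, w))
    error = prediction - target

    # Backward pass: the chain rule gives the gradient of the
    # squared-error loss with respect to each weight.
    grad_w = error * prediction * (1 - prediction) * x

    # Gradient descent: nudge each weight against its gradient.
    w -= learning_rate * grad_w
    print(f"step {step}: loss = {0.5 * error ** 2:.4f}")
```

Each pass through the loop is one "adjustment" in the sense described above: the weights move a small step in the direction that reduces the gap between the predicted and actual outcome on the next forward pass.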

This next resource outlines a more detailed plan to understanding all of the above concepts.

7 Steps to Understanding Deep Learning

A stark and honest disclaimer: deep learning is a complex and quickly evolving field of both breadth and depth (pun unintended?), and as such this post does not claim to be an all-inclusive manual to becoming a deep learning expert; such a transformation would take greater time, many additional resources, and lots of practice building and testing models. I do, however, believe that utilizing the resources herein could help get you started on just such a path.

Finally, this selection of resources goes a bit further, combining some more advanced learning materials, talks, and interviews.

5 Free Resources for Furthering Your Understanding of Deep Learning

Interested in furthering your understanding of neural networks and deep learning, above and beyond the basic introductory tutorials and videos out there? This post includes 5 specific video-based options for doing just that, collectively consisting of many, many hours of insights. If you already possess some basic knowledge of neural networks, it may be time to jump in and tackle some more advanced concepts.

 