

Neural Networks: Innumerable Architectures, One Fundamental Idea


At the end of this post, you’ll be able to implement a neural network to identify handwritten digits using the MNIST dataset, and you’ll have a rough idea of how to build your own neural networks.

Okay... so what’s exactly happening over here?

In the following diagram, I’ve tried my best to explain how the neural network minimizes the cost function to make its predictions as accurate as possible during the forward pass. You’ll get a clearer picture of how it works when you reach the next question, where we jump into the code.
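Since a static diagram can only show so much, here is a minimal sketch of a single forward pass and the cost it produces, in plain numpy (the layer sizes, the ReLU/softmax choices, and all names are illustrative, not taken from the diagram):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 4 input features, 3 hidden units, 2 output classes.
x = rng.normal(size=(1, 4))         # one input example
y = np.array([[1.0, 0.0]])          # its one-hot label

W1 = rng.normal(size=(4, 3)) * 0.1  # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2)) * 0.1  # hidden -> output weights
b2 = np.zeros(2)

# Forward pass: affine map, nonlinearity, affine map, softmax.
h = np.maximum(0, x @ W1 + b1)      # ReLU hidden activations
logits = h @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Cross-entropy cost: the number training tries to minimize by
# nudging W1, b1, W2, b2 along the negative gradient.
cost = -np.sum(y * np.log(probs))
print("cost:", cost)
```

Training repeats this pass over many examples, each time adjusting the weights to shrink the cost.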


Umm... but how am I supposed to code this stuff?


Code for implementing a Neural Network using Tensorflow for identifying handwritten digits

Every single bit of code is explained in detail in the comments alongside the lines, so read the code together with the comments. Copy, paste, execute, and experiment with the code to see the output for yourself.

I have tried to make it easy enough that you can visualize what’s going on even if you’re not comfortable coding in Python!
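For orientation while you read the listing, here is a rough, self-contained sketch of the same idea: a two-layer network trained by gradient descent. To keep it instantly runnable it uses plain numpy and small synthetic clusters instead of Tensorflow and MNIST; every size and name below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for MNIST: 200 samples, 10 features, 3 classes,
# each class a well-separated noisy cluster (real MNIST: 784 pixels, 10 digits).
n, d, k = 200, 10, 3
centers = rng.normal(size=(k, d)) * 3.0
labels = rng.integers(0, k, size=n)
X = centers[labels] + rng.normal(size=(n, d))
Y = np.eye(k)[labels]                        # one-hot targets

# Two-layer network: d inputs -> 32 ReLU hidden units -> k logits.
W1 = rng.normal(size=(d, 32)) * 0.1; b1 = np.zeros(32)
W2 = rng.normal(size=(32, k)) * 0.1; b2 = np.zeros(k)

def forward(X):
    h = np.maximum(0, X @ W1 + b1)           # hidden activations
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr = 0.1
for step in range(500):
    h, probs = forward(X)
    # Backpropagation: gradient of the mean cross-entropy cost.
    dlogits = (probs - Y) / n
    dW2 = h.T @ dlogits; db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dh[h <= 0] = 0                           # ReLU passes gradient only where it fired
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient-descent update: step each parameter downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, probs = forward(X)
accuracy = (probs.argmax(axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Swap the synthetic data for MNIST batches and the hand-written update loop for a Tensorflow optimizer, and you have the structure of the real listing.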


So, what do I get out of all this?

You should see something like the following screenshot once you save the Python file, open a command prompt in the folder where your Python code is stored, and then type -


Points to Remember -

  1. This takes time, depending on your hardware as well as on the process you followed while installing Tensorflow (pip install gives you certain warnings, which are not errors at all). You’ll get your output in spite of those warnings; they just tell you that the computations could have been faster had your hardware’s features been used to the fullest -
  2. You won’t get the same accuracy values across new executions, as random.normal() produces different values on every execution. However, the same random values can be reproduced through a process called ‘seeding’.
  3. I have deliberately not included saving our trained model, so that there isn’t too much to digest at once. We shall discuss saving the trained model in upcoming posts, so that we can resume training from where we left off rather than starting from scratch. In the meantime, you may try googling ‘pickling’ and Tensorflow’s ‘saver’ methods.
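To make points 2 and 3 concrete: seeding pins down the random draws, and the learned parameters can be serialized so a later run resumes instead of restarting. The sketch below uses numpy and the standard library’s pickle (all names illustrative); in Tensorflow the analogous tools are tf.set_random_seed and tf.train.Saver.

```python
import io
import pickle
import numpy as np

# Point 2 - seeding: the same seed reproduces the same "random" weights,
# so repeated runs report identical accuracy numbers.
a = np.random.default_rng(7).normal(size=3)
b = np.random.default_rng(7).normal(size=3)
print(np.array_equal(a, b))   # the two draws match exactly

# Point 3 - saving: serialize the trained parameters so training can
# resume later instead of starting from scratch.
params = {"W1": a, "b1": np.zeros(3)}
buf = io.BytesIO()            # stands in for a file on disk
pickle.dump(params, buf)
buf.seek(0)
restored = pickle.load(buf)
print(np.array_equal(params["W1"], restored["W1"]))
```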


Too much of complicated stuff?

Fret not! The developers and researchers have been very benevolent. When we use ML and Deep Learning libraries to solve real-life problems, most of these scary-looking variables and formulas are taken care of for us. So cool, right?

But that certainly doesn’t mean you should forget all this basic stuff. It might sound crazy, but after you’ve studied the code, try to type this post’s code on your own without looking at it, and experiment by changing certain parts at least once. You might wonder… how on Earth am I supposed to remember so many complicated things!

Investing just a little time in this exercise will help you more than you can imagine when we get into RNNs, CNNs, Reinforcement Learning, and many other architectures that I don’t want to name and scare you even more! If you do this, coding and understanding those architectures will be a cakewalk.


How can I make use of neural networks to design my own model?

According to Hugo Larochelle’s recently released, amazing slides (link at the end of the post, don’t forget to check them out!), there are two ways to create new architectures —

  1. Imagine how the human brain would have handled the same problem.
  2. Make a neural network out of an existing algorithm.

This might seem easy to say and really difficult to actually do, but as you proceed through the posts one by one, you’ll eventually be able to make your very own neural network.


Wasn’t too difficult, right?

You just need the urge to learn. Seriously, that’s all you need. The entire community is here to help. It’s so damn simple: if you get stuck at some point, just copy and paste the error into the Google search box and hit go! There, you have your answer! There’s a solution to almost every query on Stack Overflow to help you out. All you need to do is ask!

An interesting quote by Sebastian Thrun, the founder and president of Udacity, pretty much sums up what’s around the corner for all of us -

“I’m really looking forward to a time when generations after us look back and say how ridiculous it was that humans were driving cars.”

Just before I end this post, I would like to mention that I have decided to make everything myself - all the graphs, flowcharts, and anything else included in my posts that could make understanding a particular concept easier for my readers, just like I’ve been doing till now.

It would’ve been so convenient to just pick up pictures by googling whatever I wish to add to my posts, but then I couldn’t call my posts completely mine, right?
Making these diagrams does take time, but somewhere down the line I want to feel proud of the content I provide to my readers, and I don’t want a single person to feel that their time was wasted reading even a single post on my blog.

On that note, I wind up this post. We’ll be diving deep into a comparative and comprehensive study of Convolutional and Recurrent Neural Networks in the upcoming posts. Until then, stay tuned guys! :)

I’m leaving two links that I find quite good, for those readers who want a glossary-style site to keep handy -

Check out Hugo Larochelle’s latest awesome slides about ML and Deep Learning -

Bio: Raksham Pandey is a Data Scientist in the making... Electrified by AI... Passion for Deep Learning to solve problems that matter.

Original. Reposted with permission.

