
A Neural Network in 11 lines of Python


A bare-bones neural network implementation to describe the inner workings of backpropagation.



By Andrew Trask

This tutorial teaches backpropagation via a very simple toy example: a short Python implementation.

Just Give Me The Code:

X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
for j in range(60000):
    l1 = 1/(1+np.exp(-(np.dot(X,syn0))))
    l2 = 1/(1+np.exp(-(np.dot(l1,syn1))))
    l2_delta = (y - l2)*(l2*(1-l2))
    l1_delta = l2_delta.dot(syn1.T) * (l1 * (1-l1))
    syn1 += l1.T.dot(l2_delta)
    syn0 += X.T.dot(l1_delta)
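
The listing assumes numpy has already been imported as np. Once the loop finishes, syn0 and syn1 hold the trained weights and l2 holds the predictions for the four training rows; a minimal sketch of scoring an unseen example with those weights (the new input row below is illustrative, not from the original dataset):

import numpy as np  # the 11-line snippet relies on this import as well

# continuing from the trained weights syn0 and syn1 produced above
new_row = np.array([[1, 0, 0]])                   # hypothetical, unseen input
hidden = 1/(1 + np.exp(-np.dot(new_row, syn0)))   # hidden layer (4 units)
output = 1/(1 + np.exp(-np.dot(hidden, syn1)))    # prediction between 0 and 1
print(output)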

 
Part 1: A Tiny Toy Network

A neural network trained with backpropagation is attempting to use input to predict output.
Inputs       Output
0   0   1      0
0   1   1      0
1   0   1      1
1   1   1      1
Consider trying to predict the output column given the three input columns. We could solve this problem by simply measuring statistics between the input values and the output values. If we did so, we would see that the leftmost input column is perfectly correlated with the output. Backpropagation, in its simplest form, measures statistics like this to make a model. Let's jump right in and use it to do this.
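
Measuring that correlation directly, for example with numpy's corrcoef, makes the point concrete; a quick sketch (the constant third column has no variance, so its correlation is undefined):

import numpy as np

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([0,0,1,1])

for i in range(X.shape[1]):
    col = X[:, i]
    if col.std() == 0:
        print("column", i, "is constant; correlation is undefined")
    else:
        # Pearson correlation between this input column and the output
        print("column", i, "correlation:", np.corrcoef(col, y)[0, 1])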

2-Layer Neural Network:

import numpy as np

# sigmoid function
def nonlin(x,deriv=False):
    if(deriv==True):
        # slope of the sigmoid, assuming x is already a sigmoid output
        return x*(1-x)
    return 1/(1+np.exp(-x))

# input dataset
X = np.array([ [0,0,1],
               [0,1,1],
               [1,0,1],
               [1,1,1] ])

# output dataset
y = np.array([[0,0,1,1]]).T

# seed random numbers to make calculation
# deterministic (just a good practice)
np.random.seed(1)

# initialize weights randomly with mean 0
syn0 = 2*np.random.random((3,1)) - 1

for iter in range(10000):

    # forward propagation
    l0 = X
    l1 = nonlin(np.dot(l0,syn0))

    # how much did we miss?
    l1_error = y - l1

    # multiply how much we missed by the
    # slope of the sigmoid at the values in l1
    l1_delta = l1_error * nonlin(l1,True)

    # update weights
    syn0 += np.dot(l0.T,l1_delta)

print("Output After Training:")
print(l1)

Output:

Output After Training:
[[ 0.00966449]
 [ 0.00786506]
 [ 0.99358898]
 [ 0.99211957]]
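
These values line up with the targets in y ([0, 0, 1, 1]): the first two predictions have been pushed toward 0 and the last two toward 1. Continuing from the listing above, one quick way to summarize the remaining error:

# mean absolute error over the four training examples;
# for the run shown above this comes out to roughly 0.008
print(np.mean(np.abs(y - l1)))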
Variable definitions:

X    - input dataset matrix; each row is a training example
y    - output dataset; each row is a training example
l0   - first layer of the network, defined by the input data
l1   - second layer of the network, i.e. the prediction
syn0 - first layer of weights ("synapse 0"), connecting l0 to l1
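
The least obvious piece is nonlin(x,deriv=True): because l1 already holds sigmoid outputs, the slope of the sigmoid can be written as x*(1-x). A small self-contained sketch that checks this against a finite-difference estimate:

import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)           # valid when x is already a sigmoid output
    return 1/(1+np.exp(-x))

z = 0.5                          # an arbitrary pre-activation value
s = nonlin(z)                    # sigmoid(z)
eps = 1e-6
numeric = (nonlin(z+eps) - nonlin(z-eps)) / (2*eps)   # numerical slope at z
print(numeric, nonlin(s, deriv=True))                 # both come out near 0.235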
