TensorFlow: Building Feed-Forward Neural Networks Step-by-Step

This article walks through all the steps required to build a simple feed-forward neural network in TensorFlow, explaining each step in detail.



XOR Logic Gate using Feed-Forward Neural Network (FFNN)

 
The same concepts applied previously also hold for all other neural networks. Only a few things change, such as adding more layers or more neurons, changing the type of activation function, or using a different loss function.
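For instance, swapping the hidden layer's activation function or the loss is a one-line change. A hypothetical sketch, assuming the same variable names used in the code below:

# Hypothetical variations on the code below: a ReLU activation for the hidden
# layer, and a mean squared error loss instead of the sum-of-squares error.
hidden_layer_output = tensorflow.nn.relu(af_input_hidden)
prediction_error = tensorflow.reduce_mean(tensorflow.square(predictions - training_outputs))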

The data used as input are as follows:

Input 1   Input 2   Desired Output
  1.0       0.0          1.0
  1.0       1.0          1.0
  0.0       1.0          0.0
  0.0       0.0          0.0

The network architecture to be created in TensorFlow is an FFNN with one hidden layer containing two neurons, as follows:

[Figure: network architecture with two inputs, a two-neuron hidden layer, and a single output neuron]

That hidden layer accepts the inputs from the input layer. Based on its weights and biases, its two activation functions produce two outputs. The outputs of the hidden layer are regarded as the inputs to the output layer, which produces the final class scores of the input data.
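Concretely, each layer computes a weighted sum of its inputs plus a bias, then applies a sigmoid. A minimal NumPy sketch of this forward pass for a single sample (the weight and bias values here are made up for illustration):

import numpy

def sigmoid(x):
    return 1.0 / (1.0 + numpy.exp(-x))

x = numpy.array([[1.0, 0.0]])                  # one input sample (1x2)
weights_hidden = numpy.array([[0.5, -0.3],     # made-up 2x2 hidden layer weights
                              [0.2,  0.8]])
bias_hidden = numpy.array([[0.1, -0.1]])       # made-up 1x2 hidden layer bias
hidden_out = sigmoid(numpy.matmul(x, weights_hidden) + bias_hidden)  # 1x2 hidden output

weights_output = numpy.array([[0.7],           # made-up 2x1 output layer weights
                              [-0.4]])
bias_output = numpy.array([[0.05]])            # made-up 1x1 output layer bias
score = sigmoid(numpy.matmul(hidden_out, weights_output) + bias_output)  # final 1x1 class score
print(score)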

import tensorflow
import numpy

# Preparing training data placeholders (inputs and desired outputs)
training_inputs = tensorflow.placeholder(shape=[None, 2], dtype=tensorflow.float32)
training_outputs = tensorflow.placeholder(shape=[None, 1], dtype=tensorflow.float32)  # Desired outputs for each input

"""
Hidden layer with two neurons
"""

# Preparing neural network parameters (weights and bias) using TensorFlow Variables
weights_hidden = tensorflow.Variable(tensorflow.truncated_normal(shape=[2, 2], dtype=tensorflow.float32))
bias_hidden = tensorflow.Variable(tensorflow.truncated_normal(shape=[1, 2], dtype=tensorflow.float32))

# Preparing inputs of the activation function
af_input_hidden = tensorflow.matmul(training_inputs, weights_hidden) + bias_hidden

# Activation function of the hidden layer neurons
hidden_layer_output = tensorflow.nn.sigmoid(af_input_hidden)

"""
Output layer with one neuron
"""

# Preparing neural network parameters (weights and bias) using TensorFlow Variables
weights_output = tensorflow.Variable(tensorflow.truncated_normal(shape=[2, 1], dtype=tensorflow.float32))
bias_output = tensorflow.Variable(tensorflow.truncated_normal(shape=[1, 1], dtype=tensorflow.float32))

# Preparing inputs of the activation function
af_input_output = tensorflow.matmul(hidden_layer_output, weights_output) + bias_output

# Activation function of the output layer neuron
predictions = tensorflow.nn.sigmoid(af_input_output)

#-----------------------------------

# Measuring the prediction error of the network (sum-of-squares error)
prediction_error = 0.5 * tensorflow.reduce_sum(tensorflow.square(tensorflow.subtract(predictions, training_outputs)))

# Minimizing the prediction error using the gradient descent optimizer
train_op = tensorflow.train.GradientDescentOptimizer(0.05).minimize(prediction_error)

# Creating a TensorFlow Session
sess = tensorflow.Session()

# Initializing the TensorFlow Variables (weights and biases)
sess.run(tensorflow.global_variables_initializer())

# Training data inputs
training_inputs_data = [[1.0, 0.0],
                        [1.0, 1.0],
                        [0.0, 1.0],
                        [0.0, 0.0]]

# Training data desired outputs
training_outputs_data = [[1.0],
                         [1.0],
                         [0.0],
                         [0.0]]

# Training loop of the neural network
for step in range(10000):
    op, err, p = sess.run(fetches=[train_op, prediction_error, predictions],
                          feed_dict={training_inputs: training_inputs_data,
                                     training_outputs: training_outputs_data})
    print(str(step), ": ", err)

# Class scores of the testing data
print("Expected class scores : ", sess.run(predictions, feed_dict={training_inputs: training_inputs_data}))

# Printing the hidden layer weights after training
print("Hidden layer weights : ", sess.run(weights_hidden))

# Printing the hidden layer bias after training
print("Hidden layer bias : ", sess.run(bias_hidden))

# Printing the output layer weights after training
print("Output layer weights : ", sess.run(weights_output))

# Printing the output layer bias after training
print("Output layer bias : ", sess.run(bias_output))

# Closing the TensorFlow Session to free resources
sess.close()
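The prediction error being minimized is the sum-of-squares error, E = 0.5 * Σ(prediction − target)². For example, if the network predicted [0.9, 0.8, 0.2, 0.1] for the desired outputs [1, 1, 0, 0], the error would be 0.5 * (0.01 + 0.04 + 0.04 + 0.01) = 0.05. The gradient descent optimizer repeatedly nudges the weights and biases in the direction that reduces this error, scaled by the 0.05 learning rate.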


Here are the expected class scores for the test data, along with the trained weights and biases of both the hidden and output layers.

Expected class scores :  [[ 0.75373638]
                          [ 0.94796741]
                          [ 0.25110725]
                          [ 0.03870015]]

Hidden layer weights :  [[ 1.68877864 -3.25296354]
                         [ 1.36028981 -1.6849252 ]]

Hidden layer bias :  [[-1.27290058  2.33101916]]

Output layer weights :  [[ 2.42446136]
                         [-5.42509556]]

Output layer bias :  [[ 1.20168602]]
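As a sanity check, feeding the trained weights and biases above back through the forward pass reproduces these class scores. A minimal NumPy sketch:

import numpy

def sigmoid(x):
    return 1.0 / (1.0 + numpy.exp(-x))

# Trained parameters as printed above
weights_hidden = numpy.array([[1.68877864, -3.25296354],
                              [1.36028981, -1.6849252]])
bias_hidden = numpy.array([[-1.27290058, 2.33101916]])
weights_output = numpy.array([[2.42446136],
                              [-5.42509556]])
bias_output = numpy.array([[1.20168602]])

inputs = numpy.array([[1.0, 0.0],
                      [1.0, 1.0],
                      [0.0, 1.0],
                      [0.0, 0.0]])

hidden_out = sigmoid(numpy.matmul(inputs, weights_hidden) + bias_hidden)
scores = sigmoid(numpy.matmul(hidden_out, weights_output) + bias_output)
print(scores)  # approximately [[0.754], [0.948], [0.251], [0.039]]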


For more information, visit my YouTube channel.

 
Bio: Ahmed Gad received his B.Sc. degree with honors in information technology from the Faculty of Computers and Information (FCI), Menoufia University, Egypt, in July 2015. Having been ranked first in his faculty, he was recommended to work as a teaching assistant at one of the Egyptian institutes in 2015, and then in 2016 as a teaching assistant and researcher at his faculty. His current research interests include deep learning, machine learning, artificial intelligence, digital signal processing, and computer vision.

Original. Reposted with permission.
