Building an Image Classifier Running on Raspberry Pi

The tutorial starts by building the physical network that connects the Raspberry Pi to a PC through a router. After the IPv4 addresses are prepared, an SSH session is created for accessing the Raspberry Pi remotely. Once the classification project is uploaded using FTP, clients can access it through their web browsers to classify images.



Login

 
If the physical connection is working properly, clicking the “OK” button prompts you to log in before you can access the remote device. The default login credentials of the Raspberry Pi are:

  • username: pi
  • password: raspberry

After entering these details correctly, the session starts as shown in the following figure. At this point there is just a Raspbian terminal for interacting with the Raspberry Pi OS. Note that MobaXterm can cache the passwords used in previous sessions, so you do not have to enter the password each time you log in.

You might notice that the contents of the SD card are displayed to the left of the terminal. This is because MobaXterm supports file transfer protocol (FTP) connections for uploading and downloading files. This is a useful feature that saves a lot of time; without FTP, we would have to eject and reinsert the SD card every time we wanted to add new files to the Raspberry Pi.
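MobaXterm handles both the connection and the file transfer from its GUI. For readers who prefer to script these steps, the sketch below is a minimal example, not part of the original tutorial, that assumes the paramiko library is installed; it opens an SSH session using the default credentials and copies the project's main file over SFTP. The address 192.168.1.19 is the one assigned to the Raspberry Pi later in this tutorial, and the file paths are only examples.

import paramiko

# The Raspberry Pi address used later in this tutorial; replace it with your own.
PI_ADDRESS = "192.168.1.19"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept the Pi's host key on first connection
client.connect(PI_ADDRESS, username="pi", password="raspberry")

# Run a quick command to confirm that the session works.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

# Copy the project's main file over SFTP. The paths are examples only,
# and the remote FruitsApp directory must already exist.
sftp = client.open_sftp()
sftp.put("FruitsApp/flaskApp.py", "/home/pi/FruitsApp/flaskApp.py")
sftp.close()

client.close()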

 

X11 Windowing System

 
To make it easier for beginners to interact with the OS, MobaXterm uses the X11 windowing system, which provides a graphical user interface (GUI) as an alternative to the command line. X11 provides a framework for displaying GUIs on Linux operating systems, similar to that of Microsoft Windows. We can open the GUI using the "startlxde" command, as shown in the next figure.

At this point, we have access to the Raspberry Pi over SSH and can use a GUI to interact with it. This is wonderful: using a Raspberry Pi that costs only around $50, we have an interface like the one we see on our PCs. Of course, it will not support everything our machines can do, due to its limited memory, SD card storage, and CPU speed.

 

Image Classification

 
Next, we can start building the image classifier. The complete classifier is built from scratch in the book “Ahmed Fawzy Gad, Practical Computer Vision Applications Using Deep Learning with CNNs, Apress, 2019, ISBN 978-1484241660” [1].

The classifier is trained on 4 classes from the Fruits 360 dataset. The idea is to use Flask to create a web application, hosted on a web server running on the Raspberry Pi, in which the trained classifier lives. Users can access it to upload and classify their own images.

A folder named “FruitsApp”, visible in the FTP listing, was previously uploaded to the Raspberry Pi and contains the project files. The project has a main Python file named “flaskApp.py” that implements the Flask application, along with supplemental HTML, CSS, and JavaScript files that build the application's interface. To run the application, execute the “python flaskApp.py” command from the terminal, as shown in the following figure.

The following Python code implements the Flask application. According to the last line of the code, the application can be accessed from a web browser by visiting the IP address assigned to the Raspberry Pi on port 7777. As a result, the homepage of the application is http://192.168.1.19:7777.

import flask, werkzeug, PIL.Image, numpy

app = flask.Flask(import_name="FruitsApp")

def extractFeatures():
    # Read the image uploaded through the HTML form field named "img".
    img = flask.request.files["img"]
    img_name = img.filename
    # Sanitize the file name before saving the image on the Raspberry Pi.
    # (In newer Werkzeug releases this function lives in werkzeug.utils.)
    img_secure_name = werkzeug.secure_filename(img_name)
    img.save(img_secure_name)
    print("Image Uploaded successfully.")

    # extract_features() and predict_outputs() are helper functions defined
    # elsewhere in the project (see [1]); PIL.Image and numpy are used by them.
    img_features = extract_features(image_path=img_secure_name)
    print("Features extracted successfully.")

    # Load the trained network weights saved previously as a NumPy file.
    weights_mat = numpy.load("weights.npy")

    # Predict the index of the class with the highest score.
    predicted_label = predict_outputs(weights_mat, img_features, activation="sigmoid")

    class_labels = ["Apple", "Raspberry", "Mango", "Lemon"]
    predicted_class = class_labels[predicted_label]
    # Render the result page with the predicted class name.
    return flask.render_template(template_name_or_list="result.html", predicted_class=predicted_class)

# The /extract URL accepts POST requests holding the uploaded image.
app.add_url_rule(rule="/extract", view_func=extractFeatures, methods=["POST"], endpoint="extract")

def homepage():
    return flask.render_template(template_name_or_list="home.html")

# The homepage simply serves the upload form.
app.add_url_rule(rule="/", view_func=homepage)

# Bind the server to the IP address assigned to the Raspberry Pi and port 7777.
app.run(host="192.168.1.19", port=7777, debug=True)
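
The “extract_features()” and “predict_outputs()” functions are implemented elsewhere in the project (see [1]); they are not defined in the Flask file above. Purely as an illustration of the kind of logic they perform, and not the book's exact implementation, the following minimal sketch assumes the features are a simple color histogram and that “weights.npy” holds one weight matrix per fully connected layer.

import numpy
import PIL.Image

def extract_features(image_path):
    # Illustrative only: build a 256-bin histogram of the hue channel.
    # The actual features used in the book's project may differ.
    img = PIL.Image.open(image_path).convert("HSV")
    hue = numpy.asarray(img)[:, :, 0].ravel()
    hist, _ = numpy.histogram(hue, bins=256, range=(0, 256))
    return hist.astype(numpy.float64)

def sigmoid(x):
    return 1.0 / (1.0 + numpy.exp(-x))

def predict_outputs(weights_mat, img_features, activation="sigmoid"):
    # Illustrative only: propagate the feature vector through each layer's
    # weight matrix and return the index of the highest output, which the
    # Flask view maps to a class name.
    outputs = img_features
    for layer_weights in weights_mat:
        outputs = numpy.matmul(outputs, layer_weights)
        if activation == "sigmoid":
            outputs = sigmoid(outputs)
    return int(numpy.argmax(outputs))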


When the user visits the homepage, an HTML page is displayed asking them to upload an image. Once an image is uploaded, the “extractFeatures()” function is called. It extracts the features, predicts the class label, and renders the result in another HTML page, as shown in the following figure. The class label of the uploaded image is “Apple”. For more details, see the book in [1].
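
You can also test the “/extract” endpoint without a browser. The following is a small sketch, not part of the original project, that uses the requests library to POST an image to the running application; the form field name “img” and the address http://192.168.1.19:7777 match the Flask code above, while the file name “apple.jpg” is only an example.

import requests

# Example image path; replace it with any image you want to classify.
with open("apple.jpg", "rb") as image_file:
    # The form field must be named "img" because the Flask view reads flask.request.files["img"].
    response = requests.post("http://192.168.1.19:7777/extract",
                             files={"img": ("apple.jpg", image_file)})

# The server responds with the rendered result.html page containing the predicted class.
print(response.status_code)
print(response.text)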

 

For More Details

 
[1] Ahmed Fawzy Gad, Practical Computer Vision Applications Using Deep Learning with CNNs, Apress, 2019, ISBN 978-1484241660.

 

Bio: Ahmed Gad received his B.Sc. degree in information technology, with a grade of excellent with honors, from the Faculty of Computers and Information (FCI), Menoufia University, Egypt, in July 2015. Having ranked first in his faculty, he was recommended to work as a teaching assistant at an Egyptian institute in 2015 and then, in 2016, as a teaching assistant and researcher in his faculty. His current research interests include deep learning, machine learning, artificial intelligence, digital signal processing, and computer vision.

