Building Neural Networks with PyTorch in Google Colab

Combining PyTorch and Google's cloud-based Colab notebook environment can be a good solution for building neural networks with free access to GPUs. This article demonstrates how to do just that.



Deep Learning with PyTorch in Google Colab

 
PyTorch and Google Colab have become synonymous with deep learning because they give people an easy and affordable way to start building their own neural networks and training models. GPUs aren't cheap, which puts a custom deep learning workstation out of reach for many, although such systems have become more affordable recently thanks to the lower cost of NVIDIA's RTX 30 series.

Even with more affordable options of having your own deep learning system on hand, many people still flock to using PyTorch and Google Colab as they get used to working with deep learning projects.


 

PyTorch and Google Colab are Powerful for Developing Neural Networks 

 
PyTorch was developed by Facebook and has become famous in the deep learning research community. Its support for parallel processing and its easily readable syntax have driven an uptick in adoption. PyTorch is generally easier to learn and lighter to work with than TensorFlow, which makes it great for quick projects and rapid prototypes. Many use PyTorch for computer vision and natural language processing (NLP) applications.

Google Colab was developed by Google to give the masses access to powerful GPU resources for running deep learning experiments. It offers GPU and TPU support and integrates with Google Drive for storage. These features make it a great choice for building neural networks, which are far more compute-hungry than classical models such as a Random Forest.

 

Using Google Colab

 


 

Google Colab combines environment setup, a Jupyter-like interface, GPU support (with both free and paid tiers), storage, and the ability to document code, all in one application. Data scientists can have an all-inclusive deep learning experience without spending a fortune on GPUs.
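
For example, the Google Drive integration can be used for storage: mounting your Drive from a notebook cell is a one-liner. This is a small sketch rather than part of the original article; the path shown is Colab's conventional mount point.

from google.colab import drive

# Mount Google Drive so data and notebooks can be read from and saved to it.
drive.mount('/content/drive')

# Your files then typically appear under /content/drive/MyDrive/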

Documenting code matters when sharing it with other people, and it helps to have a single, neutral place to store data science projects. The Jupyter notebook interface combined with GPU instances makes for a nicely reproducible environment. You can also import notebooks from GitHub or upload your own.

An important note: since Python 2 has reached end of life, it is no longer available on Colab. However, plenty of legacy code still runs on Python 2, and Colab offers a workaround that lets you run it. If you give it a try, you'll see a warning that Python 2 is officially deprecated in Google Colab.

 

Using PyTorch

 


 

PyTorch functions like any other deep learning library in that it offers a suite of modules for building deep learning models. One difference is the PyTorch Tensor class, which is similar to the NumPy ndarray.
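
As a small illustrative sketch (not part of the article's code), converting between the two is straightforward:

import numpy as np
import torch

arr = np.array([[1.0, 2.0], [3.0, 4.0]])   # a NumPy ndarray
t = torch.from_numpy(arr)                   # ndarray -> tensor (shares the same memory)
back = t.numpy()                            # tensor -> ndarray
print(type(arr), type(t), type(back))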

A major plus for tensors is that they have inherent GPU support. A tensor can run on either a CPU or a GPU; to run on a GPU, we just change the device using PyTorch's built-in CUDA support. This makes switching between GPU and CPU easy.
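
A minimal sketch of that switch, assuming a Colab runtime that may or may not have a GPU attached:

import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(3, 3)      # created on the CPU by default
x = x.to(device)          # moved to the GPU if one is attached
print(x.device)           # e.g. cuda:0 on a GPU runtime, cpu otherwise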

Data presented to a neural network has to be in a numerical format. In PyTorch, we do this by representing the data as a tensor. A tensor is a data structure that can store data in N dimensions: a vector is a 1-dimensional tensor, and a matrix is a 2-dimensional tensor. In layman's terms, tensors can store data in more dimensions than a vector or a matrix can.
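
For illustration (a quick sketch, not from the original code):

import torch

vector = torch.tensor([1.0, 2.0, 3.0])             # 1-dimensional tensor, shape [3]
matrix = torch.tensor([[1.0, 2.0], [3.0, 4.0]])    # 2-dimensional tensor, shape [2, 2]
cube = torch.rand(2, 3, 4)                         # 3-dimensional tensor, shape [2, 3, 4]

print(vector.shape, matrix.shape, cube.shape)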

 

Why is a GPU Preferred?

 


 

Tensor processing libraries can handle a huge number of calculations, but on a single CPU core those calculations take a long time to complete. A GPU runs thousands of cores in parallel and gets through the same work far faster.
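
As a rough illustration (a sketch, not a benchmark from the article), you can time a large matrix multiplication on the CPU and, when a GPU is attached, on the GPU:

import time
import torch

a = torch.rand(4000, 4000)
b = torch.rand(4000, 4000)

start = time.time()
torch.matmul(a, b)                          # runs on the CPU
print("CPU time:", round(time.time() - start, 3), "seconds")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()       # move the matrices to the GPU
    torch.matmul(a_gpu, b_gpu)              # warm-up call (CUDA start-up cost)
    torch.cuda.synchronize()                # wait for GPU work before timing
    start = time.time()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()
    print("GPU time:", round(time.time() - start, 3), "seconds")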

This is where Google Colab comes in. It is technically free, though it is probably not suited to large-scale industrial deep learning; it is geared more towards beginner-to-mid-level practitioners. A paid tier supports larger projects, for example by allowing sessions of up to 24 hours instead of the 12 hours in the free version, and by giving direct access to more powerful resources when needed.

 

How to Code a Basic Neural Network

 
In order to get started building a basic neural network, we need to install PyTorch in the Google Colab environment. This can be done by running the following pip command and by using the rest of the code below:

!pip3 install torch torchvision

# Import libraries
import torch
import torchvision
from torchvision import transforms, datasets
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Create test and training sets
train = datasets.MNIST('', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor()
                       ]))

test = datasets.MNIST('', train=False, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor()
                       ]))


# Shuffle the training data so batches do not arrive in a fixed pattern,
# and feed the data to the network in batches. Shuffling and batching are
# good habits that help keep the network from overfitting. After each
# batch, the network runs backpropagation to update its weights and
# reduce the loss.
trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)
testset = torch.utils.data.DataLoader(test, batch_size=10, shuffle=False)
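
# Optional sanity check (not in the original walkthrough): pull one batch
# from the training loader to confirm its shape.
X, y = next(iter(trainset))
print(X.shape)  # torch.Size([10, 1, 28, 28]) -- 10 images of 1x28x28 pixels
print(y.shape)  # torch.Size([10]) -- 10 labels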


# Initialize our neural net
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28*28, 64)  # input layer: a flattened 28x28 image
        self.fc2 = nn.Linear(64, 64)     # hidden layer
        self.fc3 = nn.Linear(64, 64)     # hidden layer
        self.fc4 = nn.Linear(64, 10)     # output layer: one value per digit 0-9

    def forward(self, x):
        x = F.relu(self.fc1(x))  # ReLU activations between the linear layers
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return F.log_softmax(x, dim=1)  # log-probabilities over the 10 classes

net = Net()

print(net)

### Output:
### Net(
###  (fc1): Linear(in_features=784, out_features=64, bias=True)
###  (fc2): Linear(in_features=64, out_features=64, bias=True)
###  (fc3): Linear(in_features=64, out_features=64, bias=True)
###  (fc4): Linear(in_features=64, out_features=10, bias=True)
###)
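
# Optional check (not in the original article): pass a random 28x28 "image"
# through the untrained network to confirm the shapes line up.
X = torch.rand(28, 28)
output = net(X.view(-1, 28*28))  # flatten to shape [1, 784]
print(output.shape)              # torch.Size([1, 10]) -- one log-probability per digit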


# Define the loss and optimizer. Note that the training loop below uses
# F.nll_loss directly, which pairs with the log_softmax output of the network;
# nn.CrossEntropyLoss would instead expect the raw, un-softmaxed logits.
loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)

for epoch in range(5):  # train for 5 passes over the training set
    for data in trainset:  # `data` is one batch of data
        X, y = data  # X is the batch of features, y is the batch of targets.

        net.zero_grad()  # zero the gradients before computing the loss for this batch

        output = net(X.view(-1, 784))  # flatten each 28x28 image to 784 values; -1 lets PyTorch infer the batch dimension

        loss = F.nll_loss(output, y)  # negative log-likelihood loss for the batch

        loss.backward()  # backpropagate the loss through the network's parameters

        optimizer.step()  # adjust the weights to reduce the loss
    print(loss)  # loss on the last batch of each epoch

### Output:
### tensor(0.6039, grad_fn=<NllLossBackward>)
### tensor(0.1082, grad_fn=<NllLossBackward>)
### tensor(0.0194, grad_fn=<NllLossBackward>)
### tensor(0.4282, grad_fn=<NllLossBackward>)
### tensor(0.0063, grad_fn=<NllLossBackward>)


# Get the Accuracy
correct = 0
total = 0

with torch.no_grad():  # gradients are not needed when evaluating
    for data in testset:
        X, y = data
        output = net(X.view(-1,784))
        #print(output)
        for idx, i in enumerate(output):
            #print(torch.argmax(i), y[idx])
            if torch.argmax(i) == y[idx]:
                correct += 1
            total += 1

print("Accuracy: ", round(correct/total, 3))

### Output: 
### Accuracy:  0.915
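
# Extra step beyond the article's code: plot one test image with matplotlib
# and compare the network's prediction with the true label.
import matplotlib.pyplot as plt

X, y = next(iter(testset))
plt.imshow(X[0].view(28, 28), cmap="gray")
plt.show()

print("Predicted:", torch.argmax(net(X[0].view(-1, 784))[0]).item())
print("Actual:", y[0].item())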

 

 

PyTorch & Google Colab Are Great Choices in Data Science

 
PyTorch and Google Colab are useful, powerful, and simple choices and have been widely adopted among the data science community despite PyTorch only being released in 2017 (3 years ago!) and Google Colab in 2018 (2 years ago!).

They have been shown to be great choices in Deep Learning, and as new developments are being released, they may become the best tools to use. Both are backed by two of the biggest names in Tech: Facebook and Google. PyTorch offers a comprehensive set of tools and modules to simplify the deep learning process as much as possible, while Google Colab offers an environment to manage your coding and reproducibility for your projects.

If you’re already using either of these, what have you found to be most valuable to your own work?

 
Original. Reposted with permission.
