
Lab Session 4 - Deep Learning Tutorial

Note

Before you begin, make sure you have downloaded the latest update of the course slides from here, and keep them close while doing the lab.

Warning

We will work with PyTorch. If you don't have a GPU (which is very likely the case if you use a laptop), we recommend installing PyTorch in "CPU only" mode, as the download is much smaller. See the PyTorch installation instructions and select "CPU only" as the compute platform.

If you run this notebook on Google Colab, you'll have access to a GPU.

Objective of the lab

This session is a Deep Learning tutorial in PyTorch.

PyTorch is a Python-based scientific computing package serving two broad purposes:

  • A replacement for NumPy to use the power of GPUs and other accelerators.
  • An automatic differentiation library that is useful to implement Deep Learning architectures.

Note

PyTorch is one of the standard libraries for defining neural networks. This tutorial is loosely based on the 60-minute blitz Deep Learning with PyTorch, but with many original parts.

The tutorial is structured as follows:

  1. Tensors in Pytorch
  2. Understanding the training loop and automatic differentiation
  3. Defining a Deep Learning Architecture
  4. Training a Classifier on CIFAR10, a standard image classification dataset
  5. Study specificities of Text, Audio and Image modalities

Note

Copy, modify and play around with the code snippets that we provide. In part 4, you are expected to complete some empty cells to successfully train and test your net.

1. Tensors

Tensors are a specialized data structure that are very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.

Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other specialized hardware to accelerate computing. If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along in this quick API walkthrough.

import torch
import numpy as np

Tensor Initialization

Tensors can be initialized in various ways. Take a look at the following examples:

Directly from data

Tensors can be created directly from data. The data type is automatically inferred.

data = [[1, 2],[3, 4]]
x_data = torch.tensor(data)

From a NumPy array

Tensors can be created from NumPy arrays (and vice versa).

np_array = np.array(data)
x_np = torch.from_numpy(np_array)

From another tensor:

The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.

x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")

x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")

With random or constant values:

shape is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.

shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)

print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")

Tensor Attributes

Tensor attributes describe their shape, datatype, and the device on which they are stored.

tensor = torch.rand(3,4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")

Tensor Operations

Over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more are comprehensively described here.

Try out some of the operations from the list. If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.

Standard numpy-like indexing and slicing:

tensor = torch.ones(4, 4)
tensor[:,1] = 0
print(tensor)

Joining tensors

You can use torch.cat to concatenate a sequence of tensors along a given dimension. See also torch.stack, another tensor joining operation that is subtly different from torch.cat.

t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
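
To see the difference between the two, here is a small added example (not in the original notebook) contrasting torch.stack, which creates a new dimension, with torch.cat, which joins along an existing one:

# `tensor` is 4x4; cat joins along dim 1, stack creates a new leading dimension
t_cat = torch.cat([tensor, tensor, tensor], dim=1)      # shape: (4, 12)
t_stack = torch.stack([tensor, tensor, tensor], dim=0)  # shape: (3, 4, 4)
print(t_cat.shape, t_stack.shape)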

Multiplying tensors

# This computes the element-wise product
print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n")
# Alternative syntax:
print(f"tensor * tensor \n {tensor * tensor}")

This computes the matrix multiplication between two tensors

print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")
# Alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")

Bridge to NumPy

Tensor to NumPy array

t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
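
The tensor and the NumPy array share their underlying memory (for CPU tensors), so an in-place change to the tensor is reflected in the array:

t.add_(1)
print(f"t: {t}")
print(f"n: {n}")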

NumPy array to Tensor

n = np.ones(5)
t = torch.from_numpy(n)

Changes in the NumPy array are reflected in the tensor.

np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")

2. Understanding the training loop and automatic differentiation with torch.autograd

torch.autograd is PyTorch’s automatic differentiation engine that powers Deep Learning training. In this section, you will get a conceptual understanding of how autograd is used to train a deep learning architecture.

Background

Neural networks (NNs) are the basic building blocks of deep learning models, and are implemented in PyTorch as a collection of nested functions that are executed on some input data. These functions are defined by parameters (consisting of weights and biases), which in PyTorch are stored in tensors.

Training happens in two steps:

Forward Propagation: In forward prop, the deep learning model is applied to the input data to generate an output. It runs the input data through each of its functions to produce this output.

Backward Propagation: In backprop, the model parameters are adjusted proportionally to the error, i.e. the discrepancy between the obtained output and the ground truth. This is done by traversing backwards from the output, collecting the derivatives of the error with respect to the parameters of the functions (gradients), and optimizing the parameters using gradient descent. For a more detailed walkthrough of backprop, check out this video from 3Blue1Brown: https://www.youtube.com/watch?v=tIeHLnjs5U8.

Usage in PyTorch

Let's take a look at a single training step. For this example, we load a pretrained resnet18 model from torchvision. We create a random data tensor to represent a single image with 3 channels, and height & width of 224, and its corresponding label initialized to some random values.

import torch, torchvision
model = torchvision.models.resnet18(weights="DEFAULT")
data = torch.rand(1, 3, 224, 224) # 1 image with 3 channels that is 224x224. The first dimension corresponds to the batch size, here 1.
labels = torch.rand(1, 1000)

Next, we run the input data through the model, layer by layer, to make a prediction. This is the forward pass.

prediction = model(data) # forward pass

We use the model's prediction and the corresponding label to calculate the error (loss).

N.B.: As an example here, the loss is simply defined as the sum of the differences between the prediction and the labels, but in practice other loss functions are used, such as the Cross-Entropy Loss for classification problems or the Mean Squared Error for regression problems.
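
For illustration, here is a small added sketch of what a Cross-Entropy loss would look like for this 1000-class output; the class index used as target is an arbitrary dummy value:

# Hypothetical example: cross-entropy between the (1, 1000) prediction and a dummy class index
criterion = torch.nn.CrossEntropyLoss()
dummy_target = torch.tensor([42])             # one ground-truth class index for the single image
ce_loss = criterion(prediction, dummy_target)
print(ce_loss)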

The next step is to backpropagate this error through the network. Backward propagation is kicked off when we call .backward() on the error tensor. Autograd then calculates and stores the gradients for each model parameter in the parameter's .grad attribute.

loss = (prediction - labels).sum()
loss.backward() # backward pass

Next, we load an optimizer, in this case stochastic gradient descent SGD with a learning rate of 0.01 and momentum of 0.9. We register all the parameters of the model in the optimizer.

Finally, we call .step() to initiate gradient descent. The optimizer adjusts each parameter by its gradient stored in .grad.

optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
optim.step() #gradient descent

Differentiation in Autograd

Let's take a look at how autograd collects gradients. We create two tensors a and b with requires_grad=True. This signals to autograd that every operation on them should be tracked.

import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)

We create another tensor Q from a and b.

\[\begin{align}Q = 3a^3 - b^2\end{align}\]
Q = 3*a**3 - b**2

Let's assume a and b to be parameters of a model, and Q to be the error. During training, we want gradients of the error w.r.t. parameters, i.e.

\[\begin{align}\frac{\partial Q}{\partial a} = 9a^2\end{align}\]
\[\begin{align}\frac{\partial Q}{\partial b} = -2b\end{align}\]

When we call .backward() on Q, autograd calculates these gradients and stores them in the respective tensors' .grad attribute.

We need to explicitly pass a gradient argument in Q.backward() because it is a vector. gradient is a tensor of the same shape as Q, and it represents the gradient of Q w.r.t. itself, i.e.

\[\begin{align}\frac{dQ}{dQ} = 1\end{align}\]
external_grad = torch.tensor([1., 1.])

Q.backward(gradient=external_grad)
# Gradients are now deposited in ``a.grad`` and ``b.grad``

# check if collected gradients are correct
print(9*a**2 == a.grad)
print(-2*b == b.grad)
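
Equivalently, we can aggregate Q into a scalar and call backward implicitly, in which case no gradient argument is needed. A small added sketch (a fresh graph is built below, since the previous one has already been consumed by backward):

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
Q = 3*a**3 - b**2
Q.sum().backward()     # dQ_sum/dQ = 1 for every element, matching the external_grad above
print(a.grad, b.grad)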

Computational Graph

Conceptually, autograd keeps a record of data (tensors) & all executed operations (along with the resulting new tensors) in a directed acyclic graph (DAG) consisting of Function objects. In this DAG, leaves are the input tensors, roots are the output tensors. By tracing this graph from roots to leaves, you can automatically compute the gradients using the chain rule.

In a forward pass, autograd does two things simultaneously:

  • run the requested operation to compute a resulting tensor, and
  • maintain the operation’s gradient function in the DAG.

The backward pass kicks off when .backward() is called on the DAG root. autograd then:

  • computes the gradients from each .grad_fn,
  • accumulates them in the respective tensor’s .grad attribute, and
  • using the chain rule, propagates all the way to the leaf tensors.
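
As a small added illustration of these Function nodes, we can rebuild Q from the previous example and inspect its grad_fn:

Q = 3*a**3 - b**2
print(Q.grad_fn)                 # the Function that created Q (a subtraction node)
print(Q.grad_fn.next_functions)  # the Function nodes that produced its inputs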

Note

DAGs are dynamic in PyTorch. An important thing to note is that the graph is recreated from scratch: after each .backward() call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.

Exclusion from the DAG

torch.autograd tracks operations on all tensors which have their requires_grad flag set to True. For tensors that don't require gradients, setting this attribute to False excludes them from the gradient computation DAG.

The output tensor of an operation will require gradients even if only a single input tensor has requires_grad=True.

x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)

a = x + y
print(f"Does `a` require gradients? : {a.requires_grad}")
b = x + z
print(f"Does `b` require gradients?: {b.requires_grad}")

In a NN, parameters that don't compute gradients are usually called frozen parameters. It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters (this offers some performance benefits by reducing autograd computations).

Another common use case where exclusion from the DAG is important is transfer learning (see finetuning a pretrained network).

In transfer learning, we freeze most of the model and typically only modify the classifier layers (last fully connected layers) to make predictions on new labels. Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.

from torch import nn, optim

model = torchvision.models.resnet18(weights='DEFAULT')

# Freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False

Let's say we want to finetune the model on a new dataset with 10 labels. In resnet, the classifier is the last linear layer model.fc. We can simply replace it with a new linear layer (unfrozen by default) that acts as our classifier.

model.fc = nn.Linear(512, 10)

Now all parameters in the model, except those of model.fc, are frozen. The only parameters that compute gradients, and hence the only ones updated during gradient descent, are the weights and bias of the classifier (model.fc).
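
When training such a finetuned model, it is therefore enough to register only the classifier parameters with the optimizer. A minimal sketch (reusing the SGD hyperparameters shown earlier):

# Only model.fc requires gradients, so we only register its parameters
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)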

NOTE:

The same exclusionary functionality is available as a context manager, torch.no_grad(), which can be used for transfer learning (i.e. keeping parts of a model fixed while training the rest).
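
A minimal sketch of the context manager, reusing the x and z tensors from above:

# Inside torch.no_grad(), operations are not tracked even if inputs require gradients
with torch.no_grad():
    c = x + z
print(f"Does `c` require gradients?: {c.requires_grad}")  # False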

3. Defining a Deep Learning Model

A deep learning model can be constructed using the modules from the torch.nn package.

Now that you have had a glimpse of autograd, note that nn depends on autograd to define models and differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.

A deep learning model takes the input, feeds it through several layers one after the other, and then finally gives the output.

A typical training procedure for a deep learning model is as follows:

  • Define the model that has some learnable parameters (or weights)
  • Iterate over a dataset of inputs
  • Process input through the model
  • Compute the loss (how far is the output from being correct)
  • Propagate gradients back into the model’s parameters
  • Update the weights of the model, typically using a simple update rule:
\[\begin{align}weight = weight - learningRate * gradient\end{align}\]

Define the model

Let’s define a simple deep learning model :

  • takes as input a greyscale image (1 input channel),
  • processes it with 2 layers of 2D convolutional filters (Conv2d), each followed by ReLU and 2D max pooling,
  • followed by a 3-layer perceptron, composed of Linear units and ReLU.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(in_features = 16 * 6 * 6, out_features = 120)  # 6*6 from image dimension, 16 for channels
        self.fc2 = nn.Linear(in_features = 120, out_features = 84)
        self.fc3 = nn.Linear(in_features = 84, out_features = 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)

The forward function of the model class is the one that implements the forward pass, which is the sequence of operations from input to output. It is possible to use all operators from nn.functional and modules defined in nn, as well as operations on tensors.

The backward function (where gradients are computed) is automatically defined for you using autograd.

The learnable parameters of a model are returned by net.parameters()

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
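
As an aside, the total number of learnable scalars can be obtained by summing the sizes of all parameters (this line is an addition to the original cell):

print(sum(p.numel() for p in net.parameters()))  # total number of learnable weights and biases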

Let's try a random 32x32 input. Note: expected input size of this net is 32x32.

testinput = torch.randn(1, 1, 32, 32)
out = net(testinput)
print(out)
print("Shape of the ouput: ", out.shape)

In order to better understand the inner operations of the model, let's break down the forward pass layer by layer, and print the successive shapes.

# first layer of the model, as defined in the forward function, but we call the conv1 module from the model definition
x = testinput
print(f"Initial shape of the input : {x.shape}")
x = (F.relu(net.conv1(x)))
print(f"Shape after the first convolutional layer (conv1, relu) : {x.shape}")

x = F.max_pool2d(x, (2, 2))

print(f"Shape after max pooling (max_pool2d) with a 2x2 window: {x.shape}")
# Second layer 
x = F.max_pool2d(F.relu(net.conv2(x)), 2)

print(f"Shape after the second convolutional layer and 2x2 max pool (conv2, relu, max_pool2d) : {x.shape}")


x = x.view(-1, net.num_flat_features(x))
print(f"Shape after reshaping (flattening to a 1D vector) : {x.shape}")

x = F.relu(net.fc1(x))
print(f"Shape after FC1 : {x.shape}")

x = F.relu(net.fc2(x))
print(f"Shape after FC2 : {x.shape}")

x = net.fc3(x)
print(f"Shape after FC3, output of the model : {x.shape}")    

Try changing the size (32x32) of the image input and see what happens!

# to complete

PyTorch has a method to automatically set all the gradients to zero. This is needed before computing gradients for a new step, otherwise gradients from previous backward passes would accumulate.

net.zero_grad()

NOTE:

torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample.

For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.

If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
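
For instance, a small added sketch of turning a single 32x32 greyscale image into a mini-batch of one sample:

single_image = torch.randn(1, 32, 32)   # (channels, height, width): no batch dimension
batched = single_image.unsqueeze(0)     # (1, 1, 32, 32): a mini-batch containing one sample
print(batched.shape)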

Before proceeding further, let's recap all the classes you’ve seen so far.

Recap:

  • torch.Tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.
  • nn.Module - Neural network module, basic blocks for defining a model. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
  • nn.Parameter - A kind of Tensor, that is automatically registered as a parameter when assigned as an attribute to a Module.
  • autograd.Function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to functions that created a Tensor and encodes its history.

At this point, we covered:

  • Defining a model
  • Processing inputs and calling backward

Still Left:

  • Computing the loss
  • Updating the weights of the network

Loss Function

A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.

There are several different loss functions (https://pytorch.org/docs/nn.html#loss-functions) under the nn package. A simple loss is nn.MSELoss, which computes the mean-squared error between the input and the target. This loss is well suited to regression problems, where the targets are continuous.

For example:

output = net(testinput)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)

Now, if you follow loss in the backward direction, using its .grad_fn attribute, you will see a graph of computations that looks like this:


input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss

So, when we call loss.backward(), the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that have requires_grad=True will have their .grad Tensor accumulated with the gradient.

For illustration, let us follow a few steps backward:

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU

Backprop

To backpropagate the error, all we have to do is call loss.backward(). You need to clear the existing gradients first though, otherwise the new gradients will be accumulated on top of the existing ones.

Now we shall call loss.backward(), and have a look at conv1's bias gradients before and after the backward.

net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

In practice, you will rarely need to have a look at the gradients when training a deep model, but it's good to know how to do it.

In most applications, defining the loss and performing the backward propagation process using loss.backward() will be sufficient.

Now, we have seen how to use loss functions.

Read Later:

The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here.

The only thing left to learn is:

  • Updating the weights of the network

Update the weights

The simplest update rule used in practice is the Stochastic Gradient Descent (SGD):

\[\begin{align}weight = weight - learningRate * gradient\end{align}\]

We could implement this with (pseudo) Python code such as this:

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)

However, there are various update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, PyTorch implements update rules as "optimizers" in the torch.optim module. Using them is very simple:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01) 

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(testinput)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
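
Switching to another update rule only changes the optimizer construction; for example, a sketch with Adam (the learning rate here is just an illustrative value):

optimizer = optim.Adam(net.parameters(), lr=1e-3)  # the rest of the training loop is unchanged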

You have seen how to define a deep learning model, compute loss and make updates to the weights of the network. One thing is still missing:

Input Data

Generally, when you have to deal with image, text, audio or video data, you can use standard python packages that load data into a numpy array. Then you can convert this array into a torch.Tensor.

  • For images, packages such as Pillow (PIL) and OpenCV are useful
  • For audio, packages such as SciPy, librosa or torchaudio are useful
  • For text, either raw Python or Cython-based loading, or NLTK and SpaCy, are useful

Specifically for vision, PyTorch provides a package called torchvision, which has data loaders for common datasets such as ImageNet, CIFAR10, MNIST, etc., and data transformers for images, namely torchvision.datasets and torch.utils.data.DataLoader.

There is a similar package for audio, which is called torchaudio.

This provides a huge convenience and avoids writing boilerplate code.

For this tutorial, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.

This is it. You are finally ready to:

4. Train a Classifier!

We will do the following steps in order:

a. Load and normalize the CIFAR10 training and test datasets using torchvision

b. Define a Convolutional Neural Network

c. Define a loss function

d. Train the network on the training data

e. Test the network on the test data

a. Loading and normalizing CIFAR10

Using torchvision, it’s extremely easy to load CIFAR10.

import torch
import torchvision
import torchvision.transforms as transforms

The outputs of torchvision datasets are PILImage images with values in the range [0, 1]. We transform them to Tensors with values normalized to the range [-1, 1].

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=8,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=8,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

NOTE:

If you are running on Windows and get a BrokenPipeError, try setting num_workers of torch.utils.data.DataLoader() to 0.

Let us show some of the training images, for fun.

import matplotlib.pyplot as plt
import numpy as np

# functions to show an image


def imshow(img):
    img = img / 2 + 0.5     # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()


# get some random training images and check the size
for images,labels in trainloader:
    print('batch size:', images.size(0))
    print('color channels :', images.size(1))
    print('Image size:'+ str(images.size(2))+ 'x'+ str(images.size(3)))
    break #we just want to fetch the first batch

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(images.shape[0])))


GroundTruth:    cat  ship  ship plane  frog  frog   car  frog

b. Define a Convolutional Neural Network

Copy the neural network from the Neural Networks section above and modify it to take 3-channel (color) images instead of the 1-channel (greyscale) images it was defined for.

NOTE:

Pay attention to the in/out feature dimensions, especially at the transition between a convolutional (Conv) layer and a fully connected (fc) linear layer.

Recap:

Formula for the output feature size h_out of a 2D Conv layer, given h_in (input feature size), k (convolutional kernel size), p (zero padding) and s (stride) (more details in the Conv2d documentation):

\[\begin{align}h_{out}=\left\lfloor\frac{h_{in} + 2p - k}{s}\right\rfloor + 1 \end{align}\]

Remember that the feature size is divided by the MaxPool2d kernel size when passing through a 2D max pooling layer! For example, with the Net from part 3 and a 32x32 input: 32 → 30 (conv1, k=3) → 15 (max pool) → 13 (conv2, k=3) → 6 (max pool, rounded down), which is where the 16 * 6 * 6 input features of fc1 come from.

# TO BE COMPLETED !!!

c. Define a Loss function and optimizer

Let's use a Classification Cross-Entropy loss and SGD with momentum.

# TO BE COMPLETED !!!

d. Train the network

This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize. In this tutorial we will only do a small number of passes over the dataset, n_epochs.

n_epochs=2

for epoch in range(n_epochs):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[epoch %d, batch %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

Let's quickly save our trained model (see https://pytorch.org/docs/stable/notes/serialization.html for more details on saving PyTorch models).

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

e. Test the network on the test data

We have trained the network for n_epochs passes over the training dataset. But we need to check if the network has learnt anything at all.

We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.

Okay, first step. Let us display an image from the test set to get familiar.

for images,labels in testloader:
    # print images
    imshow(torchvision.utils.make_grid(images))
    print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(images.shape[0])))
    break # here again we just want to fetch the first batch

Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):

net = Net()
net.load_state_dict(torch.load(PATH))

Okay, now let us see what the neural network thinks these examples above are.

The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(images.shape[0])))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(images.shape[0])))

Let us look at how the network performs on the whole dataset.

correct = 0
total = 0
with torch.no_grad():  # torch.no_grad for TESTING
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

That looks way better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something.

Hmmm, what are the classes that performed well, and the classes that did not perform well:

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(labels.size(0)):  # loop over every sample in the batch
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))

In order to have a better intuition about what a 2D-convolutional layer is, we will feed a batch of images into the first convolutional layer, and visualize the result.

## the first batch is the "images" tensor
print(f"Tensor of the first batch, shape : {images.shape} ")

## We use the first layer of the model to process the input
processed = net.conv1(images)

## We keep only one image of the batch to visualize it
index_img = 7 ## this is between 0 and batch_size - 1 

image = images[index_img]
image = image.unsqueeze(0) ## this is needed to add a singleton dimension to the tensor, so that we can visualize it with make_grid
# here we will add a singleton dimension as if we had a batch of size 1 to visualize, but we are keeping the three channels to keep the colors 


## same thing with the output of the first convolutional layer
processed = processed[index_img]
processed = processed.unsqueeze(1) ## this is needed to add a singleton dimension to the tensor, so that we can visualize it with make_grid
# here, remember that we want to visualize the output of the first convolutional layer, which has 6 channels. We need to add a singleton dimension to the tensor to visualize it with make_grid
# we keep the six feature maps as the "batch size" of the tensor, and we add a singleton dimension as a single channel

# visualize
print("Original image")
imshow(torchvision.utils.make_grid(image.detach()))

print("Outputs of the first convolutional layer")
imshow(torchvision.utils.make_grid(processed.detach(),scale_each=True, normalize=True))
Tensor of the first batch, shape : torch.Size([8, 3, 32, 32]) 
Original image


Outputs of the first convolutional layer


You can see that the outputs of the first convolutional layer correspond to filtered versions of the input. Convolutions can enhance or reduce local contrast changes, edges and contours. A convolutional neural network will generate many different "feature maps" such as these.

Let's now see the effect of ReLU and max pooling.

processed_relumaxpool = F.max_pool2d(F.relu(net.conv1(images)), (2, 2))

## same thing with the output of the first convolutional layer
processed_relumaxpool = processed_relumaxpool[index_img]
processed_relumaxpool = processed_relumaxpool.unsqueeze(1)
# visualize
print("Original image")
imshow(torchvision.utils.make_grid(image.detach()))

print("Outputs of the first convolutional layer")
imshow(torchvision.utils.make_grid(processed.detach(),normalize=True,value_range=(-1,1),padding=0))

## print ranges 
print(f"Range of the original image : [{torch.min(image).item()}, {torch.max(image).item()}]")
print(f"Range of the output of the first convolutional layer : [{torch.min(processed).item()}, {torch.max(processed).item()}]")
print(f"Range of the output of the first convolutional layer after Relu and Max Pool : [{torch.min(processed_relumaxpool).item()}, {torch.max(processed_relumaxpool).item()}]")


print("After Relu and Max Pool")
imshow(torchvision.utils.make_grid(processed_relumaxpool.detach(), normalize=True,value_range=(-1,1),padding=0))
Original image


Outputs of the first convolutional layer


Range of the original image : [-0.8039215803146362, 0.9764705896377563]
Range of the output of the first convolutional layer : [-2.5888426303863525, 2.6774139404296875]
Range of the output of the first convolutional layer after Relu and Max Pool : [0.0, 3.9081757068634033]
After Relu and Max Pool


By alternating 2D convolutional filters (which amplify or attenuate contrasts, edges, etc.), ReLU (which acts as a threshold), and max pooling (which reduces the resolution while keeping the largest values), many smaller "images" (also called "feature maps") are computed as the model gets deeper.

Okay, so what next?

How do we run these neural networks on the GPU?

Training on GPU

Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU.

Let's first define our device as the first visible cuda device if we have CUDA available:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Assuming that we are on a CUDA machine, this should print a CUDA device:

print(device)

The rest of this section assumes that device is a CUDA device.

Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors:

    net.to(device)

Remember that you will have to send the inputs and targets at every step to the GPU too:

        inputs, labels = data[0].to(device), data[1].to(device)
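
Putting it together, here is a minimal added sketch of one training step on the GPU, assuming the criterion and optimizer you defined in part 4 are available:

net.to(device)  # if the optimizer was already used on CPU, it is safest to re-create it here
for data in trainloader:
    inputs, labels = data[0].to(device), data[1].to(device)  # move the batch to the GPU
    optimizer.zero_grad()
    outputs = net(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    break  # a single step, just to check that everything runs on the device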

Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small.

5. Specificities of modalities

Here are links covering the specificities of each modality:

This is the end of the Tutorial!

Goals achieved:

  • Understanding PyTorch's Tensor library and neural networks at a high level.
  • Training a small neural network to classify images.