Lab Session 4 - Deep Learning Tutorial⚓
Note
Before you begin, make sure you have downloaded the latest update of the course slides from here, and keep them close while doing the lab.
Warning
We will work with PyTorch. If you don't have a GPU (which is very likely to be the case if you use a laptop), we recommend installing PyTorch in "CPU only" mode, as it will be much smaller to download. See the PyTorch installation instructions and select "CPU only" as the compute platform.
If you run this notebook on Google Colab, you'll have access to a GPU.
Objective of the lab⚓
This session is a Deep Learning tutorial in PyTorch.
PyTorch is a Python-based scientific computing package serving two broad purposes:
- A replacement for NumPy to use the power of GPUs and other accelerators.
- An automatic differentiation library that is useful to implement Deep Learning architectures.
Note
PyTorch is one of the standard libraries used to define neural networks. This tutorial is loosely based on the 60 min blitz Deep Learning with PyTorch, but with many original parts.
The tutorial is structured as follows:
- Tensors in PyTorch
- Understanding the training loop and automatic differentiation
- Defining a Deep Learning Architecture
- Training a Classifier on CIFAR10, a standard image classification dataset
- Studying the specificities of the Text, Audio and Image modalities
Note
Copy / modify / play around with the code snippets that we provide. In part 4, you are expected to complete some empty cells to successfully train and test your net.
1. Tensors⚓
Tensors are a specialized data structure that are very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.
Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other specialized hardware to accelerate computing. If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along in this quick API walkthrough.
Tensor Initialization⚓
Tensors can be initialized in various ways. Take a look at the following examples:
Directly from data
Tensors can be created directly from data. The data type is automatically inferred.
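For example, a minimal snippet would be (the values are just an illustration):

```python
import torch

data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)   # dtype is inferred from the data (here torch.int64)
```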
From a NumPy array
Tensors can be created from NumPy arrays (and vice versa).
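A sketch, reusing the data list from above:

```python
import numpy as np

np_array = np.array(data)
x_np = torch.from_numpy(np_array)
```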
From another tensor:
The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.
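For instance, reusing x_data from above:

```python
x_ones = torch.ones_like(x_data)                      # retains the shape and dtype of x_data
x_rand = torch.rand_like(x_data, dtype=torch.float)   # overrides the dtype of x_data
```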
With random or constant values:
shape is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.
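A minimal sketch (the shape is arbitrary):

```python
shape = (2, 3)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
```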
Tensor Attributes⚓
Tensor attributes describe their shape, datatype, and the device on which they are stored.
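For example (the shape is arbitrary):

```python
tensor = torch.rand(3, 4)

print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
```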
Tensor Operations⚓
Over 100 tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random sampling, and more are comprehensively described here.
Try out some of the operations from the list. If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
Standard numpy-like indexing and slicing:
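A small sketch:

```python
tensor = torch.ones(4, 4)
tensor[:, 1] = 0    # set the whole second column to zero
print(tensor)
```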
Joining tensors
You can use torch.cat to concatenate a sequence of tensors along a given dimension. See also torch.stack, another tensor joining operation that is subtly different from torch.cat.
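For instance, reusing the tensor defined just above:

```python
t1 = torch.cat([tensor, tensor, tensor], dim=1)   # concatenate along the columns
print(t1)
```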
Multiplying tensors
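A sketch of the element-wise product (still with the tensor from above):

```python
# element-wise product
print(tensor.mul(tensor))
# alternative syntax:
print(tensor * tensor)
```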
This computes the matrix multiplication between two tensors:
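For example:

```python
# matrix multiplication
print(tensor.matmul(tensor.T))
# alternative syntax:
print(tensor @ tensor.T)
```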
Bridge to NumPy
Tensor to NumPy array
NumPy array to Tensor
Changes in the NumPy array are reflected in the tensor.
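A sketch covering the three points above (on CPU, the tensor and the NumPy array share the same underlying memory, so modifying one modifies the other):

```python
# Tensor to NumPy array
t = torch.ones(5)
n = t.numpy()

# NumPy array to Tensor
n2 = np.ones(5)
t2 = torch.from_numpy(n2)

# changes in the NumPy array are reflected in the tensor
np.add(n2, 1, out=n2)
print(t2)   # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
print(n2)   # [2. 2. 2. 2. 2.]
```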
2. Understanding the training loop and automatic differentiation with torch.autograd⚓
torch.autograd is PyTorch’s automatic differentiation engine that powers Deep Learning training. In this section, you will get a conceptual understanding of how autograd is used to train a deep learning architecture.
Background⚓
Neural networks (NNs) are the basic building blocks of deep learning models, and are implemented in PyTorch as a collection of nested functions that are executed on some input data. These functions are defined by parameters (consisting of weights and biases), which in PyTorch are stored in tensors.
Training happens in two steps:
Forward Propagation: In forward prop, the deep learning model is applied to the input data to generate an output. It runs the input data through each of its functions to produce this output.
Backward Propagation: In backprop, the model parameters are adjusted proportionately to the error, i.e. the discrepancy between the obtained output and the ground truth. It does this by traversing backwards from the output, collecting the derivatives of the error with respect to the parameters of the functions (gradients), and optimizing the parameters using gradient descent. For a more detailed walkthrough of backprop, check out this [video from 3Blue1Brown](https://www.youtube.com/watch?v=tIeHLnjs5U8).
Usage in PyTorch⚓
Let's take a look at a single training step.
For this example, we load a pretrained resnet18 model from torchvision. We create a random data tensor to represent a single image with 3 channels and a height & width of 224, and its corresponding label initialized to some random values.
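A sketch of this setup (the weights argument is for recent torchvision versions; older versions use pretrained=True instead):

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
data = torch.rand(1, 3, 224, 224)    # one random 3x224x224 "image"
labels = torch.rand(1, 1000)         # a random target over the 1000 ImageNet classes
```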
Next, we run the input data through the model, layer by layer, to make a prediction. This is the forward pass.
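For example, with the model and data defined above:

```python
prediction = model(data)   # forward pass
```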
We use the model's prediction and the corresponding label to calculate the error (loss).
N.B.: As an example here, the loss is simply defined as the difference between the prediction and the labels, but in practice other loss functions are used, such as the Cross Entropy Loss for classification problems or the Mean Square Error for regression problems.
The next step is to backpropagate this error through the network.
Backward propagation is kicked off when we call .backward() on the error tensor. Autograd then calculates and stores the gradients for each model parameter in the parameter's .grad attribute.
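A sketch, using the simple difference-based loss described above:

```python
loss = (prediction - labels).sum()
loss.backward()   # backward pass: autograd fills each parameter's .grad attribute
```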
Next, we load an optimizer, in this case stochastic gradient descent (SGD) with a learning rate of 0.01 and momentum of 0.9. We register all the parameters of the model in the optimizer.
Finally, we call .step() to initiate gradient descent. The optimizer adjusts each parameter by its gradient stored in .grad.
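For instance:

```python
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
optim.step()   # gradient descent: each parameter is updated using its .grad
```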
Differentiation in Autograd⚓
Let's take a look at how autograd collects gradients. We create two tensors a and b with requires_grad=True. This signals to autograd that every operation on them should be tracked.
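For example (the values are illustrative):

```python
import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
```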
We create another tensor Q from a and b.
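For instance, with the tensors above, Q could be:

```python
Q = 3 * a**3 - b**2
```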
Let's assume a and b to be parameters of a model, and Q to be the error. During training, we want the gradients of the error w.r.t. the parameters, i.e. ∂Q/∂a and ∂Q/∂b.
When we call .backward() on Q, autograd calculates these gradients and stores them in the respective tensors' .grad attribute.
We need to explicitly pass a gradient argument in Q.backward() because Q is a vector. gradient is a tensor of the same shape as Q, and it represents the gradient of Q w.r.t. itself, i.e. dQ/dQ = 1.
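Continuing the example above (with Q = 3a³ - b², we have ∂Q/∂a = 9a² and ∂Q/∂b = -2b):

```python
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)

# check that the collected gradients are correct
print(9 * a**2 == a.grad)   # tensor([True, True])
print(-2 * b == b.grad)     # tensor([True, True])
```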
Computational Graph⚓
Conceptually, autograd keeps a record of data (tensors) & all executed operations (along with the resulting new tensors) in a directed acyclic graph (DAG) consisting of Function objects. In this DAG, leaves are the input tensors, roots are the output tensors. By tracing this graph from roots to leaves, you can automatically compute the gradients using the chain rule.
In a forward pass, autograd does two things simultaneously:
- run the requested operation to compute a resulting tensor, and
- maintain the operation’s gradient function in the DAG.
The backward pass kicks off when .backward() is called on the DAG root. autograd then:
- computes the gradients from each .grad_fn,
- accumulates them in the respective tensor’s .grad attribute, and
- using the chain rule, propagates all the way to the leaf tensors.
Note
DAGs are dynamic in PyTorch
An important thing to note is that the graph is recreated from scratch; after each .backward() call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.
Exclusion from the DAG⚓
torch.autograd tracks operations on all tensors which have their requires_grad flag set to True. For tensors that don't require gradients, setting this attribute to False excludes them from the gradient computation DAG.
The output tensor of an operation will require gradients even if only a single input tensor has requires_grad=True.
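A small sketch illustrating this behaviour:

```python
x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)

a = x + y
print(f"Does `a` require gradients? {a.requires_grad}")   # False
b = x + z
print(f"Does `b` require gradients? {b.requires_grad}")   # True
```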
In a NN, parameters that don't compute gradients are usually called frozen parameters. It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters (this offers some performance benefits by reducing autograd computations).
Another common use case where exclusion from the DAG is important is transfer learning (see finetuning a pretrained network).
In transfer learning, we freeze most of the model and typically only modify the classifier layers (last fully connected layers) to make predictions on new labels. Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
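A sketch of this step (again, older torchvision versions use pretrained=True):

```python
from torch import nn
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False
```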
Let's say we want to finetune the model on a new dataset with 10 labels. In resnet, the classifier is the last linear layer, model.fc. We can simply replace it with a new linear layer (unfrozen by default) that acts as our classifier.
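For example (512 is the feature dimension coming out of resnet18's last convolutional block):

```python
model.fc = nn.Linear(512, 10)
```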
Now all parameters in the model, except the parameters of model.fc, are frozen. The only parameters that compute gradients are the weights and bias of model.fc. This also means that during training, the only parameters that are updated by gradient descent are the weights and bias of the classifier (model.fc).
NOTE:⚓
The same exclusionary functionality is available as a context manager, torch.no_grad(), which can be used for transfer learning (i.e. keeping parts of a model fixed while training the rest).
3. Defining a Deep Learning Model⚓
A deep learning model can be constructed using the modules from the torch.nn package.
Now that you have had a glimpse of autograd: nn depends on autograd to define models and differentiate them.
An nn.Module contains layers, and a method forward(input) that returns the output.
A deep learning model takes the input, feeds it through several layers one after the other, and then finally gives the output.
A typical training procedure for a deep learning model is as follows:
- Define the model that has some learnable parameters (or weights)
- Iterate over a dataset of inputs
- Process input through the model
- Compute the loss (how far is the output from being correct)
- Propagate gradients back into the model’s parameters
- Update the weights of the model, typically using a simple update rule: weight = weight - learning_rate * gradient
Define the model⚓
Let’s define a simple deep learning model that:
- takes as input a greyscale image (1 input channel),
- processes it with 2 layers of 2D convolutional filters (Conv2d), each followed by ReLU and 2D max pooling,
- and ends with a 3-layer perceptron, which is composed of Linear units and ReLU.
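A sketch of such a model, in the spirit of the classic LeNet architecture; the exact channel and layer sizes below are illustrative and may differ from the original lab cell:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # 1 input image channel, 6 output channels, 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b (16 * 5 * 5 comes from a 32x32 input)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # convolution -> ReLU -> max pooling over a (2, 2) window, twice
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # flatten all dimensions except the batch dimension
        x = torch.flatten(x, 1)
        # 3-layer perceptron
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()
print(net)
```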
The forward function of the model class implements the forward pass, which is the sequence of operations from input to output. It is possible to use all operators from nn.functional and modules defined in nn, as well as operations on tensors.
The backward function (where gradients are computed) is automatically defined for you using autograd.
The learnable parameters of a model are returned by net.parameters():
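For example, with the sketch network above:

```python
params = list(net.parameters())
print(len(params))
print(params[0].size())   # conv1's weights
```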
Let's try a random 32x32 input. Note: expected input size of this net is 32x32.
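A sketch:

```python
input = torch.randn(1, 1, 32, 32)   # a batch containing one random 1x32x32 "image"
out = net(input)
print(out)
```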
In order to better understand the inner operations of the model, let's break down the forward pass layer by layer, and print the successive shapes.
Try changing the size (32x32) of the input image and see what happens!
PyTorch has a method to automatically set all the gradients to zero; it should be called before each new backward pass, otherwise gradients from previous iterations are accumulated.
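For example:

```python
net.zero_grad()   # set the .grad buffer of every parameter to zero
```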
NOTE:⚓
torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
Before proceeding further, let's recap all the classes you’ve seen so far.
Recap:
- torch.Tensor - A multi-dimensional array with support for autograd operations like backward(). Also holds the gradient w.r.t. the tensor.
- nn.Module - Neural network module, the basic block for defining a model. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
- nn.Parameter - A kind of Tensor that is automatically registered as a parameter when assigned as an attribute to a Module.
- autograd.Function - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single Function node that connects to the functions that created a Tensor and encodes its history.
At this point, we covered:
- Defining a model
- Processing inputs and calling backward
Still Left:
- Computing the loss
- Updating the weights of the network
Loss Function⚓
A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.
There are several different [loss functions](https://pytorch.org/docs/nn.html#loss-functions) under the nn package.
A simple loss is nn.MSELoss, which computes the mean-squared error between the input and the target. This loss is adapted for regression problems, where the targets are continuous.
For example:
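A sketch, reusing net and input from above (the target is a dummy tensor, reshaped to match the output):

```python
output = net(input)
target = torch.randn(10)       # a dummy target, for the sake of the example
target = target.view(1, -1)    # make it the same shape as the output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
```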
Now, if you follow loss in the backward direction, using its .grad_fn attribute, you will see the graph of computations that produced it: the sequence of forward operations (convolutions, activations, linear layers) ending at the loss function.
So, when we call loss.backward(), the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that have requires_grad=True will have their .grad Tensor accumulated with the gradient.
For illustration, let us follow a few steps backward:
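For instance, with the loss computed above (the exact node names depend on your network):

```python
print(loss.grad_fn)                                            # MSELoss
print(loss.grad_fn.next_functions[0][0])                       # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
```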
Backprop⚓
To backpropagate the error, all we have to do is call loss.backward(). You need to clear the existing gradients first though, otherwise the new gradients will be accumulated onto the existing ones.
Now we shall call loss.backward(), and have a look at conv1's bias gradients before and after the backward.
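A sketch (assuming the net has a conv1 layer, as in the model defined earlier):

```python
net.zero_grad()   # zero the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
```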
In practice, you will rarely need to have a look at the gradients when training a deep model, but it's good to know how to do it.
In most applications, defining the loss and performing the backward propagation with loss.backward() will be sufficient.
Now, we have seen how to use loss functions.
Read Later:
The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here.
The only thing left to learn is:
- Updating the weights of the network
Update the weights⚓
The simplest update rule used in practice is Stochastic Gradient Descent (SGD):
weight = weight - learning_rate * gradient
We could implement this using (pseudo) Python code such as this one:
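A minimal sketch of such a manual update:

```python
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)   # in-place update: f = f - lr * grad
```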
However, there are various other update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, PyTorch implements the update rules as "optimizers" in the torch.optim module. Using it is very simple:
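A sketch of a typical usage, reusing net, criterion, input and target from above:

```python
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()               # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()                    # does the parameter update
```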
You have seen how to define a deep learning model, compute loss and make updates to the weights of the network. One thing is still missing:
Input Data⚓
Generally, when you have to deal with image, text, audio or video data, you can use standard Python packages that load the data into a NumPy array. Then you can convert this array into a torch.Tensor.
- For images, packages such as Pillow (PIL), OpenCV are useful
- For audio, packages such as scipy and librosa or torchaudio
- For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful
Specifically for vision, PyTorch has created a package called torchvision, which has data loaders for common datasets such as ImageNet, CIFAR10, MNIST, etc. and data transformers for images: torchvision.datasets and torch.utils.data.DataLoader.
There is a similar package for audio, which is called torchaudio.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
This is it. You are finally ready to:
4. Train a Classifier!⚓
We will do the following steps in order:
a. Load and normalize the CIFAR10 training and test datasets using torchvision
b. Define a Convolutional Neural Network
c. Define a loss function
d. Train the network on the training data
e. Test the network on the test data
a. Loading and normalizing CIFAR10⚓
Using torchvision, it’s extremely easy to load CIFAR10.
The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
NOTE:⚓
If running on Windows you get a BrokenPipeError, try setting
the num_worker of torch.utils.data.DataLoader()
to 0.
Let us show some of the training images, for fun.
b. Define a Convolutional Neural Network⚓
Copy the neural network from the Neural Networks section before and modify it to take 3-channel (color) images, instead of 1-channel (black and white) images as it was defined.
NOTE:⚓
Pay attention to the in/out feature dimensions, especially at the transition between a convolutional (Conv) layer and a fully connected (fc) linear layer.
Recap:
Formula giving the output feature size h_out of a 2D Conv layer, given h_in (input feature size), k (convolutional kernel size), p (zero padding), and s (stride) (more details in the Conv2D documentation):
h_out = floor((h_in + 2*p - k) / s) + 1
Remember that the feature size is divided by the MaxPool2d kernel size when passing through a 2D max pooling layer!
c. Define a Loss function and optimizer⚓
Let's use a Classification Cross-Entropy loss and SGD with momentum.
d. Train the network⚓
This is when things start to get interesting.
We simply have to loop over our data iterator, feed the inputs to the network and optimize. In this tutorial we will consider a small number of iterations over the dataset, n_epochs.
Let's quickly save our trained model (see [here](https://pytorch.org/docs/stable/notes/serialization.html) for more details on saving PyTorch models).
e. Test the network on the test data⚓
We have trained the network for n_epochs passes over the training dataset. But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):
Okay, now let us see what the neural network thinks these examples above are.
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
Let us look at how the network performs on the whole dataset.
That looks way better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something.
Hmmm, what are the classes that performed well, and the classes that did not perform well:
In order to have a better intuition about what a 2D-convolutional layer is, we will feed a batch of images into the first convolutional layer, and visualize the result.
You can see that the output of the first convolutional layer corresponds to filtered versions of the input. Convolutions can enhance or reduce some local contrast changes, or edges / contours. A convolutional neural network model will generate many different "feature maps" such as this one.
Let's now see the effect of ReLU and max pooling.
By alternating 2D convolutional filters (which amplify or attenuate contrasts, edges, etc.), ReLU (which acts as a threshold), and max pooling (which reduces the resolution while keeping the largest values), many smaller "images" (also called "feature maps") are computed as the model gets deeper.
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU⚓
Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU.
Let's first define our device as the first visible cuda device if we have CUDA available:
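For example:

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
```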
The rest of this section assumes that device is a CUDA device.
Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
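For example, with the net you defined for CIFAR10:

```python
net.to(device)
```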
Remember that you will have to send the inputs and targets at every step to the GPU too:
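For instance, inside the training loop (assuming each data item from the loader is an (images, labels) pair):

```python
inputs, labels = data[0].to(device), data[1].to(device)
```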
5. Specificities of modalities⚓
Here are some links covering the specificities of each modality:
- Text : An introduction to Tokenization
- Audio : An introduction to dealing with audio data
- Image : Preprocessing for Computer Vision
This is the end of the Tutorial!⚓
Goals achieved:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Training a small neural network to classify images