# Resources for the Vision Application
This application is based on iNaturalist, from which we extract subdatasets specifically for this course.
## Resources for Session 5
First, you should load the images extracted from iNaturalist for the 7 classes of Labs 2 and 3. Please download them by following this link.
!!! warning
    We will work with the `torch` library, which you should already know from Lab 4. We also need the `transformers` library from Hugging Face and `PIL`, which is useful for processing images. A couple of other libraries can be useful for downloading the foundation model. Simply install them with pip.
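As an indication, the installation could look like the commands below; the exact package list is an assumption (in particular `open_clip_torch`, which provides the `create_model_and_transforms` function used later).

```bash
# Install the libraries used in this session (package names may differ in your setup)
pip install torch torchvision
pip install transformers
pip install pillow
pip install open_clip_torch
```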
!!! warning
    The pretrained OpenCLIP model is 4 GB, so ensure you have enough space in your cache directory. You can specify a different download location by setting the `cache_dir=''` argument in the `create_model_and_transforms` function.
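Here is a minimal sketch of the model download, assuming the ViT-H/14 weights from `open_clip` (the `pretrained` tag and the cache path are placeholders to adapt to your setup):

```python
import open_clip

# Download the pretrained ViT-H/14 model (about 4 GB) and its preprocessing pipeline.
# cache_dir controls where the weights are stored; point it to a disk with enough space.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14",
    pretrained="laion2b_s32b_b79k",  # assumed pretrained tag for ViT-H/14
    cache_dir="/path/to/your/cache",
)
model.eval()
```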
You now have to make the necessary imports and build the dataloader from the dataset path (it turns out to be quite easy using the torchvision `datasets` module).
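A possible sketch, assuming the images were extracted into one sub-folder per class (the path below is a placeholder):

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets

# Placeholder path: point it to the folder downloaded at the beginning of this session.
dataset_path = "path/to/inaturalist-7-classes"

# ImageFolder builds a labelled dataset from the directory structure and applies
# the CLIP preprocessing returned by create_model_and_transforms.
dataset = datasets.ImageFolder(dataset_path, transform=preprocess)
print(dataset.classes)  # the 7 class names
```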
Find the train and test sizes using `len(dataset)`, setting the test set to 20% of the total. Then use `random_split` to generate the train and test datasets.
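A minimal sketch of the split and of the corresponding dataloaders (the batch size is an arbitrary choice):

```python
# 20% of the samples go to the test set, the rest to the train set.
test_size = int(0.2 * len(dataset))
train_size = len(dataset) - test_size
train_dataset, test_dataset = random_split(dataset, [train_size, test_size])

# Dataloaders to iterate over the images batch by batch.
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
```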
Generate embeddings for the train and test set using the function `generate_embeddings`.
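Here is a sketch of what such a helper could look like, assuming the `model` and dataloaders defined above (the L2 normalisation of the features is a common but optional choice):

```python
import numpy as np

@torch.no_grad()
def generate_embeddings(loader, model, device="cuda" if torch.cuda.is_available() else "cpu"):
    """Encode every image of the loader with the CLIP vision encoder."""
    model = model.to(device)
    all_features, all_labels = [], []
    for images, labels in loader:
        features = model.encode_image(images.to(device))
        features = features / features.norm(dim=-1, keepdim=True)  # L2-normalise
        all_features.append(features.cpu().numpy())
        all_labels.append(labels.numpy())
    return np.concatenate(all_features), np.concatenate(all_labels)

X_train, y_train = generate_embeddings(train_loader, model)
X_test, y_test = generate_embeddings(test_loader, model)
```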
Now, using the same approach as in Lab 2, check that you obtain similar classification performance on the embeddings. You can also use an unsupervised approach (UMAP, t-SNE) to visualize them.
## Resources for Session 3
Use the same data as for Session 2 in order to perform unsupervised learning. If you use clustering, you can use the labels to evaluate the quality of your clustering.
Because this is unsupervised, you should use both X_train and X_test in your model, concatenated into one large X. If you want to compare results between the supervised model of Lab 2 and the unsupervised model of Lab 3, though, don't forget to compare on the same data (i.e. the test dataset)!
Results can be estimated using several metrics (see the sklearn documentation for details):
- With labels, you can use the Rand index, homogeneity, completeness, and v-measure.
- Without labels, you may use the inertia and the silhouette score.
It is generally informative to use both kinds of metrics.
In the end, using the KMeans algorithm on all the embeddings with \(K=6\) results in:
- inertia: 193.07
- Rand index: 0.86
You should try varying \(K\) and see what happens!
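A minimal sketch of such a run, assuming the embeddings and labels loaded as in Session 2 (`n_init` and `random_state` are arbitrary choices, and `rand_score` is used here for the Rand index; `adjusted_rand_score` is another option):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import rand_score, silhouette_score

# Unsupervised learning: use all the embeddings at once.
X = np.concatenate([X_train, X_test])
y = np.concatenate([y_train, y_test])

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)

print("inertia:", kmeans.inertia_)                         # without labels
print("silhouette:", silhouette_score(X, kmeans.labels_))  # without labels
print("Rand index:", rand_score(y, kmeans.labels_))        # with labels
```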
## Resources for Session 2
### Dataset
Main features:
- Seven classes were sampled from the iNaturalist taxonomy.
- There are 100 samples for each class, split into approximately 80 for train and 20 for test.
- Class names can be fetched from the embedding file (see below), and you can get examples of images on the iNaturalist website.
### Latent Space
As for Lab 1, the images have been put in a latent space using the vision encoder ViT-H/14 from OpenCLIP, a deep learning model from this paper. We will delve into the details of Deep Learning and feature extraction from course 4 onwards.
For now, you can just open the numpy array containing all samples in the latent space from the file embeddings-cv-lab2.npz. This file behaves like a dictionary whose entries are keyed by "X_train", "y_train", "X_test", and "y_test".
In that regard, the file can be loaded using the following code snippet:
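A minimal sketch of the loading step, assuming the file is in your working directory:

```python
import numpy as np

# Load the OpenCLIP embeddings and the associated labels.
data = np.load("embeddings-cv-lab2.npz", allow_pickle=True)
print(data.files)  # lists every key stored in the file

X_train, y_train = data["X_train"], data["y_train"]
X_test, y_test = data["X_test"], data["y_test"]
print(X_train.shape, X_test.shape)
```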
### Work to do
Compute the classification on this data, using the technique you chose. Please refer to the Lab Session 2 main page for details.
As an example, here are the results obtained using the K-Nearest Neighbours algorithm with \(K=10\):
| | Precision | Recall | F1-score |
|---|---|---|---|
| Eriogonum | 0.70 | 0.88 | 0.78 |
| Rubus | 0.82 | 0.78 | 0.80 |
| Quercus | 0.74 | 1.00 | 0.85 |
| Ericales | 0.80 | 0.63 | 0.71 |
| Lamioideae | 0.95 | 0.87 | 0.91 |
| Ranunculeae | 0.62 | 0.95 | 0.75 |
| Ranunculaceae | 0.67 | 0.20 | 0.31 |
You should be able to replicate these results using the function `classification_report` from scikit-learn:
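A minimal sketch, assuming `y_pred` holds the predictions of your classifier on the test set and `class_names` the ordered list of class names (both names are placeholders):

```python
from sklearn.metrics import classification_report

# y_test: true test labels, y_pred: labels predicted by your classifier
print(classification_report(y_test, y_pred, target_names=class_names))
```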
## Resources for Session 1
### Dataset
Main features:
- 200 images
- Of the 200 images, 100 are insects and 100 are plants.
### Visualisation of a few examples
Plants
Insects
### Latent Space
The 200 images have been put in a latent space using the vision encoder ViT-H/14 from OpenCLIP, a deep learning model from this paper. We will delve into the details of Deep Learning and feature extraction from course 4 onwards.
For now, you can just open the numpy array containing all samples in the latent space from the file embeddings-cv-lab1.npz.
### Work to do
Compute, visualize, and interpret the distance matrix, as explained on the Lab Session 1 main page.
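A minimal sketch of this computation, assuming the embeddings are stored under a key named "X" (check `data.files` for the actual key names) and using the cosine distance as one possible metric:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist

data = np.load("embeddings-cv-lab1.npz", allow_pickle=True)
print(data.files)  # check which keys the file actually contains
X = data["X"]      # assumed key name for the 200 embeddings

# Pairwise cosine distances between every pair of images.
dist = cdist(X, X, metric="cosine")

plt.imshow(dist, cmap="viridis")
plt.colorbar(label="cosine distance")
plt.title("Distance matrix of the 200 image embeddings")
plt.show()
```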