Project: Improving Classification through Self-supervised Learning
Overview
This project is one of the options you can opt into for the summative assessment, carrying 25% of the total module marks. In this project, you will apply self-supervised learning to improve classification performance. You will: 1) build a classification model; 2) improve its performance through pretraining; 3) write a report.
Submission
TBC
You should submit a zip file containing the following:
• Your code;
• A 5-page report (IEEE double-column format) explaining your code, visualising the results you obtained, and discussing your observations. If you wish, you may include additional pages, but only for visualisations and references (this is optional, and you will not lose any marks if you do not include additional pages). However, the main text of your report must fit in the first 5 pages, and the additional pages (if any) should only include visualisations with short captions, and references.
Please note that every visualisation and table you include in your report must be referenced in the main text (typically by Figure or Table number).
Background
Melanoma Classification
While melanoma is the least common type of skin cancer, it tragically accounts for 75% of deaths from the disease, making it the deadliest form. Despite this, over 100,000 new cases of melanoma were expected in 2020, with nearly 7,000 people losing their lives to it. Just like with other cancers, early and accurate detection is crucial for effective
treatment. This is where data science could play a vital role, potentially aiding in identifying melanoma early and
improving patient outcomes. You will participate in the Kaggle competition “SIIM-ISIC Melanoma Classification”.
The dataset contains 33,126 training records, each consisting of a lesion image and metadata. The images and metadata are located in ‘jpeg’ and ‘train.csv’ respectively. The metadata includes diagnosis information, including the patient ID, sex, age, the anatomic site, the lesion diagnosis, a target (1 for melanoma and 0 for others), and so on.
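For concreteness, here is a minimal sketch of how the records could be loaded, assuming the competition’s ‘image_name’ and ‘target’ columns in ‘train.csv’ and a flat ‘jpeg’ directory of images (the class name and paths are illustrative):

import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

class MelanomaDataset(Dataset):
    """Pairs each lesion image with its binary label from train.csv."""

    def __init__(self, csv_path, image_dir, transform=None):
        self.df = pd.read_csv(csv_path)   # metadata: image_name, sex, age, ..., target
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(f"{self.image_dir}/{row['image_name']}.jpg").convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, int(row["target"])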
Figure 1: Submit your results.
To submit your results, you must train a model on the training set and predict the probability for each sample in the test set. The final result is evaluated on the area under the ROC curve through Kaggle, where you can either upload a ‘submission.csv’ or select a notebook (recommended), as shown in Figure 1.
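A minimal sketch of generating ‘submission.csv’, assuming a model with a single-logit head and a test loader that yields image batches (the function name and arguments are illustrative, not part of any referenced project):

import pandas as pd
import torch

@torch.no_grad()
def make_submission(model, test_loader, image_names, device, out_path="submission.csv"):
    """Predict melanoma probabilities for the test set and write a Kaggle submission file."""
    model.eval()
    probs = []
    for images in test_loader:                       # the test loader yields images only
        logits = model(images.to(device))
        probs.extend(torch.sigmoid(logits).squeeze(1).cpu().tolist())
    # The competition expects the columns 'image_name' and 'target'.
    pd.DataFrame({"image_name": image_names, "target": probs}).to_csv(out_path, index=False)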
There are many excellent works available on Kaggle to help you understand melanoma classification, such as
1st-Place-Solution and SIIM-Transformer. Note that your goal is to understand how self-supervised learning facilitates
classification, so you may use one suitable project as your baseline. To speed up the training process, you may use a
lower resolution or fewer training epochs.
Self-supervised Learning
Self-supervised learning (SSL) is a machine-learning technique in which a model generates its supervisory signals from the data itself, instead of relying on external labels provided by humans. Essentially, the model creates its own learning objectives from the data it is given. You are going to use the images in the training set, without their labels, to obtain a better starting point. One advantage of SSL is improved data efficiency [3], which matters in medical problems where privacy concerns limit the amount of available data.
Successful self-supervised learning methods include MoCo [6], SiT [1], GMML [2], MAE [5], and so on. These
methods can be roughly categorized into two branches: contrastive learning and masked image modeling. You may
use packages to build models quickly through PyTorch Image Models (timm) and MAE. You may use the Efficient Self-supervised Learning Library to train your model from scratch. There are more repos you can refer to: SiT and TinyViT.
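As an illustration of the timm route, a backbone can be constructed in a few lines (the model name below is one choice among many in timm; pick one that fits your resources):

import timm

# A small ViT is a reasonable starting point on limited hardware.
model = timm.create_model(
    "vit_small_patch16_224",   # swap for e.g. "resnet50" depending on your resources
    pretrained=False,          # random initialization for the baseline
    num_classes=1,             # single logit for the binary melanoma target
)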
You may choose one of the above methods to pretrain your model on the training set (feel free to use any SSL method).
Once you have pretrained your model and saved its weights with ‘torch.save’ and ‘model.state_dict()’, you will finetune the
model with the classification script in the last section. The only difference is that your model starts from the pretrained
weights instead of random initialization.
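A minimal sketch of this save-then-finetune workflow (file names are illustrative; ‘strict=False’ is one way to tolerate SSL-specific modules, such as an MAE decoder, that are absent from the classifier):

import torch

# After SSL pretraining: keep only the weights.
torch.save(model.state_dict(), "ssl_pretrained.pth")

# At finetuning time: rebuild the same backbone and load the pretrained weights.
state = torch.load("ssl_pretrained.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing keys:", missing)       # e.g. a freshly initialized classification head
print("unexpected keys:", unexpected) # e.g. SSL-only modules to discard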
Guideline
• Load and preprocess the dataset. Optional: accelerate your data loading with FFCV.
• Determine your backbone network. Popular architectures include ResNet [7] and ViT [4]. Choose a model size appropriate to your computational resources.
• Train your model with random initialization to classify melanoma to obtain a baseline (see the training and evaluation sketch after this list).
• Pretrain your model to obtain pretrained weights.
• Finetune your model with the pretrained weights to obtain an improved model.
• Evaluate the baseline model and the improved model on the test set to generate ‘submission.csv’ files, and submit the files through Kaggle to get the final scores.
• Tips: use fewer epochs or a lower resolution to accelerate training if you are short of GPU resources.
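As referenced in the list above, here is a minimal training and evaluation sketch, assuming a single-logit binary head as in the earlier snippets (function names are illustrative):

import torch
from sklearn.metrics import roc_auc_score

def train_one_epoch(model, loader, optimizer, device):
    """One epoch of binary classification training with BCE on a single logit."""
    model.train()
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for images, targets in loader:
        images, targets = images.to(device), targets.float().to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images).squeeze(1), targets)
        loss.backward()
        optimizer.step()

@torch.no_grad()
def validation_auc(model, loader, device):
    """Area under the ROC curve on a held-out split, the competition metric."""
    model.eval()
    scores, labels = [], []
    for images, targets in loader:
        scores.extend(torch.sigmoid(model(images.to(device))).squeeze(1).cpu().tolist())
        labels.extend(targets.tolist())
    return roc_auc_score(labels, scores)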
Discuss your Observations
It is important to describe or explain which project you refer to, what kind of modifications you made, and why. It is
common for your reproduced results to be lower than the original ones, especially when you have insufficient GPUs. Performance is
not the only criterion; the effort you present in the report matters more. You need to explain the mechanism
of the SSL method you apply and study the key factors that improve pretraining.
Extra credit
Extra credit will be awarded for performing additional tasks related to the main recognition task.
These additional tasks might include, but are not limited to, visualisation, interface design, etc.
MARKING CRITERIA
The project will be assessed with a 50% weight (12.5 marks) given to the technical report, a 30% weight (7.5 marks) to
functionality, and a 20% weight (5 marks) to code quality, according to the following criteria:
REPORT QUALITY [50 marks]
Whether the results are well presented and discussed.
In particular:
• Is the report well written and clear?
• Is the report well structured?
• Are the figures clear and well explained?
• Does the report provide a clear explanation of what has been done to solve the problem?
• Is there a sufficient discussion regarding observations on the produced results?
The distribution of the marks within the report is as follows:
• Abstract: 10 marks,
• Introduction: 10 marks,
• Literature (minimum 5 papers): 15 marks,
• Methodology: 25 marks,
• Experiments: 30 marks,
• Conclusion and future work: 10 marks
FUNCTIONALITY [30 marks]
Whether the submitted program performs as specified.
In particular, did the code implement all the steps specified in the previous sections?
CODE QUALITY [20 marks]
Quality and efficiency of the coding, including appropriate use of documentation.
In particular:
• Is the code efficient?
• Is the code extensible and maintainable?
• Is the code clear and well documented?
References
[1] Sara Atito, Muhammad Awais, and Josef Kittler. SiT: Self-supervised vIsion Transformer. arXiv preprint arXiv:2104.03602, 2021.
[2] Sara Atito, Muhammad Awais, and Josef Kittler. GMML is All you Need. arXiv preprint arXiv:2205.14986, 2022.
[3] Elijah Cole, Xuan Yang, Kimberly Wilber, Oisin Mac Aodha, and Serge Belongie. When Does Contrastive Visual
Representation Learning Work? In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), pages 01–10, New Orleans, LA, USA, June 2022. IEEE.
[4] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and others. An image is worth 16x16 words:
Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[5] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked Autoencoders Are
Scalable Vision Learners. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),
pages 15979–15988, New Orleans, LA, USA, June 2022. IEEE.
[6] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum Contrast for Unsupervised
Visual Representation Learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition,
CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726–9735. IEEE, 2020.
[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.