An evaluation of self-supervised pre-training for skin-lesion analysis

Abstract

Self-supervised pre-training has emerged as an advantageous alternative to supervised pre-training for transfer learning. By synthesizing annotations for pretext tasks, self-supervision allows models to be pre-trained on large amounts of pseudo-labeled data before fine-tuning them on the target task. In this work, we assess self-supervision for diagnosing skin lesions, comparing three self-supervised pipelines to a challenging supervised baseline on five test datasets comprising in- and out-of-distribution samples. Our results show that self-supervision is competitive both in improving accuracy and in reducing the variability of outcomes. Self-supervision proves particularly useful in low-data scenarios (<1,500 and <150 training samples), where its ability to stabilize outcomes is essential for sound results.
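
To make the pretrain-then-fine-tune recipe described above concrete, here is a minimal sketch in PyTorch. It uses rotation prediction as an illustrative pretext task; the paper's actual self-supervised pipelines are not specified here, and the backbone choice, placeholder data loaders, and hyperparameters are all assumptions, not the authors' setup.

```python
# Sketch: self-supervised pre-training (rotation pretext) then fine-tuning.
# Illustrative only -- the specific pretext task, backbone, and loaders
# are assumptions, not the pipelines evaluated in the paper.

import torch
import torch.nn as nn
import torchvision.models as models

def rotate_batch(images):
    """Synthesize pseudo-labels: rotate each image by 0/90/180/270 degrees."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels

# Placeholder data; replace with real unlabeled and labeled DataLoaders.
unlabeled_loader = [torch.randn(8, 3, 224, 224)]
lesion_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 8, (8,)))]
num_lesion_classes = 8  # hypothetical number of diagnostic classes

# Stage 1: self-supervised pre-training on pseudo-labeled (rotated) images.
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # 4 rotation classes
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

for images in unlabeled_loader:
    inputs, pseudo_labels = rotate_batch(images)
    loss = criterion(backbone(inputs), pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Stage 2: fine-tune the pre-trained backbone on the labeled target task.
backbone.fc = nn.Linear(backbone.fc.in_features, num_lesion_classes)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-5)

for images, labels in lesion_loader:
    loss = criterion(backbone(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```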

Publication
In: ISIC Skin Image Analysis Workshop at ECCV’22
Date