Brief Review — Self-Supervised Learning for Medical Image Analysis Using Image Context Restoration
Learning Representation by Context Restoration (CR)
Self-Supervised Learning for Medical Image Analysis Using Image Context Restoration,
Context Restoration, by Imperial College London, Nagoya University, Aichi Cancer Centre, and Nagoya University Hospital
2019 JMIA, Over 200 Citations (Sik-Ho Tsang @ Medium)
Self-Supervised Learning, Medical Image Analysis, Image Classification, Object Detection, Image Segmentation
Outline
- Context Restoration (CR)
- Results
1. Context Restoration (CR)
- Given an image xi, two isolated (non-overlapping) small patches in xi are randomly selected and swapped. This process is repeated T times, resulting in the corrupted image ˜xi, as shown above.
- A CNN is then trained to restore the original image xi from ˜xi, i.e., to restore the image context.
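The corruption step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact implementation: the patch size and the number of swaps T are illustrative hyperparameters.

```python
import numpy as np

def corrupt_image(image, num_swaps=10, patch_size=8, rng=None):
    """Corrupt an image by repeatedly swapping two non-overlapping patches.

    Sketch of the context-disordering step: each swap only rearranges
    pixels, so the overall intensity distribution is preserved while the
    spatial context is broken.
    """
    rng = np.random.default_rng() if rng is None else rng
    corrupted = image.copy()
    h, w = corrupted.shape[:2]
    for _ in range(num_swaps):
        # Draw two top-left corners until the two patches do not overlap.
        while True:
            y1, x1 = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
            y2, x2 = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
            if abs(y1 - y2) >= patch_size or abs(x1 - x2) >= patch_size:
                break
        patch1 = corrupted[y1:y1 + patch_size, x1:x1 + patch_size].copy()
        corrupted[y1:y1 + patch_size, x1:x1 + patch_size] = \
            corrupted[y2:y2 + patch_size, x2:x2 + patch_size]
        corrupted[y2:y2 + patch_size, x2:x2 + patch_size] = patch1
    return corrupted
```

Because only non-overlapping swaps are applied, the corrupted image contains exactly the same pixel values as the original, just in a different spatial arrangement.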
- In the analysis part, the architecture is similar to that of the VGGNet.
- In the reconstruction part, CNN structures could vary depending on subsequent task type.
- For subsequent classification tasks, simple structures, such as a few deconvolution layers (2nd row), are preferred.
- For subsequent segmentation tasks, the reconstruction part is symmetric to the analysis part and uses concatenation (skip) connections, similar to a U-Net.
- The L2 loss between the restored image and the original image is used for training.
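The segmentation-style variant described above can be sketched in PyTorch. This is a minimal sketch under stated assumptions, not the paper's exact network: channel widths, depth, and the nearest-neighbour upsampling are illustrative, while the overall shape (VGG-like analysis part, symmetric reconstruction part with concatenation connections, L2 restoration loss) follows the description above.

```python
import torch
import torch.nn as nn

class ContextRestorationNet(nn.Module):
    """Sketch of a context-restoration CNN with a U-Net-like decoder.

    Analysis part: VGG-like conv + pooling stacks.
    Reconstruction part: symmetric upsampling with skip concatenation.
    Channel widths and depth are illustrative, not the paper's values.
    """

    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Sequential(
            nn.Conv2d(32 + 16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, in_ch, 3, padding=1),
        )

    def forward(self, x):
        s1 = self.enc1(x)              # skip feature at full resolution
        z = self.enc2(self.pool(s1))   # bottleneck at half resolution
        u = self.up(z)                 # back to full resolution
        return self.dec(torch.cat([u, s1], dim=1))  # concatenation connection

net = ContextRestorationNet()
# L2 (mean squared error) loss between the restored and the original image.
loss_fn = nn.MSELoss()
corrupted = torch.randn(2, 1, 32, 32)   # stand-in for ˜xi
original = torch.randn(2, 1, 32, 32)    # stand-in for xi
restored = net(corrupted)
loss = loss_fn(restored, original)
```

For a classification-oriented variant, the decoder here would simply be replaced by a few deconvolution layers, since the pretrained analysis part is all that is reused downstream.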
2. Results
2.1. 2D Ultrasound Image Classification
- In practice, SonoNet-64 (Baumgartner et al., 2017) is used.
Context restoration pretraining improves the SonoNet performance the most, suggesting that, in this case, context restoration pretraining is the most useful for image classification.
2.2. Abdominal Multi-Organ Localization
- The CNN for multi-organ localization task is similar to the SonoNet (Baumgartner et al., 2017), but it has one more stack of convolution and pooling layers to reduce the output size.
Initialising with pretrained features, particularly those from the context restoration task, improves the CNN performance.
2.3. Brain Tumor Segmentation
U-Nets initialised with context restoration pretraining achieve the best overall performance.
Reference
[2019 JMIA] [Context Restoration]
Self-Supervised Learning for Medical Image Analysis Using Image Context Restoration