Review — MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
MoCo-CXR, MoCo v2 With Proper Data Augmentation
MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models, MoCo-CXR, by Stanford University
2021 MIDL, Over 40 Citations (Sik-Ho Tsang @ Medium)
Self-Supervised Learning, MoCo, Image Classification, Medical Image Classification
- MoCo-CXR is proposed, an adaptation of the contrastive learning method Momentum Contrast (MoCo v2), to produce models with better representations and initializations for the detection of pathologies in chest X-rays.
Outline
- Motivations
- MoCo-CXR
- Experimental Results
1. Motivations
1.1. Limited Usage of Self-Supervised Learning on Medical Images
- The application of contrastive learning to medical imaging settings has been limited.
1.2. X-Ray Images Differ From Natural Images
- Chest X-ray interpretation is fundamentally different from natural image classification:
- Disease classification may depend on abnormalities in a small number of pixels,
- X-ray images are larger in size, grayscale, and have similar spatial structures across images,
- There are far fewer (unlabeled) chest X-ray images than natural images.
1.3. Data Augmentation in MoCo
- Random crops and blurring may eliminate disease-covering parts from an augmented image.
- Color jittering and random grayscale would not produce meaningful transformations for already-grayscale images.
2. MoCo-CXR
- MoCo-CXR aims to generate views suitable for the chest X-ray interpretation task.
- Specifically, random rotation (10 degrees) and horizontal flipping are used (see the sketch after this list).
- Two backbones: ResNet18 and DenseNet121.
- The training pipeline is the same as that used on ImageNet, but with the consideration of label fractions, which is quite a specific condition in medical imaging.
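A minimal sketch (not the authors' released code) of what such a two-crop augmentation pipeline could look like in PyTorch/torchvision, assuming 224×224 inputs: the color jitter, random grayscale, and Gaussian blur of MoCo v2 are replaced by the rotation and horizontal flip described above.

```python
# Sketch of a MoCo-v2-style two-crop augmentation adapted for grayscale chest
# X-rays, as described in the text. Image size (224) is an assumption.
import torchvision.transforms as T

class TwoCropsTransform:
    """Apply the same augmentation twice to produce a query view and a key view."""
    def __init__(self, base_transform):
        self.base_transform = base_transform

    def __call__(self, x):
        return [self.base_transform(x), self.base_transform(x)]

# MoCo-CXR-style augmentation: small random rotation + horizontal flip,
# instead of color jitter / random grayscale / blur used for natural images.
cxr_augmentation = T.Compose([
    T.RandomRotation(degrees=10),
    T.RandomHorizontalFlip(p=0.5),
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
])

train_transform = TwoCropsTransform(cxr_augmentation)
```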
3. Experimental Results
3.1. MoCo-CXR vs ImageNet-Pretrained
- The AUC on the pleural effusion task for linear models with MoCo-CXR pretraining is consistently higher than the AUC of linear models with ImageNet pretraining (see the sketch at the end of this subsection).
MoCo-CXR pretraining produces higher-quality representations than ImageNet pretraining does.
- MoCo-CXR-pretrained models outperform their ImageNet-pretrained counterparts more at small label fractions than at larger label fractions.
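As an illustration of the linear evaluation protocol above, here is a hedged sketch, not the authors' code: freeze a MoCo-CXR-pretrained backbone and train only a linear head on the labeled fraction. The DenseNet-121 feature dimension (1024) and the single-logit pleural-effusion head are assumptions based on the setup described above.

```python
# Minimal linear-evaluation sketch (assumption: a torchvision DenseNet-121
# whose weights would be initialized from a MoCo-CXR checkpoint; the loading
# step is omitted here).
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.densenet121(weights=None)   # load MoCo-CXR weights here
backbone.classifier = nn.Identity()           # expose 1024-d pooled features

for p in backbone.parameters():               # freeze the pretrained encoder
    p.requires_grad = False

linear_head = nn.Linear(1024, 1)              # binary task, e.g. pleural effusion
optimizer = torch.optim.Adam(linear_head.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

def train_step(images, labels):
    with torch.no_grad():                     # features come from the frozen backbone
        feats = backbone(images)
    logits = linear_head(feats).squeeze(1)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```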
3.2. Transfer to External Shenzhen Dataset
- MoCo-CXR pretraining still introduces significant improvement even when the model is fine-tuned on an external dataset.
Unsupervised pretraining pushes the model towards solutions that generalize better to tasks in the same domain.
Reference
[2021 MIDL] [MoCo-CXR]
MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
Self-Supervised Learning
1993 … 2020 [CMC] [MoCo] [CPCv2] [PIRL] [SimCLR] [MoCo v2] [iGPT] [BoWNet] [BYOL] [SimCLRv2] [BYOL+GN+WS] 2021 [MoCo v3] [SimSiam] [DINO] [Exemplar-v1, Exemplar-v2] [MICLe] [Barlow Twins] [MoCo-CXR]
Biomedical Image Classification
2017 [ChestX-ray8] 2019 [CheXpert] 2020 [VGGNet for COVID-19] [Dermatology] 2021 [MICLe] [MoCo-CXR]