Review — A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification
A Realistic Benchmark for Semi-Supervised Learning Using Fine-Grained Classification Datasets
A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, Su CVPR’21, by University of Massachusetts Amherst
2021 CVPR, over 10 Citations (Sik-Ho Tsang @ Medium)
Semi-Supervised Learning, Image Classification, Fine-Grained Image Classification, Pseudo Label
- Class distribution can be highly imbalanced or even unknown, and the unlabeled data may contain novel classes. How effective is SSL in these situations?
- Is out-of-domain data beneficial when experts are available?
- In this paper, rather than proposing a novel approach, a realistic benchmark is applied to several semi-supervised learning (SSL) methods using fine-grained classification datasets that exhibit considerable class imbalance and contain images from novel classes.
Outline
- Realistic Datasets Using Fine-Grained Classification Datasets
- Semi-Supervised Learning Approaches for Evaluations
- Experimental Results & Analysis
1. Realistic Datasets Using Fine-Grained Classification Datasets
- Two fine-grained classification datasets are used. They are obtained by sampling classes under the Aves (birds) and Fungi taxonomy. The out-of-class images are other Aves (or Fungi) images not belonging to the classes within the labeled set.
- These datasets are built from the FGVC7 workshop and the FGVC Fungi Challenge, and are referred to here as Semi-Aves and Semi-Fungi.
- Each represents a 200-way classification task and the training set contains:
- labeled images from these classes (L_in),
- unlabeled images from these classes (U_in), and
- unlabeled out-of-class images from related classes (U_out).
- Moreover, the classes exhibit a long-tailed distribution with an imbalance ratio of 8 to 10 (a minimal sketch of this ratio follows below).
Compared with standard benchmarks such as CIFAR and SVHN, Semi-Aves and Semi-Fungi present a challenge due to the large number of classes, the presence of novel-class images in the unlabeled set, and the long-tailed class distribution indicated by the imbalance ratio.
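As a quick illustration, the imbalance ratio is simply the count of the most frequent class divided by that of the least frequent one. A minimal sketch in Python (the `labels` list is a hypothetical placeholder for the labeled set's class labels):

```python
from collections import Counter

def imbalance_ratio(labels):
    """Largest class count divided by smallest class count."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# For Semi-Aves / Semi-Fungi, this ratio is reported to be around 8 to 10.
```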
2. Semi-Supervised Learning Approaches for Evaluations
- Pseudo-Label, Curriculum Pseudo-Label, FixMatch, self-training using distillation, and self-supervised learning using MoCo, as well as their combinations, are considered.
- Pseudo-Label trains a model on the labeled data and then assigns labels to the unlabeled data (a minimal sketch follows after this list).
- Curriculum Pseudo-Label is similar to Pseudo-Label, but uses an iterative training process that re-trains the model from scratch at every iteration.
- In self-training using distillation, a teacher model is trained on the labeled data, and a student model is then trained by the teacher on both labeled and unlabeled data.
- FixMatch combines pseudo-labeling and consistency regularization (see the second sketch after this list).
- MoCo learns image representations without using labels; it is a self-supervised, rather than a semi-supervised, learning approach.
- MoCo+Self-Training is also considered where the teacher is first pretrained using MoCo.
- (Please feel free to read their stories if interested.)
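A rough sketch of the pseudo-labeling step described above, assuming PyTorch; the threshold value and loader behavior are illustrative assumptions, not the authors' exact setup:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_loader, threshold=0.95, device="cuda"):
    """Keep only unlabeled images whose max softmax probability exceeds
    the confidence threshold, labeled by the model's own prediction."""
    model.eval()
    kept_images, kept_labels = [], []
    for x in unlabeled_loader:          # assumed to yield image batches only
        x = x.to(device)
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= threshold        # discard low-confidence predictions
        kept_images.append(x[keep].cpu())
        kept_labels.append(pred[keep].cpu())
    return torch.cat(kept_images), torch.cat(kept_labels)
```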
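And a minimal sketch of the FixMatch-style loss on unlabeled data: a confident prediction on a weakly augmented view supervises the prediction on a strongly augmented view (the threshold and the augmentation inputs are assumptions here):

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    """The confident prediction on the weakly augmented view becomes
    the target for the strongly augmented view."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)   # weak view -> pseudo-label
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()        # only confident samples count
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (mask * loss).mean()
```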
3. Experimental Results & Analysis
- ResNet-50 with 224×224 images is used for all experiments. Hyperparameters are tuned individually for each approach.
- For transfer learning, models pre-trained on ImageNet and iNaturalist 2018 (iNat) are used (a setup sketch follows below).
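For instance, the ImageNet transfer-learning baseline could be set up roughly as follows with torchvision; the iNat checkpoint path is a hypothetical placeholder:

```python
import torch
import torchvision.models as models

# ImageNet-pretrained ResNet-50, re-headed for the 200 fine-grained classes
model = models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 200)

# For an iNaturalist 2018 "expert" model, load its weights instead
# (the checkpoint path below is a hypothetical placeholder):
# model.load_state_dict(torch.load("inat2018_resnet50.pth"), strict=False)
```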
- The two tables above show the accuracy for the two datasets. To better visualize the results, the relative gain of each SSL method, i.e., its difference from the supervised baseline in raw accuracy, is shown below:
3.1. Training From Scratch Using Only U_in
- Compared to the supervised baseline, Curriculum Pseudo-Label does not give improvements, and Pseudo-Label even underperforms the baseline. This is possibly due to the low initial accuracy of the model, which gets amplified during pseudo-labeling.
- FixMatch and Self-Training both result in improvements.
- Self-supervised learning (MoCo) gives a good initialization, and the improvements are similar to, or even larger than, those of FixMatch.
- Finally, Self-Training using the MoCo pre-trained model as the teacher results in a further 2–3% improvement.
3.2. Using Expert (Pre-Trained) Models With Only U_in
- Transfer learning from ImageNet or iNat pre-trained models, using U_in only, is considered.
- Most of the SSL methods, as well as MoCo pre-training, provide improvements over the baselines.
- The only exception is Pseudo-Label on Semi-Fungi. Among SSL methods, FixMatch and MoCo+Self-Training perform the best.
3.3. Effect of Out-of-Class Unlabeled Data (U_in + U_out)
- When using U_in + U_out with expert models, it is found that performance often drops in the presence of U_out.
- Curriculum Pseudo-Label and Self-Training are more robust, yielding less than a 1% decrease in most cases.
- FixMatch is less robust; its performance drops by around 6%.
- The performance of MoCo also drops by around 1–3% and is sometimes worse than the supervised baseline.
- Adding Self-Training, however, provides a 1–3% boost in performance.
Overall, Self-Training from either a supervised or a self-supervised model is the most robust approach.
3.4. Analysis
- Overall, the model is generally more uncertain about the out-of-class data, which often has higher entropy or a smaller maximum softmax probability (see the sketch after this list).
- The distillation loss on U_in is also often higher than that on U_out, suggesting the model focuses more on examples from U_in during training.
- However, a good amount of data from U_out still has a high maximum probability, which has a negative impact on pseudo-labeling methods.
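A small sketch of the two uncertainty measures used in this analysis, predictive entropy and maximum softmax probability, assuming `logits` holds the model's outputs on a batch of unlabeled images:

```python
import torch
import torch.nn.functional as F

def uncertainty_stats(logits):
    """Per-sample predictive entropy and maximum softmax probability.
    Out-of-class images tend to show higher entropy and lower max prob."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_prob = probs.max(dim=1).values
    return entropy, max_prob
```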
The authors hope that the proposed benchmarks and results can lead to new innovations in semi-supervised learning.
Reference
[2021 CVPR] [Su CVPR’21]
A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification
Pretraining or Weakly/Semi-Supervised Learning
2004 … 2019 [VAT] [Billion-Scale] [Label Propagation] [Rethinking ImageNet Pre-training] [MixMatch] [SWA & Fast SWA] [S⁴L] 2020 [BiT] [Noisy Student] [SimCLRv2] [UDA] [ReMixMatch] [FixMatch] 2021 [Curriculum Labeling (CL)] [Su CVPR’21]