Reading: Cutout — Improved Regularization of Convolutional Neural Networks (Image Classification)
In this story, Improved Regularization of Convolutional Neural Networks with Cutout (Cutout), by the University of Guelph, the Canadian Institute for Advanced Research, and the Vector Institute, is briefly presented.
- Due to the model capacity required to capture complex representations, deep networks are often susceptible to overfitting and therefore require proper regularization in order to generalize well.
- In this paper, a simple regularization technique called cutout, which randomly masks out square regions of the input during training, is proposed; it can be used to improve the robustness and overall performance of convolutional neural networks.
This is a paper in 2017 arXiv with over 500 citations. (Sik-Ho Tsang @ Medium)
- Cutout
- Differences from Dropout
- Experimental Results
1. Cutout
- The main motivation for cutout comes from the problem of object occlusion, which is commonly encountered in many computer vision tasks, such as object recognition, tracking, or human pose estimation.
- By generating new images which simulate occluded examples, we not only better prepare the model for encounters with occlusions in the real world, but the model also learns to take more of the image context into consideration when making decisions.
This technique encourages the network to better utilize the full context of the image, rather than relying on the presence of a small set of specific visual features.
- This method, cutout, can be interpreted as applying a spatial prior to dropout in input space, much in the same way that convolutional neural networks leverage information about spatial structure in order to improve performance over that of feed-forward networks.
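The masking step described above can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' released code; the function name and signature are hypothetical. Following the paper, the square's center is sampled uniformly over the image, so the patch may be clipped by the image border:

```python
import numpy as np

def cutout(image, size, rng=None):
    """Zero out a size x size square at a random location of an (H, W, C) image.

    The square's center is sampled uniformly over the image, so the masked
    region may extend past the border and get clipped. Returns a masked copy.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)       # random center
    y1, y2 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x1, x2 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()
    out[y1:y2, x1:x2] = 0                           # mask the square region
    return out
```

During training this would be applied to each image independently per epoch, so the network sees a different occluded view of each example every time.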
2. Differences from Dropout
- While cutout applies noise in a similar fashion to dropout, there are two important distinctions:
- The first difference is that units are dropped out only at the input layer of a CNN, rather than in the intermediate feature layers.
- The second difference is that we drop out contiguous sections of inputs rather than individual pixels.
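The two distinctions can be visualized with a toy NumPy comparison of the resulting noise patterns (illustrative only, assuming a single-channel input array):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((8, 8))  # stand-in for a single-channel input image

# Dropout-style noise: every unit is zeroed independently with probability p
# (and in a CNN this is typically applied at intermediate feature layers too).
p = 0.5
x_dropout = x * (rng.random(x.shape) >= p)

# Cutout-style noise: one contiguous square is zeroed, and only at the
# input layer -- the two distinctions listed above.
x_cutout = x.copy()
x_cutout[2:6, 2:6] = 0
```

Printed side by side, the dropout mask is scattered pixel noise while the cutout mask is a single solid block, which is what simulates occlusion.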
3. Experimental Results
3.1. CIFAR-10, CIFAR-100, SVHN
- The above figures depict the grid searches conducted on CIFAR-10 and CIFAR-100 respectively.
- Based on these validation results we select a cutout size of 16×16 pixels to use on CIFAR-10 and a cutout size of 8×8 pixels for CIFAR-100 when training on the full datasets.
- Similarly, for SVHN, to find the optimal size for the cutout region, a grid search is conducted using 10% of the training set for validation, and a cutout size of 20×20 pixels is ultimately selected.
- Cutout yields these performance improvements even when applied to complex models that already utilize batch normalization, dropout, and data augmentation.
- Cutout improves ResNet, WRN, and Shake-Shake.
- Adding cutout to the current state-of-the-art Shake-Shake regularization models improves performance by 0.3 and 0.6 percentage points on CIFAR-10 and CIFAR-100 respectively, yielding new state-of-the-art results of 2.56% and 15.20% test error.
- Applying cutout to WRN-16-8 on SVHN yields an average reduction in test error of 0.3 percentage points, resulting in a new state-of-the-art performance of 1.30% test error.
3.2. STL-10
- While the main purpose of the STL-10 dataset is to test semi-supervised learning algorithms, the authors use it to observe how cutout performs when applied to higher-resolution images in a low-data setting.
- For this reason, the unlabeled portion of the dataset is discarded and only the labeled training set is used.
- A grid search is performed over the cutout size parameter using 10% of the training images as a validation set; a square size of 24×24 pixels is selected for the no-data-augmentation case and 32×32 pixels when training STL-10 with data augmentation.
- Training the model using these values yields a reduction in test error of 2.7 percentage points in the no data augmentation case, and 1.5 percentage points when also using data augmentation.
3.3. Analysis of Cutout’s Effect on Activations
- The shallow layers of the network experience a general increase in activation strength, while in deeper layers, we see more activations in the tail end of the distribution.
- The latter observation illustrates that cutout is indeed encouraging the network to take into account a wider variety of features when making predictions, rather than relying on the presence of a smaller number of features.
[2017 arXiv] [Cutout]
Improved Regularization of Convolutional Neural Networks with Cutout