Review — SFA & SFGN: Simplified-Fast-GoogleNet (Blur Classification)
Blur Classification Using Ensemble of Simplified-Fast-GoogleNet (SFGN) and Simplified-Fast-AlexNet (SFA)
In this story, blur image identification with an ensemble of convolutional neural networks (SFA & SFGN), by Beihang University and the University of Connecticut, is reviewed.
Blur image type classification is essential to blur image recovery.
In this paper:
- Simplified-Fast-AlexNet (SFA) and Simplified-Fast-GoogleNet (SFGN), are designed.
- Ensemble of SFA and SFGN is used for blur classification (Gaussian blur, motion blur, defocus blur and haze blur).
This is a paper in 2019 JSP with a high impact factor of 4.662. This paper is an extension of SFA in 2017 IST. (Sik-Ho Tsang @ Medium)
- Simplified-Fast-AlexNet (SFA): Network Architecture
- Simplified-Fast-GoogleNet (SFGN): Network Architecture
- Ensemble of SFA and SFGN
- Overall Framework
- Experimental Results
1. Simplified-Fast-AlexNet (SFA): Network Architecture
- The architecture is similar to the SFA in 2017 IST.
- The only difference is that ReLU is replaced by Leaky ReLU (LReLU).
- (If interested, please feel free to read SFA in 2017 IST.)
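As a minimal sketch of the activation swap (the paper does not state the negative slope; `alpha` below is an assumed value), LReLU passes positives unchanged and scales negatives by a small factor instead of zeroing them:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: identity for positive inputs, small slope alpha for negatives."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x)
```

Unlike plain ReLU, negative inputs keep a small nonzero gradient, which helps avoid "dead" units during training.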
3. Ensemble of SFA and SFGN
- The classification accuracies of SFA and SFGN are denoted as C1 and C2, respectively.
- The corresponding weights of SFA and SFGN are defined as Weight1 = C1/(C1 + C2) and Weight2 = C2/(C1 + C2), respectively.
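The weighting rule above can be sketched as follows. The weight formula is from the paper; the class-probability vectors in the usage line are hypothetical values for illustration:

```python
import numpy as np

def ensemble_predict(p_sfa, p_sfgn, c1, c2):
    """Combine the two networks' class-probability vectors using
    accuracy-proportional weights: Weight_i = C_i / (C1 + C2)."""
    w1 = c1 / (c1 + c2)
    w2 = c2 / (c1 + c2)
    return w1 * np.asarray(p_sfa, dtype=float) + w2 * np.asarray(p_sfgn, dtype=float)

# Hypothetical probability vectors over (Gaussian, motion, defocus, haze):
p = ensemble_predict([0.5, 0.2, 0.2, 0.1], [0.1, 0.6, 0.2, 0.1],
                     c1=96.99, c2=98.12)
```

Since the two weights sum to 1, the combined output remains a valid probability distribution over the four blur types.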
4. Overall Framework
- For an image that is locally blurred, a number of patches, each being globally blurred, are extracted from the original image and are classified by weighted SFA and SFGN. The overall blur type of the original image is then determined based on the output of the ensemble classifier.
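A sketch of this patch-wise pipeline, assuming a 128×128 patch size (matching the training patches) and a simple sum-of-probabilities aggregation; the paper does not spell out the exact aggregation rule, so this is an illustrative assumption:

```python
import numpy as np

def classify_blur_type(img, predict_patch, patch=128, stride=128, n_classes=4):
    """Extract patches from a locally blurred image, classify each with the
    ensemble, and pick the blur type with the largest accumulated probability."""
    votes = np.zeros(n_classes)
    h, w = img.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            votes += predict_patch(img[y:y + patch, x:x + patch])
    return int(np.argmax(votes))
```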
- An improved SLIC super-pixel segmentation method is used to extract blurred areas from the blurred images, forming a real-blur dataset that contains only globally blurred patches.
- In brief, the original SLIC considers color and spatial distance to obtain superpixels.
- The modified SLIC method also considers the blur feature distance.
- The information entropy and SVD ratio are also considered to select the purely blur image patches.
- (If interested, please feel free to read the paper directly.)
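The two patch-selection criteria can be sketched as below. The exact definitions and thresholds used in the paper may differ, so treat this as an illustrative assumption: entropy measures texture richness, and the SVD ratio measures how much singular-value energy is concentrated in the leading components (blurred patches concentrate more):

```python
import numpy as np

def information_entropy(gray):
    """Shannon entropy of the grayscale histogram (higher = richer texture)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def svd_ratio(gray, k=10):
    """Fraction of singular-value energy held by the top-k singular values."""
    s = np.linalg.svd(np.asarray(gray, dtype=float), compute_uv=False)
    return float(s[:k].sum() / s.sum())
```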
5.1. Training Dataset
- Similar to SFA, Gaussian blur, motion blur and defocus blur are synthesized. But in this paper, haze blur is also synthesized.
- 200,000 128×128×3 simulated global blur patches are used for training.
- 62,000 real/natural blur patches are obtained from online websites.
- All four blur types are uniformly distributed.
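A minimal sketch of how the four blur types could be synthesized: convolution kernels for Gaussian, motion, and defocus blur, and a simple atmospheric-scattering model for haze. Kernel sizes, sigma, and the haze parameters below are assumed values, not the paper's settings:

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def motion_kernel(size=9):
    """Horizontal motion blur: uniform averaging along one line."""
    k = np.zeros((size, size))
    k[size // 2, :] = 1.0
    return k / k.sum()

def defocus_kernel(radius=4):
    """Disk (pillbox) kernel modelling out-of-focus blur."""
    size = 2 * radius + 1
    ax = np.arange(size) - radius
    xx, yy = np.meshgrid(ax, ax)
    k = (xx**2 + yy**2 <= radius**2).astype(float)
    return k / k.sum()

def hazy(img, airlight=0.9, t=0.6):
    """Haze via the atmospheric-scattering model: I' = I*t + A*(1 - t)."""
    return np.asarray(img, dtype=float) * t + airlight * (1.0 - t)
```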
5.2. Testing Dataset 1
- Images from the Berkeley dataset and the Pascal VOC 2007 dataset are selected as the testing dataset.
- In total, 21,000 globally blurred test patches are obtained, of which 5,560 haze blur patches share the same sources as the training samples.
5.3. Testing Dataset 2
- A dataset consisting of 13,810 natural globally blurred image patches is constructed. The samples are all collected from the same websites as the haze blur samples in the training dataset.
6. Experimental Results
6.1. The Integrated CNN Performance
- P_N is the number of model parameters, L_N is the model depth, F_T is the forward-propagation time, B_T is the backward-propagation time, CLF_T is the average time required to classify a single image, Tr_T is the model training time, and Error denotes the classification error rate on testing dataset 1.
- F_T of the different models is of the same order of magnitude.
- B_T, in contrast, differs dramatically across models.
6.2. SOTA Comparison
- The classification accuracies of the two-step method, the single-layered NN, and the DNN included in the table are those reported in their respective references. (The datasets differ, but re-implementation is understandably difficult.)
- The prediction accuracy (> 90%) of learned-feature-based methods is generally superior to that (< 90%) of methods that use handcrafted features.
- The classification accuracy of SFA on the simulated testing dataset is 96.99%, slightly lower than AlexNet's 97.74%.
- Nevertheless, it is still better than the DNN model's 95.2%.
- The classification accuracy of SFGN is 98.12%, which outperforms SFA but falls short of the ensemble classifier's 98.89%.
- In addition, the classification accuracies of SFA, SFGN, and the ensemble classifier on the real/natural blur datasets are 93.75%, 95.81%, and 96.72%, respectively.