Review — Blur Classification Using Wavelet Transform and Feed Forward Neural Network (Blur Classification)

Blur Classification Using Wavelet + Neural Network

Sik-Ho Tsang
4 min read · Jun 12, 2021
(Figure: Blur Classification Framework)

In this story, Blur Classification Using Wavelet Transform and Feed Forward Neural Network (Tiwari IJMECS’14), by Mody Institute of Technology & Science, is briefly reviewed. In this paper:

  • Features are extracted in the wavelet domain and fed into a feed-forward neural network for blur classification.

This is a paper in 2014 IJMECS. (Sik-Ho Tsang @ Medium)

Outline

  1. Preprocessing
  2. Feature Extraction Using Wavelet
  3. Neural Network
  4. Experimental Results

1. Preprocessing

(Figure: Grayscale image after the 2D Hanning window)
  • First, the color image obtained by the digital camera is converted into an 8-bit grayscale image.
  • The 2D Hanning window gives a good trade-off between forming a smooth transition towards the image borders and retaining enough image information in the power spectrum.
  • A centered portion of size 128×128 is cropped (see the sketch after this list).
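The preprocessing above can be sketched in Python with NumPy. This is only a rough illustration: the function name, the luminance weights, and the ordering (the paper lists the window before the crop; here the window is applied to the cropped patch) are assumptions, not details from the paper.

```python
import numpy as np

def preprocess(rgb, crop_size=128):
    """Grayscale conversion, centered crop, and 2D Hanning window (sketch, not the paper's exact code)."""
    # Grayscale via standard luminance weights (assumed; the paper only says "8-bit grayscale")
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    h, w = gray.shape
    top, left = (h - crop_size) // 2, (w - crop_size) // 2
    patch = gray[top:top + crop_size, left:left + crop_size]          # centered 128x128 portion
    window = np.outer(np.hanning(crop_size), np.hanning(crop_size))   # separable 2D Hanning window
    return patch * window                                             # smooth roll-off at the borders
```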

2. Feature Extraction Using Wavelet

  • The MATLAB Wavelet Toolbox is used for the wavelet transform, with decomposition level 3.
  • The Haar wavelet is chosen because it is theoretically straightforward and precisely reversible without edge effects.
  • The Haar transform uses non-overlapping windows, so it reflects only changes between adjacent pairs of pixels.
  • The energy of an approximation image, i.e. LL, is generally not considered as a feature.
  • The mean and standard deviation of the coefficients associated with each detail image are calculated.
  • The mean of a detail image Ii is calculated as μi = Ei / (M×N), where M×N is the size of the detail image and the energy Ei is the sum of the absolute values of its wavelet coefficients, Ei = Σx Σy |Ii(x, y)|.
  • The standard deviation of a detail image is calculated as σi = sqrt( (1/(M×N)) Σx Σy ( |Ii(x, y)| − μi )² ).
  • So a feature vector of size 18 is obtained, consisting of 9 mean energies and 9 standard deviations: f = [μ1, …, μ9, σ1, …, σ9] (see the sketch after this list).
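A minimal sketch of this feature extraction, using the PyWavelets package in place of the MATLAB toolbox. Whether the standard deviation is taken over the raw or the absolute coefficients is an assumption here; the sketch uses absolute values to stay consistent with the energy-based mean.

```python
import numpy as np
import pywt

def wavelet_features(img, wavelet="haar", level=3):
    """Mean energy and standard deviation of the 9 detail images (3 orientations x 3 levels);
    the LL approximation image is not used as a feature."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)   # [LL, (LH, HL, HH) per level]
    means, stds = [], []
    for details in coeffs[1:]:                                  # skip the LL approximation image
        for sub in details:                                     # the three detail images of one level
            m, n = sub.shape
            energy = np.abs(sub).sum()                          # Ei: sum of absolute coefficients
            mean = energy / (m * n)                             # mean energy, mu_i = Ei / (M*N)
            std = np.sqrt(((np.abs(sub) - mean) ** 2).mean())   # spread around the mean energy
            means.append(mean)
            stds.append(std)
    return np.array(means + stds)                               # 18-D vector: 9 means, then 9 std devs
```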

3. Neural Network

  • 350 images from a QR code image database [23] are used.
  • Three classes of blur, i.e. motion, defocus and joint blur, were synthetically introduced with different parameters to build a database of 1050 blurred images (i.e., 350 images for each class of blur).
  • A three-layer neural network was created, with 18 nodes in the input layer (matching the size of the input feature vector), 1 to 50 nodes in the hidden layer, and 3 nodes in the output layer (i.e. one node for each class).
  • The whole training and testing feature set is normalized into the range [0, 1].
  • For cross-validation, the whole feature set is divided randomly into training, validation and test sets with ratios of 0.5, 0.2 and 0.3, respectively.
  • Finally, 10 nodes in the hidden layer were selected for the final simulation (see the sketch after this list).
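The training setup could look like the scikit-learn sketch below. The paper used a MATLAB feed-forward network, so the solver, iteration count, stratified splitting and random seeds here are assumptions; the placeholder features stand in for the 18-D wavelet feature vectors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: 1050 samples of 18 wavelet features, labels 0=motion, 1=defocus, 2=joint blur
rng = np.random.default_rng(0)
X = rng.random((1050, 18))              # in practice: wavelet_features() output per image
y = np.repeat([0, 1, 2], 350)

# Normalize the whole feature set into [0, 1] (per-feature min-max scaling)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# Random 0.5 / 0.2 / 0.3 split into training, validation and test sets
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.5, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, train_size=0.4, stratify=y_rest, random_state=0)

# 18 -> 10 -> 3 feed-forward network (scikit-learn stands in for the MATLAB toolbox used in the paper)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
print("test accuracy:", clf.score(X_test, y_test))
```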

4. Experimental Results

(Figure: Blur classification results)
  • For each blur category, 350 positive and 700 negative samples were used for testing (see the sketch after this list).
  • Testing gives classification accuracies of 99.3%, 99.7%, and 100% for the motion, defocus and joint blur categories, respectively.
  • These classification accuracies show the high precision of the proposed method.
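For reference, per-class accuracy over 350 positive and 700 negative test samples can be read off a one-vs-rest confusion matrix. Treating each class's accuracy as (TP + TN) / (P + N) is my reading of the reported numbers, not a formula stated in the paper.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_accuracy(y_true, y_pred, n_classes=3):
    """One-vs-rest accuracy per blur class: (TP + TN) / total for each class (assumed metric)."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    total = cm.sum()
    accuracies = []
    for c in range(n_classes):
        tp = cm[c, c]
        tn = total - cm[c, :].sum() - cm[:, c].sum() + tp   # everything not in class c's row or column
        accuracies.append((tp + tn) / total)
    return accuracies   # one value each for motion, defocus and joint blur
```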

Later, Tiwari IJEEI’17 outperforms the method in this story.

