Brief Review — An automated snoring sound classification method based on local dual octal pattern and iterative hybrid feature selector
DWT + LDOP + RFINCA + kNN on MPSSC
An automated snoring sound classification method based on local dual octal pattern and iterative hybrid feature selector
DWT + LDOP + RFINCA + kNN, by Firat University
2021 J. BSPC (Sik-Ho Tsang @ Medium)
Snore Sound Classification
2017 [INTERSPEECH 2017 Challenges: Addressee, Cold & Snoring] 2018 [MPSSC] [AlexNet & VGG-19 for Snore Sound Classification] 2019 [CNN for Snore] 2020 [Snore-GAN]
==== My Healthcare and Medical Related Paper Readings ====
==== My Other Paper Readings Are Also Over Here ====
- Multilevel discrete wavelet transform (DWT) decomposition and LDOP-based feature generation are used for feature extraction.
- Informative feature selection is performed by ReliefF and iterative neighborhood component analysis (RFINCA).
- Classification is performed using k nearest neighbors (kNN).
Outline
- DWT + LDOP + RFINCA + kNN
- Results
1. DWT + LDOP + RFINCA + kNN
1.1. Discrete Wavelet Transform (DWT)
Step 0: First, load the snoring sound (SS).
Step 1: Then, apply a 7-level DWT to the SS using the Symlet 8 (sym8) mother wavelet. This decomposition yields 8 sets of wavelet coefficients (7 detail subbands plus 1 approximation subband).
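A minimal sketch of this step using PyWavelets is shown below; the input segment is a random placeholder, since the paper's preprocessing details are not covered here.

```python
# A minimal sketch of Step 1 with PyWavelets (assumed library choice).
import numpy as np
import pywt

def dwt_subbands(signal, wavelet="sym8", level=7):
    """7-level DWT with the Symlet 8 wavelet: returns 8 coefficient
    arrays [cA7, cD7, cD6, ..., cD1] (1 approximation + 7 details)."""
    return pywt.wavedec(signal, wavelet, level=level)

ss = np.random.randn(16000)   # placeholder snoring-sound segment
subbands = dwt_subbands(ss)
print(len(subbands))          # 8
```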
1.2. Local Dual Octal Pattern (LDOP)
Step 2: Generate features using LDOP.
- LDOP is a one-dimensional feature generation function. Each pattern uses two octal (8-sample) blocks and one center value, so overlapping blocks of 17 samples (8 + 1 + 8) are used for feature generation.
- In brief, each octal block produces an 8-bit code, so LDOP extracts a 256-bin left histogram and a 256-bin right histogram; concatenating them yields a feature vector of size 512 per subband (see the sketch after this list).
Step 3: As there are 8 wavelet coefficient sets, eight 512-dimensional feature vectors are concatenated, and an 8 × 512 = 4096-dimensional feature vector fv is obtained.
Step 4: Apply min-max normalization to fv.
- (Please read the paper for the details of LDOP.)
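The paper defines LDOP with its own kernels; the sketch below is a simplified stand-in in which each of the 16 neighbors is compared to the window's center value by a plain greater-than test. It only illustrates the windowing, the two 8-bit codes, the 512-bin histogram, and Steps 3–4; the function names and comparison rule are assumptions, not the paper's exact definition.

```python
# Simplified stand-in for Steps 2-4; the real LDOP kernels are in the paper.
import numpy as np

def ldop_histograms(x):
    """Slide a 17-sample window (8 left + 1 center + 8 right) over a 1-D
    signal; each octal block gives an 8-bit code feeding a 256-bin histogram."""
    left_hist, right_hist = np.zeros(256), np.zeros(256)
    weights = 2 ** np.arange(8)                      # bit weights for the codes
    for i in range(len(x) - 16):
        block = x[i:i + 17]
        center = block[8]
        left_code = int((block[:8] > center) @ weights)
        right_code = int((block[9:] > center) @ weights)
        left_hist[left_code] += 1
        right_hist[right_code] += 1
    return np.concatenate([left_hist, right_hist])   # 512 features per subband

def extract_fv(subbands):
    """Step 3: concatenate 8 x 512 = 4096 features; Step 4: min-max normalize."""
    fv = np.concatenate([ldop_histograms(c) for c in subbands])
    return (fv - fv.min()) / (fv.max() - fv.min() + 1e-12)
```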
1.3. ReliefF and Iterative Neighborhood Component Analysis (RFINCA)
- Step 5: Calculate ReliefF weights for the features in X using the ReliefF function with the target (actual class) labels. X^P denotes the positively weighted features selected by ReliefF.
- Step 6: Apply NCA to X^P and compute the sorted indices of the features. The NCA weights are learned using a Manhattan-distance-based fitness function and an optimization method.
- Step 7: Run the iterative feature selection procedure and calculate the loss value of each candidate feature set. In this step, the number of selected features is restricted to a range, chosen as 40 to 540, to decrease computational cost. The minimum-loss feature set is selected as optimal (a sketch of this loop follows below).
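A sketch of the iterative selection loop is given below, with two stand-ins for brevity: the third-party skrebate package supplies the ReliefF weights, and features are ranked by those weights directly in place of the paper's NCA ranking; the loss is assumed to be the 10-fold kNN misclassification rate, which may differ from the paper's loss function.

```python
# Sketch of RFINCA-style selection (Steps 5-7) under the stated assumptions.
import numpy as np
from skrebate import ReliefF
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def rfinca_select(X, y):
    # Step 5: ReliefF weights; keep only positively weighted features (X^P).
    rf = ReliefF(n_neighbors=10)
    rf.fit(X, y)
    pos = np.where(rf.feature_importances_ > 0)[0]

    # Step 6 (stand-in): sort the surviving features by weight, descending.
    order = pos[np.argsort(rf.feature_importances_[pos])[::-1]]

    # Step 7: evaluate top-k subsets for k in [40, 540]; keep the minimum loss.
    best_k, best_loss = None, np.inf
    for k in range(40, 541):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, order[:k]], y, cv=10).mean()
        if 1.0 - acc < best_loss:
            best_k, best_loss = k, 1.0 - acc
    return order[:best_k]
```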
1.4. k nearest neighbors (kNN)
- Step 8: Classify the finally selected features using the kNN classifier.
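Step 8 can be sketched with scikit-learn as below; the kNN hyperparameters (k = 1, Manhattan distance) and the placeholder arrays standing in for RFINCA output are assumptions, not confirmed settings from the paper.

```python
# A sketch of Step 8 under the assumptions stated above.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train_sel = np.random.randn(60, 100)    # hypothetical RFINCA-selected features
y_train = np.random.randint(0, 4, 60)     # MPSSC's four snore classes (V/O/T/E)
X_test_sel = np.random.randn(20, 100)

knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
knn.fit(X_train_sel, y_train)
pred = knn.predict(X_test_sel)
```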
2. Results
- The MPSSC dataset is used.
- Table 5 shows that the proposed SSC method reaches an unweighted average recall (UAR) approximately 22% higher than the best of the compared methods.
- It also achieves higher classification rates than deep learning methods, without requiring millions of parameters.
- Besides, the authors claim that the proposed method has the benefit of being lightweight owing to its use of handcrafted features.
- (Please feel free to read the paper directly for more details about the methods and experimental results.)