Brief Review — Heart sound classification using signal processing and machine learning algorithms

RF/SVM+GA

Sik-Ho Tsang
3 min read · Dec 16, 2023
Types of Heart Sounds

Heart sound classification using signal processing and machine learning algorithms
RF/SVM+GA, by Sharif University of Technology
2022 J. MLWA, Over 20 Citations (Sik-Ho Tsang @ Medium)

Heart Sound Classification
2013 … 2022 [CirCor Dataset] [CNN-LSTM] [DsaNet] [Modified Xception] [Improved MFCC+Modified ResNet] [Learnable Features + VGGNet/EfficientNet] [DWT + SVM] [MFCC+LSTM] [DWT+ 1D-CNN] [CNN+Attention] 2023 [2LSTM+3FC, 3CONV+2FC] [NRC-Net] [Log-MelSpectrum+Modified VGGNet]
==== My Other Paper Readings Are Also Over Here ====

  • The desired features are first extracted from the retrieved data using signal processing algorithms.
  • Next, feature selection algorithms are used to select the most informative features and reduce the problem’s dimensionality.
  • Finally, some of the most popular classification algorithms are utilized for heart sound classification.

Outline

  1. Dataset, Preprocessing, Feature Extraction & ML Methods
  2. Results

1. Dataset, Preprocessing, Feature Extraction & ML Methods

1.1. Dataset

1.2. Overall Flowchart

Overall flowchart
  • The overall flowchart is shown above.

1.3. Preprocessing

Noisy Heart Sound

The noise is eliminated using a Savitzky–Golay filter, as sketched below.

  • Each of the four heart sounds described previously is divided into five separate ranges, each with an average duration of 0.16 s.
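
A minimal denoising sketch is given below, assuming a 1-D heart sound recording and SciPy’s savgol_filter; the sampling rate, window length, and polynomial order are illustrative assumptions, not values reported in the paper.

```python
# Minimal denoising sketch, assuming a 1-D heart sound signal; the sampling
# rate, window length, and polynomial order are illustrative, not the paper's values.
import numpy as np
from scipy.signal import savgol_filter

fs = 2000                                         # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
noisy_pcg = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)  # stand-in signal

# Smooth the recording; window_length must be odd and larger than polyorder.
denoised_pcg = savgol_filter(noisy_pcg, window_length=31, polyorder=3)
```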

1.4. Feature Extraction

Feature Extraction
  • Statistical Features: The statistical features include standard deviation, skewness, and kurtosis. They are used to examine how the data is distributed.
  • Signal Features: include the amplitude and the dominant frequency. Amplitude is the maximum displacement of a point on a wave measured from its equilibrium position. Besides, while the lowest frequency component is known as the fundamental frequency, the dominant frequency is the frequency component with the highest amplitude.
  • Wavelet Features: The Daubechies wavelet is chosen to extract the features.
  • Information Theory: Entropy is extracted as a feature.
  • Besides the above features, the Mel Frequency Cepstral Coefficients (MFCCs) are also extracted as features, as sketched below.
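
Below is a sketch of how such a feature vector could be assembled for a single segment using NumPy, SciPy, PyWavelets, and librosa; the Daubechies order (db4), decomposition level, entropy definition, and MFCC settings are my assumptions rather than the paper’s exact choices.

```python
# A sketch of assembling one feature vector per segment, assuming NumPy, SciPy,
# PyWavelets, and librosa. The wavelet order/level, entropy definition, and
# MFCC settings are illustrative assumptions, not the paper's exact choices.
import numpy as np
import pywt
import librosa
from scipy.stats import kurtosis, skew

def extract_features(segment: np.ndarray, fs: int) -> np.ndarray:
    # Statistical features: how the amplitude values are distributed.
    statistical = [np.std(segment), skew(segment), kurtosis(segment)]

    # Signal features: peak amplitude and dominant frequency (strongest FFT bin).
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    signal_feats = [np.max(np.abs(segment)), freqs[np.argmax(spectrum)]]

    # Wavelet features: energy of each band of a Daubechies (db4) decomposition.
    coeffs = pywt.wavedec(segment, wavelet="db4", level=4)
    wavelet_feats = [np.sum(c ** 2) for c in coeffs]

    # Information theory: Shannon entropy of the normalized magnitude spectrum.
    p = spectrum / (np.sum(spectrum) + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))

    # MFCCs: mean of each coefficient across frames (short frames suit 0.16 s segments).
    mfcc = librosa.feature.mfcc(y=segment.astype(float), sr=fs, n_mfcc=13,
                                n_fft=256, hop_length=128, n_mels=40)
    mfcc_feats = mfcc.mean(axis=1)

    return np.concatenate([statistical, signal_feats, wavelet_feats, [entropy], mfcc_feats])
```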

Genetic Algorithm (GA) is used to select a feature subset for dimension reduction.
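
A minimal GA feature-selection sketch is shown below; it is not the paper’s implementation. Individuals are binary masks over the feature columns, and fitness is the cross-validated accuracy of an SVM on the selected columns; population size, generation count, and mutation rate are assumed values.

```python
# Minimal GA feature-selection sketch (not the paper's exact implementation).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated SVM accuracy on the selected feature columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=5).mean()

def ga_select(X, y, pop_size=20, generations=30, mutation_rate=0.05):
    n_features = X.shape[1]
    # Each individual is a boolean mask over the feature columns.
    population = rng.integers(0, 2, size=(pop_size, n_features)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in population])
        # Truncation selection: keep the better half as parents.
        parents = population[np.argsort(scores)[::-1][: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            point = rng.integers(1, n_features)              # single-point crossover
            child = np.concatenate([a[:point], b[point:]])
            child ^= rng.random(n_features) < mutation_rate  # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)]                     # best mask found
```

On a 36-column feature matrix, a call like mask = ga_select(X, y) returns a boolean mask over the columns; in the paper’s experiments, GA kept 21 of the 36 features.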

1.5. ML Methods

Several ML methods are tried: gradient boosting, random forest, and SVM.
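
A hedged sketch of this comparison with scikit-learn is given below; the synthetic data merely stands in for the paper’s 156-sample, 36-feature matrix, and all hyperparameters are library defaults rather than the paper’s tuned settings.

```python
# Sketch of the three classifiers compared in the paper, using scikit-learn defaults.
# The synthetic data below is only a placeholder for the extracted heart sound features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data shaped like the paper's feature matrix: 156 samples, 36 features, 4 classes.
X, y = make_classification(n_samples=156, n_features=36, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "Gradient boosting": GradientBoostingClassifier(),
    "Random forest (RFC)": RandomForestClassifier(),
    "SVM (SVC)": SVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```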

2. Results

2.1. Before Dimension Reduction

Performance Before Dimension Reduction
  • The classification algorithms were utilized before and after executing the dimension reduction and the feature selection algorithms.
  • Before implementing the dimension reduction and feature selection algorithms, the algorithms’ outcomes, including the confusion matrices and the performance measures in terms of precision, recall, and F1-score, are shown in Tables 2–7 above (a sketch of computing these measures is given below).
  • The best accuracy, 87.5%, was achieved by the gradient boosting algorithm.
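
Continuing the placeholder setup from the previous snippet, the confusion matrix and the precision/recall/F1 report can be produced with scikit-learn as follows; this is only a sketch of the evaluation, not the paper’s code.

```python
# Continues from the classifier sketch above (same X_test/y_test and models dict).
from sklearn.metrics import classification_report, confusion_matrix

y_pred = models["Gradient boosting"].predict(X_test)
print(confusion_matrix(y_test, y_pred))                 # per-class confusion matrix
print(classification_report(y_test, y_pred, digits=3))  # precision, recall, F1-score
```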

2.2. After Dimension Reduction

Accuracy After Dimension Reduction
  • In the case of GA, the algorithm selected 21 features. The shapes of the resulting datasets were (156, 36), (156, 3), and (156, 21) for PCA, LDA, and GA, respectively, as sketched below.

The best performance was achieved when the RFC or SVC algorithm was used alongside GA, with 78% accuracy.
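
For illustration, the sketch below reproduces those three dataset shapes with scikit-learn’s PCA and LDA plus a placeholder GA mask; the random data, labels, and 21-column mask are stand-ins, not the paper’s actual features.

```python
# Illustrative reproduction of the reported shapes after dimension reduction;
# the feature matrix, labels, and GA mask here are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((156, 36))        # stand-in feature matrix
y = rng.integers(0, 4, size=156)          # stand-in labels for the 4 sound classes

X_pca = PCA(n_components=36).fit_transform(X)                            # (156, 36)
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)   # (156, 3)

ga_mask = np.zeros(36, dtype=bool)
ga_mask[:21] = True                       # placeholder for the 21 GA-selected columns
X_ga = X[:, ga_mask]                      # (156, 21)

print(X_pca.shape, X_lda.shape, X_ga.shape)
```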

2.3. MFCC as Features

Performance When MFCC as Features

The best accuracy was 95% when the gradient boosting algorithm was used to classify the sound.

In the two-class classification, the best result was 98% in terms of accuracy.

