A Review of Computer-Aided Heart Sound Detection Techniques
Review Article for Heart Sound Segmentation & Classification
Li, BioMed Research Int'20, by Jilin University, and The First Hospital of Jilin University
2020 Hindawi BioMed Research International, Over 80 Citations (Sik-Ho Tsang @ Medium)
Heart Sound Classification
2013… 2021 [CardioXNet] 2022 [CirCor Dataset] [CNN-LSTM] [DsaNet] [Modified Xception] [Improved MFCC+Modified ResNet] 2023 [2LSTM+3FC, 3CONV+2FC] [NRC-Net]
==== My Other Paper Readings Are Also Over Here ====
- In this paper, the latest developments in computer-aided heart sound detection techniques over the last 5 years (this is a paper from 2020) are reviewed, including denoising, segmentation, feature extraction, and classification, with emphasis on the application of deep learning algorithms to heart sound processing.
1. Heart Sound
- The normal duration of systole is about 0.35 sec and that of diastole is about 0.45 sec, for a total of 0.8 sec in a complete cycle (equivalent to a resting heart rate of about 75 beats/min). Deviations from these values are closely related to the occurrence of cardiovascular diseases.
- Figure 3 shows two normal cardiac cycles.
2. Segmentation
- The purpose of segmentation is to locate the beginning and end of the heart sounds (e.g., S1 and S2) within each cardiac cycle.
- At the time of the review, the methods used for heart sound segmentation mainly included hidden Markov models (HMM), the wavelet transform (WT), and correlation coefficient matrices, etc.; a simplified envelope-based sketch is given below.
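As a concrete illustration, below is a minimal Python sketch of Shannon-energy envelope extraction with peak picking, a common front end for the HMM- and wavelet-based segmenters mentioned above (it is not one of the paper's exact methods). The band limits, frame length, and peak-picking thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def shannon_envelope(pcg, fs, frame_ms=20):
    """Shannon-energy envelope of a PCG signal, a common front end
    for HMM- or wavelet-based heart sound segmentation."""
    # Band-pass to the typical heart sound band (assumed 25-400 Hz).
    b, a = butter(4, [25, 400], btype="band", fs=fs)
    x = filtfilt(b, a, pcg)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalize to [-1, 1]
    se = -x**2 * np.log(x**2 + 1e-12)          # Shannon energy
    win = int(frame_ms * fs / 1000)            # moving-average smoothing
    return np.convolve(se, np.ones(win) / win, mode="same")

def locate_sounds(envelope, fs):
    """Candidate S1/S2 locations: envelope peaks at least 200 ms apart
    (an assumed refractory interval); deciding which peak is S1 and
    which is S2 is what the HMM-based methods then solve."""
    peaks, _ = find_peaks(envelope,
                          height=0.3 * envelope.max(),
                          distance=int(0.2 * fs))
    return peaks / fs  # peak times in seconds
```

In practice, a state model (e.g., an HMM over the S1-systole-S2-diastole sequence) is decoded on top of such an envelope to label the peaks and recover the beginning and end of each sound.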
3. Feature Extraction and Classification
- At the time of the review, the discrete wavelet transform (DWT), continuous wavelet transform (CWT), short-time Fourier transform (STFT), and Mel-frequency cepstral coefficients (MFCC) were commonly used methods for heart sound feature extraction.
- For classification, machine-learning-based methods such as SVM, kNN, BP neural networks, and logistic regression are commonly utilized; a sketch of this pipeline follows below.
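As an illustration of this classical feature-plus-classifier pipeline, the sketch below extracts MFCC features with librosa and classifies them with an RBF-kernel SVM from scikit-learn. The frame sizes, mean/std pooling, hyperparameters, and the random placeholder data are all assumptions for demonstration, not settings from the paper.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(pcg, fs, n_mfcc=13):
    """Pool frame-level MFCCs into one fixed-length vector per recording
    (mean/std pooling and frame sizes are illustrative assumptions)."""
    mfcc = librosa.feature.mfcc(y=pcg.astype(np.float32), sr=fs,
                                n_mfcc=n_mfcc, n_mels=40,
                                n_fft=512, hop_length=128)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder data standing in for a labelled PCG corpus:
# 200 random one-second recordings at 2 kHz, labels 0 = normal, 1 = abnormal.
rng = np.random.default_rng(0)
recordings = rng.normal(size=(200, 2000))
labels = rng.integers(0, 2, size=200)

X = np.stack([mfcc_features(r, fs=2000) for r in recordings])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```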
4. Deep Learning
- Table 4 lists representative literature on deep learning applied to the classification of heart sound signals over the past five years. (A generic sketch of such a classifier is given at the end of this section.)
- Yet, deep learning faces some challenges.
First, deep learning models have a very large number of parameters to optimize, so they require long execution times and large training datasets.
Second, deep learning modelling calls for high-end computer configurations with powerful CPUs and GPUs for the computation, so the experimental cost is high.
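For completeness, here is a generic, minimal PyTorch sketch of the kind of 1D CNN classifier that appears in this literature; the architecture, input length, and sampling rate are illustrative assumptions, not a specific model from Table 4.

```python
import torch
import torch.nn as nn

class PCGConvNet(nn.Module):
    """Small 1D CNN over raw PCG segments (a generic sketch,
    not a specific architecture from the review)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # fixed-length output for any input size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):             # x: (batch, 1, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = PCGConvNet()
dummy = torch.randn(8, 1, 2000)       # 8 one-second clips at an assumed 2 kHz
print(model(dummy).shape)             # -> torch.Size([8, 2])
```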