Brief Review — Gated recurrent unit‑based heart sound analysis for heart failure screening

2-Layer 64-Unit GRU

Sik-Ho Tsang
3 min read · Feb 3, 2024
Workflow

Gated recurrent unit‑based heart sound analysis for heart failure screening
2-Layer 64-Unit GRU, by Chongqing University, and The First Affiliated Hospital of Chongqing Medical University
2020 J. BioMed Eng OnLine, Over 40 Citations (Sik-Ho Tsang @ Medium)


  • The logistic regression-based hidden semi-Markov model was adopted to segment HS frames.
  • Normalized frames were taken as the input of the proposed GRU model without any denoising and hand-crafted feature extraction.

Outline

  1. 2-Layer 64-Unit GRU
  2. Results

1. 2-Layer 64-Unit GRU

1.1. Dataset & Preprocessing

Localization and Segmentation

The heart sound (HS) data used in this paper contain three categories: heart failure (HF) with reduced ejection fraction (HFrEF), HF with preserved ejection fraction (HFpEF), and normal.

  • All recordings are down-sampled to 600 Hz, in accordance with the Nyquist sampling theorem.
  • Then, the logistic regression-based hidden semi-Markov model (LR-HSMM) is used to localize the onset of S1.
  • The duration of a cardiac cycle is about 0.6–0.8 s, so the frame length is fixed at 1.6 s, which covers approximately two cardiac cycles.
  • As depicted in Fig. 7a above, frames are segmented at intervals of one cardiac cycle. Whenever the frame length exceeds two cycles, overlap between consecutive frames is inherent, as exemplified in Fig. 7b.
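The framing step above can be sketched as follows. This is a minimal NumPy sketch under assumptions: the S1 onsets are taken as given (in the paper they come from LR-HSMM), and the function name and signature are hypothetical, not from the paper.

```python
import numpy as np

FS = 600                     # sampling rate after down-sampling (Hz)
FRAME_LEN = int(1.6 * FS)    # 1.6 s frame, covering ~two cardiac cycles

def segment_frames(signal, s1_onsets):
    """Cut one fixed-length 1.6 s frame starting at each detected S1 onset.

    Since a cardiac cycle lasts ~0.6-0.8 s and frames start one cycle
    apart, consecutive frames overlap whenever the frame spans more
    than two cycles (cf. Fig. 7b).
    """
    frames = [signal[onset:onset + FRAME_LEN]
              for onset in s1_onsets
              if onset + FRAME_LEN <= len(signal)]
    return np.stack(frames) if frames else np.empty((0, FRAME_LEN))
```

With onsets spaced roughly one cycle (~0.4–0.5 s, i.e. 240–300 samples) apart, each 960-sample frame shares its tail with the next frame's head, which is the inherent overlap the paper describes.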

A total of 23,120 HS frames were segmented: 7,670 HFrEF, 7,710 HFpEF, and 7,740 normal frames.

  • Finally, max-min normalization is performed.
  • The normalized signal is used as the input to the GRU.
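The max-min normalization step is a one-liner; a sketch (the function name is mine, and it assumes each frame is non-constant so the denominator is non-zero):

```python
import numpy as np

def max_min_normalize(frame):
    """Scale a heart-sound frame to the range [0, 1]."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo)  # assumes hi > lo (non-constant frame)
```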

1.2. 2-Layer 64-Unit GRU

Proposed 2-Layer 64-Unit GRU

The 2-layer 64-unit GRU shown above is the proposed model.

  • FCN, SVM, and LSTM are used for comparison.
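A minimal tf.keras sketch of such a model follows. Only the layer sizes (2 GRU layers, 64 units each, 3-class output) come from the paper; the softmax head, optimizer, and loss are my assumptions, not confirmed details of the authors' setup.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

FRAME_LEN = 960  # 1.6 s frame at 600 Hz

def build_gru(num_classes=3):
    """Sketch of a 2-layer, 64-unit GRU classifier for HS frames.

    Input: a raw normalized frame, shape (FRAME_LEN, 1).
    Output: probabilities over {HFrEF, HFpEF, normal} (head is assumed).
    """
    model = models.Sequential([
        layers.Input(shape=(FRAME_LEN, 1)),
        layers.GRU(64, return_sequences=True),  # first recurrent layer
        layers.GRU(64),                         # second recurrent layer
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Note that no denoising or hand-crafted features are needed: the normalized frame itself is the input sequence.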

2. Results

2.1. GRU vs LSTM

GRU vs LSTM

GRU outperforms LSTM.

2.2. Number of Layers & Number of Hidden Units

Number of Layers & Number of Hidden Units
  • A GRU with 2 layers and 64 hidden units is the best choice.

2.3. SOTA Comparisons

10-Fold Accuracy Boxplot

It can be seen that the GRU achieves the best average accuracy of 98.82%, which is 2.53%, 4.17%, and 11.2% higher than that of LSTM, the fully convolutional network (FCN), and SVM, respectively.

2.4. Confusion Matrix

Confusion Matrix (Percentage)
