Review — Breast Lesion Classification in Ultrasound Images Using Deep Convolutional Neural Network

A LeNet-Like CNN is Proposed

Sik-Ho Tsang
5 min read · Dec 28, 2022

Breast Lesion Classification in Ultrasound Images Using Deep Convolutional Neural Network, Zeimarani ACCESS’20, by Federal University of Amazonas, Fundação Centro de Controle de Oncologia do Estado do Amazonas (FCECON), and Federal University of Rio de Janeiro, 2020 ACCESS, Over 20 Citations (Sik-Ho Tsang @ Medium)
Medical Imaging, Medical Image Analysis, Image Classification

  • The dataset used in this work consists of a limited number of cases, 641 in total, histopathologically categorized (413 benign and 228 malignant lesions).
  • First, due to the limited amount of training data, a custom-built CNN with a few hidden layers is used, and regularization techniques are applied to improve the performance.
  • Second, transfer learning is used and some pre-trained models are adapted for the dataset.

Outline

  1. Pre-Processing
  2. CNN Model Architecture
  3. Results

1. Pre-Processing

The effect of applying bilinear interpolation and zero padding on a sample image: (a) original image, (b) image obtained from the original one through bilinear interpolation and zero-padding.

1.1. Resizing

  • The first step adjusts the image size to the CNN input size of 224×224 pixels. Bilinear interpolation is performed on the original image, of size 159×182 pixels, to obtain an intermediate image of size 201×224 pixels.
  • Then, zero-padding is applied along the vertical dimension to obtain a final image of size 224×224 pixels (a sketch of this step is shown below).
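
Below is a minimal sketch of this resizing-and-padding step, assuming a single-channel grayscale image, OpenCV for the bilinear interpolation, and symmetric padding (the paper does not state where the zero rows are added):

```python
import cv2
import numpy as np

def resize_and_pad(img, inter_hw=(201, 224), target=224):
    """Bilinear resize to an intermediate size, then zero-pad the vertical
    dimension to the CNN input size (inter_hw follows the 201x224 example)."""
    ih, iw = inter_hw
    resized = cv2.resize(img, (iw, ih), interpolation=cv2.INTER_LINEAR)
    pad = target - ih
    top, bottom = pad // 2, pad - pad // 2          # split padding top/bottom
    return np.pad(resized, ((top, bottom), (0, 0)), mode="constant")
```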

1.2. Class Balancing

  • The original image database has 413 benign images and 228 malignant images.
  • 185 malignant images were chosen randomly and, after applying image flips to these randomly chosen images, the total number of malignant cases was increased to 413. Therefore, the final image dataset comprised 826 images (413 benign and 413 malignant); a sketch of this balancing step is given below.
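
The class-balancing step can be sketched as follows, assuming the images are NumPy arrays and that horizontal flips are used (the paper only says "image flips", so the flip direction is an assumption):

```python
import random

def balance_by_flipping(benign, malignant, seed=0):
    """Oversample the minority (malignant) class by flipping randomly chosen
    images until both classes have the same number of samples."""
    rng = random.Random(seed)
    needed = len(benign) - len(malignant)       # 413 - 228 = 185 in the paper
    chosen = rng.sample(malignant, needed)
    flipped = [img[:, ::-1] for img in chosen]  # horizontal flip of each image
    return benign, malignant + flipped          # 413 benign, 413 malignant
```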

1.3. Normalization

  • Zero-centering and normalization are performed to obtain zero mean and unit variance: x′ = x − (1/N) Σᵢ xᵢ and x″ = x′ / σ (a sketch follows this list);
  • where x represents the original image, x′ the zero-centered image, N the number of samples in the data set, σ the standard deviation of the zero-centered data, and x″ the normalized zero-centered image.
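
A minimal NumPy sketch of this step, assuming the statistics are computed per pixel over the full training set (the paper does not spell out whether per-pixel or global statistics are used):

```python
import numpy as np

def zero_center_and_normalize(images):
    """Zero-center and scale a stack of images (shape (N, H, W)) so the
    data set has zero mean and unit variance."""
    mean = images.mean(axis=0)          # per-pixel mean over the N samples
    centered = images - mean            # x' = x - mean
    std = centered.std(axis=0) + 1e-8   # avoid division by zero
    return centered / std               # x'' = x' / std
```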

2. CNN Model Architecture

2.1. Model Architecture

The proposed CNN architecture.
  • A LeNet-like CNN, consisting of four convolutional layers, is used.
  • In the first convolutional layer, 32 filters of size 3×3 are used.
  • In the second convolutional layer, 64 filters of size 7×7 are used.
  • In the third convolution layer, 128 filters of size 5×5 are used.
  • In the last convolutional layer, 256 filters of size 3×3 are used.
  • In all convolutional operations, a stride of 1 and zero-padding of 1 are used. The activation function of all convolutional layers is ReLU.
  • In between convolutions, after ReLU, a 2×2 max-pooling layer is used for dimensional reduction.
  • Batch normalization is applied after each convolutional layer before the non-linearity.
  • The last convolutional layer is followed by two fully connected layers. The first fully connected layer is followed by a ReLU and the second by a Softmax activation function.
  • Binary classification is performed using binary logistic regression with a cross-entropy loss (a sketch of the full architecture is given after this list).
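
The sketch below is a PyTorch re-interpretation of the architecture described above; the width of the first fully connected layer, the single-channel input, and the exact placement of the pooling layers ("in between convolutions") are assumptions, while the dropout after the first fully connected layer follows Section 2.2:

```python
import torch.nn as nn

class LeNetLikeCNN(nn.Module):
    """Sketch of the four-convolutional-layer CNN described above
    (re-interpretation; implementation details are assumptions)."""
    def __init__(self, num_classes=2, fc_width=128):
        super().__init__()
        def block(in_ch, out_ch, k):
            # conv (stride 1, padding 1) -> batch norm -> ReLU
            return [nn.Conv2d(in_ch, out_ch, k, stride=1, padding=1),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True)]
        self.features = nn.Sequential(
            *block(1, 32, 3),   nn.MaxPool2d(2),   # 32 filters of size 3x3
            *block(32, 64, 7),  nn.MaxPool2d(2),   # 64 filters of size 7x7
            *block(64, 128, 5), nn.MaxPool2d(2),   # 128 filters of size 5x5
            *block(128, 256, 3),                   # 256 filters of size 3x3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(fc_width),               # first FC layer + ReLU
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),                       # dropout after the first FC layer
            nn.Linear(fc_width, num_classes),      # second FC layer + Softmax
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

Note that with PyTorch's nn.CrossEntropyLoss, which applies log-softmax internally, the final Softmax layer would be dropped and raw logits used during training.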

2.2. Training

Examples of image rotations and flips.
  • 500 epochs are used. The mini-batch size is 128.
  • For image augmentation, various image reflections, rotations, and translations were used to generate a new dataset. This new data set contains 41,630 images.
  • Dropout is employed after the first fully connected layer with a probability of 0.5, together with L2 regularization with a fixed regularization factor of 0.05 (see the sketch after this list).
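
A sketch of this training setup, reusing the LeNetLikeCNN class from the earlier sketch; the augmentation ranges, learning rate, and momentum are assumptions, while the L2 factor of 0.05 (as weight decay), batch size of 128, and 500 epochs come from the text:

```python
import torch
from torchvision import transforms

# Augmentation to enlarge the data set: reflections, rotations, translations.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1)),
])

model = LeNetLikeCNN()
model(torch.zeros(1, 1, 224, 224))   # dummy forward pass to initialize LazyLinear

# SGD with momentum (SGDM, selected in Section 3.1); L2 regularization is
# expressed through weight_decay, dropout is already inside the model.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                            momentum=0.9, weight_decay=0.05)
# Training then runs for 500 epochs with mini-batches of 128 images (loop omitted).
```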

3. Results

3.1. Ablations

Performance metrics of the network, using different optimizers.

Using SGDM resulted in a slight improvement in the AUC value, and it was therefore selected as the candidate optimizer.

The ROC Curves, and the AUC value of the proposed method.
Performance metrics after applying image augmentation and regularization.

Image augmentation combined with appropriate regularization techniques increased both accuracy and AUC.

3.2. SOTA Comparisons

Performance comparison of proposed method versus pre-trained models.

Some pre-trained models are fine-tuned, e.g. VGG-19, GoogLeNet, and ResNet-50. The proposed method, with its simpler architecture, obtains a slightly higher AUC.

The CNN has a much higher AUC compared with other feature-selection-based methods.

Chi-square test applied to evaluate statistically significant differences between the proposed method and SOTA approaches.
  • A comparison of hits and errors is also performed.
  • The null hypothesis is that there are no statistically significant differences. The adopted confidence level was 99% (1% significance level) with one degree of freedom, resulting in a critical χ² value of tc = 6.63.

The values of the significance tests shown in the table are all higher than tc, so the null hypothesis must be rejected.
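
As a quick check of the quoted threshold, the critical value can be reproduced with SciPy (the actual hit/error contingency tables are in the paper and are not reproduced here):

```python
from scipy.stats import chi2

# Critical chi-square value for 1 degree of freedom at the 99% confidence
# level (1% significance), matching tc = 6.63 quoted above.
tc = chi2.ppf(0.99, df=1)
print(round(tc, 2))  # 6.63

# A computed test statistic above tc rejects the null hypothesis of no
# significant difference between two classifiers' hit/error counts.
```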

The ROC Curves, and the AUC value of the proposed method vs. radiologist’s diagnosis.
Performance comparison of the method versus radiologists classifications.

The proposed method outperformed the radiologists’ evaluations in terms of accuracy and sensitivity, but falls below the radiologists’ performance regarding specificity, precision, and false alarm rate.

Reference

[2020 ACCESS] [Zeimarani ACCESS’20]
Breast Lesion Classification in Ultrasound Images Using Deep Convolutional Neural Network

4.1. Biomedical Image Classification

2017 [ChestX-ray8] 2019 [CheXpert] 2020 [VGGNet for COVID-19] [Dermatology] [Deep-COVID] [Zeimarani ACCESS’20] 2021 [CheXternal] [CheXtransfer]

==== My Other Previous Paper Readings ====

