Brief Review — cU-Net+PE: Simultaneous Segmentation and Classification of Bone Surfaces from Ultrasound Using a Multi-feature Guided CNN
Modified U-Net for Both Classification & Segmentation
Simultaneous Segmentation and Classification of Bone Surfaces from Ultrasound Using a Multi-feature Guided CNN
cU-Net+PE, by Rutgers University, and Rutgers Robert Wood Johnson Medical School, 2018 MICCAI, Over 40 Citations (Sik-Ho Tsang @ Medium)
Medical Image Analysis, Medical Image Classification, Medical Image Segmentation
- U-Net is modified to support both classification and segmentation at the same time.
Outline
- cU-Net & cU-Net+PE
- Results
1. cU-Net & cU-Net+PE
1.1. Inputs
- The input is the concatenation of the B-mode ultrasound (US) scan US(x, y) and three filtered image features:
- 1.1.1. Local Phase Tensor Image (LPT(x, y)): LPT(x, y) is computed from even and odd filter responses [5]: LPT(x, y) = √(T_even² + T_odd²) × cos(φ(x, y)),
- where T_even and T_odd represent the symmetric and asymmetric features of US(x, y), built from the Hessian (H), gradient (∇) and Laplacian (∇²) operators, and φ(x, y) is the instantaneous phase.
- 1.1.2. Local Phase Bone Image (LP(x, y)): LP(x, y) is computed as: LP(x, y) = LPT(x, y) × LPE(x, y) × LwPA(x, y),
- where LPE(x, y) and LwPA(x, y) represent the local phase energy and local weighted mean phase angle image features, respectively.
- 1.1.3. Bone Shadow Enhanced Image (BSE(x, y)): BSE(x, y) is computed by modeling the interaction of the US signal within the tissue as scattering and attenuation information [6]: BSE(x, y) = [(1 − LP(x, y)) × CM_LP(x, y)] + LP(x, y) × US_A(x, y),
- where CM_LP(x, y) is the confidence map image obtained by modeling the propagation of the US signal inside the tissue, taking into account the bone features present in the LP(x, y) image [6], and US_A(x, y) maximizes the visibility of high-intensity bone features inside a local region.
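The bone shadow enhancement reduces to a pixel-wise convex blend between the confidence map and the attenuation-maximized image, gated by the local phase bone image. A minimal numpy sketch (the function name is mine; computing the actual LP, CM_LP and US_A images per [5] and [6] is out of scope here):

```python
import numpy as np

def bone_shadow_enhance(lp, cm_lp, us_a):
    """Pixel-wise fusion used to build the BSE(x, y) image:
    in shadowed regions (low LP) the output follows the confidence
    map CM_LP, while strong bone responses (high LP) follow US_A."""
    return (1.0 - lp) * cm_lp + lp * us_a

# Toy 4x4 arrays standing in for the real filtered images.
rng = np.random.default_rng(0)
lp = rng.uniform(0, 1, (4, 4))      # local phase bone image, in [0, 1]
cm_lp = rng.uniform(0, 1, (4, 4))   # confidence map
us_a = rng.uniform(0, 1, (4, 4))    # attenuation-maximized US image

bse = bone_shadow_enhance(lp, cm_lp, us_a)
assert bse.shape == (4, 4)
```

Note the limiting behavior: where LP = 0 the output equals CM_LP exactly, and where LP = 1 it equals US_A.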
Thus, besides the US scan itself, the input also includes the features extracted based on [5] and [6], i.e. LPT, LP, and BSE, forming a 4×256×256 input.
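Assembling this input is simply channel-wise stacking of the scan and its three filtered versions. A small numpy sketch (the helper name is mine, assuming channel-first layout):

```python
import numpy as np

def build_network_input(us, lpt, lp, bse):
    """Stack the B-mode scan and its three filtered features into the
    4x256x256 channel-first tensor fed to cU-Net(+PE)."""
    x = np.stack([us, lpt, lp, bse], axis=0)
    assert x.shape == (4, 256, 256)
    return x

us = np.zeros((256, 256), dtype=np.float32)
x = build_network_input(us, us, us, us)
print(x.shape)  # (4, 256, 256)
```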
1.2. Model Architecture
- Pre-enhancing Network (PE): contains seven convolutional layers with 32 feature maps each, plus one final convolutional layer with a single feature map.
- U-Net is used, with the following differences:
- The max-pooling layers in the contracting path are replaced by convolutional layers with stride two.
- The feature maps at the last convolutional layer of the contracting path (left side) are fed into a classifier consisting of one fully-connected layer followed by a final 4-way softmax layer.
- BN is applied before every ReLU layer.
- The number of starting feature maps is reduced from 32 to 16.
- A cross-entropy loss is used for both the segmentation and classification tasks.
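The joint objective combines a per-pixel cross-entropy for segmentation with a 4-way cross-entropy for classification. A numpy sketch of that combination (function names are mine; the equal weighting of the two terms is an assumption, as the review does not state the weights):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, targets, axis):
    """Mean cross-entropy; targets hold integer class indices."""
    p = softmax(logits, axis=axis)
    # probability assigned to the true class at each position
    true_p = np.take_along_axis(p, np.expand_dims(targets, axis), axis=axis)
    return float(-np.log(true_p + 1e-12).mean())

def joint_loss(seg_logits, seg_labels, cls_logits, cls_labels):
    """Sum of the per-pixel segmentation CE and the 4-way
    classification CE (equal weighting assumed)."""
    return (cross_entropy(seg_logits, seg_labels, axis=1)
            + cross_entropy(cls_logits, cls_labels, axis=1))

# Toy batch: 2 images, 2 segmentation classes at 8x8, 4 bone types.
rng = np.random.default_rng(0)
seg_logits = rng.normal(size=(2, 2, 8, 8))
seg_labels = rng.integers(0, 2, size=(2, 8, 8))
cls_logits = rng.normal(size=(2, 4))
cls_labels = rng.integers(0, 4, size=(2,))
loss = joint_loss(seg_logits, seg_labels, cls_logits, cls_labels)
assert loss > 0.0
```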
2. Results
- A random split of the SonixTouch US images into training (80%) and testing (20%) sets is used. The training set consists of a total of 415 images, obtained from SonixTouch only. The remaining 104 SonixTouch images and all 131 Clarius C3 images were used for testing.
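As a quick sanity check on the reported split sizes (415 training + 104 test SonixTouch images, 80/20):

```python
# 415 training images are 80% of all SonixTouch scans;
# the remaining 20% (104) plus all 131 Clarius C3 images form the test set.
total_sonixtouch = 415 + 104
assert round(0.8 * total_sonixtouch) == 415
test_images = 104 + 131
print(total_sonixtouch, test_images)  # 519 235
```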
The above table shows that the proposed cU-Net+PE outperforms the other methods on test scans obtained from both US machines.
Reference
[2018 MICCAI] [cU-Net+PE]
Simultaneous Segmentation and Classification of Bone Surfaces from Ultrasound Using a Multi-feature Guided CNN