Brief Review — Multitask Classification and Segmentation for Cancer Diagnosis in Mammography

ResNet-based FCN With 2 Heads for Multi-Task Learning

Sik-Ho Tsang
2 min read · Apr 11, 2023

Multitask Classification and Segmentation for Cancer Diagnosis in Mammography,
FCN+S-Net+C-Net, by Conservatoire National des Arts et Metiers, and GE Healthcare, 2019 MIDL, Over 30 Citations (Sik-Ho Tsang @ Medium)

Biomedical Image Multi-Task Learning
2018 … 2020 [BUSI] [Song JBHI’20] [cGAN JESWA’20] 2021 [Ciga JMEDIA’21] [CMSVNetIter]
==== My Other Paper Readings Are Also Over Here ====

  • A Multi-Task learning (MTL) scheme is proposed, which combines pixel-level segmentation and global image-level classification annotations for cancer diagnosis in mammography.

Outline

  1. FCN+S-Net+C-Net
  2. Results

1. FCN+S-Net+C-Net

Proposed Model Architecture
  • Backbone: A ResNet-based FCN is used as the shared backbone to extract local features.
  • S-Net: The segmentation network classifies each pixel of the input image into a set of K pre-defined classes. S-Net first adds a transfer layer, a 1×1 convolution that projects the backbone features onto K feature maps.
  • Then, an upsampling step brings these maps back to the input resolution to produce the semantic segmentation.
  • A weighted cross-entropy loss Lseg is used to address the class-imbalance issue.
  • C-Net: First, local features are aggregated with global average pooling (GAP). Then, a final fully connected layer outputs the probability of cancer. The classification loss Lcls is a standard binary cross-entropy.
  • The total joint loss function combines the two task losses, weighting the segmentation term Lseg against the classification term Lcls.
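To make the two-head design concrete, here is a minimal PyTorch sketch of a ResNet-based backbone with an S-Net and a C-Net head, plus the joint loss. The module names, the ResNet-34 choice, and the weighting L = Lcls + λ·Lseg are my assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the two-head model described above (assumed details:
# ResNet-34 trunk, lambda_seg weighting) — not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34


class MammoMTL(nn.Module):
    def __init__(self, num_seg_classes=2):
        super().__init__()
        backbone = resnet34(weights=None)
        # Shared ResNet-based FCN backbone: keep the convolutional trunk,
        # drop the average pooling and fully connected layers.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = 512  # output channels of the last ResNet-34 block

        # S-Net: 1x1 "transfer" convolution to K class maps, then upsampling.
        self.transfer = nn.Conv2d(feat_dim, num_seg_classes, kernel_size=1)

        # C-Net: global average pooling + final fully connected layer.
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, 512, H/32, W/32)
        seg_logits = self.transfer(feats)          # (B, K, H/32, W/32)
        seg_logits = F.interpolate(seg_logits, size=x.shape[-2:],
                                   mode="bilinear", align_corners=False)
        cls_logits = self.classifier(feats.mean(dim=(2, 3)))  # GAP + FC
        return seg_logits, cls_logits.squeeze(1)


def joint_loss(seg_logits, cls_logits, seg_target, cls_target,
               class_weights, lambda_seg=1.0):
    """Weighted CE for segmentation + BCE for classification (assumed weighting)."""
    l_seg = F.cross_entropy(seg_logits, seg_target, weight=class_weights)
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_target)
    return l_cls + lambda_seg * l_seg
```

For a (B, 3, H, W) input, the model returns a (B, K, H, W) segmentation map and a (B,) cancer logit; the hypothetical lambda_seg hyperparameter trades off the two tasks.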

2. Results

Segmentation and classification performances on DDSM
  • One setting is to first train the segmentation model and then finetune the local features for the classification task; the proposed setting is to train both tasks jointly (see the sketch after this list).
  • By pretraining the model on segmentation, the classification performance is slightly improved by about 1 point to AUC=81.37%, compared with pure classification at AUC=80.54%.
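A rough sketch of the two training settings being compared is given below. It reuses the MammoMTL model and joint_loss helper from the earlier sketch; the loader is assumed to yield (image, segmentation mask, cancer label) triples, and the optimiser and epoch counts are placeholders, not the paper's settings.

```python
# Sequential vs. joint training, sketched under the assumptions above.
import torch
import torch.nn.functional as F


def train_sequential(model, loader, class_weights, epochs=10):
    """Setting 1: train on segmentation first, then finetune for classification."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):                      # stage 1: segmentation only
        for x, seg_t, _ in loader:
            seg_logits, _ = model(x)
            loss = F.cross_entropy(seg_logits, seg_t, weight=class_weights)
            opt.zero_grad()
            loss.backward()
            opt.step()
    for _ in range(epochs):                      # stage 2: classification finetuning
        for x, _, cls_t in loader:
            _, cls_logits = model(x)
            loss = F.binary_cross_entropy_with_logits(cls_logits, cls_t)
            opt.zero_grad()
            loss.backward()
            opt.step()


def train_joint(model, loader, class_weights, epochs=10, lambda_seg=1.0):
    """Setting 2 (proposed): optimise both task losses at every step."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x, seg_t, cls_t in loader:
            seg_logits, cls_logits = model(x)
            loss = joint_loss(seg_logits, cls_logits, seg_t, cls_t,
                              class_weights, lambda_seg)
            opt.zero_grad()
            loss.backward()
            opt.step()
```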

With the proposed joint method, a significant gain is achieved in both segmentation and classification: 3.5 points (mean Dice=38.28%) and about 2.5 points (AUC=84.02%), respectively.
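For reference, the two reported metrics can be computed roughly as follows; the per-image Dice definition and the dummy values here are my assumptions, not the paper's exact evaluation protocol.

```python
# Sketch of computing Dice overlap and ROC AUC (assumed evaluation details).
import numpy as np
from sklearn.metrics import roc_auc_score


def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)


# Dummy example: predicted cancer probabilities and labels over a test set.
cancer_probs = np.array([0.9, 0.2, 0.7, 0.1])
cancer_labels = np.array([1, 0, 1, 0])
print("AUC:", roc_auc_score(cancer_labels, cancer_probs))

# Dummy example: predicted and ground-truth binary masks for one image.
pred = np.array([[0, 1], [1, 1]], dtype=bool)
true = np.array([[0, 1], [0, 1]], dtype=bool)
print("Dice:", dice_score(pred, true))
```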

Segmentation and classification examples on DDSM

The proposed joint method outperforms the sequential one for both classification and segmentation, and succeeds in capturing lesions with highly precise localisation.


Written by Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.
