Brief Review — FFANet: Feature fusion attention network to medical image segmentation
FFANet, Uses Improved VoVNet as Backbone
FFANet: Feature fusion attention network to medical image segmentation, FFANet, by Hebei University of Technology, 2021 J. BSPC (Sik-Ho Tsang @ Medium)

Biomedical Image Segmentation
2015 … 2022 [UNETR] [Half-UNet] [BUSIS] [RCA-IUNet] 2023 [DCSAU-Net]
==== My Other Paper Readings Are Also Over Here ====
- VoVNet is used as the backbone to extract multi-scale features.
- Then, a multi-scale feature aggregation (FF) module is used to fully extract context information.
- Finally, a mixed domain attention module is adopted to model the relevance of each spatial position and channel.
Outline
- FFANet
- Results
1. FFANet
1.1. Overall Architecture
- The blue box in the figure is the FF module. The image is input to the backbone to obtain F0, F1, F2, F3, F4.
- Then, F1, F2, F3, F4 are sent into the FF module, which outputs feature M.
- M′ is obtained by upsampling M. M′ and F0 are fused by a concatenation operation, then sent to the mixed domain attention module to output the final result.
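The dataflow above can be sketched in NumPy. All shapes and channel counts below are illustrative assumptions (not from the paper), and the backbone, FF module, and attention module are replaced by placeholders; only the upsample-then-concatenate fusion step is shown concretely.

```python
import numpy as np

def upsample2x(x):
    # nearest-neighbour upsampling along the two spatial axes
    return x.repeat(2, axis=1).repeat(2, axis=2)

F0 = np.zeros((64, 128, 128))   # shallow backbone feature (C, H, W), shape assumed
M = np.zeros((64, 64, 64))      # stand-in for the FF-module output on F1..F4

M_up = upsample2x(M)                         # M' = upsample(M)
fused = np.concatenate([M_up, F0], axis=0)   # fuse M' and F0 by channel concatenation
# 'fused' would then be sent to the mixed domain attention module
assert fused.shape == (128, 128, 128)
```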
1.2. VoVNet Backbone
- VoVNet is composed of four One-Shot Aggregation (OSA) modules.
- A residual branch is added in OSA.
- A channel attention module (ECA module) is added to the backbone.
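A One-Shot Aggregation block with the added residual branch can be sketched as below. This is a minimal NumPy sketch: 1×1 convolutions stand in for the real 3×3 convolutions, weights are random placeholders, and layer counts and widths are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # 1x1 convolution as a channel-mixing matmul: (Cout, Cin) x (Cin, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def osa_block(x, n_layers=3, mid=8):
    """One-Shot Aggregation sketch: each layer feeds the next; all layer
    outputs are concatenated once, aggregated by a 1x1 conv, and the
    input is added back (the residual branch FFANet adds to OSA)."""
    c = x.shape[0]
    feats, h = [], x
    for _ in range(n_layers):
        h = np.maximum(conv1x1(h, rng.standard_normal((mid, h.shape[0]))), 0)
        feats.append(h)
    agg = conv1x1(np.concatenate(feats, axis=0),
                  rng.standard_normal((c, mid * n_layers)))
    return agg + x   # residual branch

x = rng.standard_normal((16, 8, 8))
assert osa_block(x).shape == x.shape
```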
1.3. ECA Module
- The ECA module comes from ECA-Net, and is a lightweight channel attention module.
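ECA's core idea is channel attention without dimensionality reduction: global average pooling, a 1-D convolution of small kernel size k across channels, a sigmoid gate, then channel-wise rescaling. A minimal sketch, with placeholder (untrained) convolution weights:

```python
import numpy as np

def eca(x, k=3):
    """ECA channel attention sketch: pool spatially, run a 1-D conv of
    kernel size k across the channel descriptor (no channel reduction),
    gate with a sigmoid, and rescale each channel of the input."""
    y = x.mean(axis=(1, 2))                # (C,) global average pooled descriptor
    w = np.ones(k) / k                     # placeholder conv weights (learned in practice)
    y = np.convolve(y, w, mode='same')     # 1-D conv across neighbouring channels
    gate = 1.0 / (1.0 + np.exp(-y))        # sigmoid gate in (0, 1)
    return x * gate[:, None, None]         # channel-wise rescaling

x = np.random.default_rng(1).standard_normal((16, 8, 8))
assert eca(x).shape == x.shape
```

The kernel size k controls how many neighbouring channels interact; in ECA-Net it is chosen adaptively from the channel count.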
1.4. Mixed Domain Attention Module
- The module generates a (channel/position) attention matrix to represent the relevance between any two pixels or channels.
- The generated matrix is multiplied by the original feature.
- The above product is added to the original feature to obtain more context information.
- The concept is similar to the self-attention in Transformer.
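The position-attention branch of this multiply-then-add pattern can be sketched as follows. This is a simplification: unlike the paper (and Transformers), it uses no learned query/key/value projections, taking Q = K = V = the input feature.

```python
import numpy as np

def position_attention(x):
    """Position attention sketch: an N x N matrix (N = H*W) scores the
    relevance between every pair of pixels, reweights the flattened
    features, and the result is added back to the input feature."""
    c, h, w = x.shape
    f = x.reshape(c, h * w)                 # (C, N) flattened feature
    e = f.T @ f                             # (N, N) pairwise relevance scores
    e = e - e.max(axis=1, keepdims=True)    # numerically stable softmax over rows
    a = np.exp(e)
    a /= a.sum(axis=1, keepdims=True)       # attention matrix, rows sum to 1
    out = f @ a.T                           # multiply attention by the feature
    return x + out.reshape(c, h, w)         # add to the original for context

x = np.random.default_rng(2).standard_normal((4, 6, 6))
assert position_attention(x).shape == x.shape
```

The channel-attention branch is symmetric: a C × C matrix over channels instead of an N × N matrix over positions.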
2. Results
2.1. CHAOS Dataset
The proposed network has achieved the best performance on the CHAOS dataset.
2.2. ISIC 2017 Dataset
- The proposed network has achieved the best performance on the ISIC 2017 dataset.
2.3. Ablation Experiment
The residual branch, ECA channel attention, FF module, and mixed domain attention module each improve the segmentation performance to varying degrees.
The proposed method surpasses PSPNet by 2.4%.