Review — Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation

Axial-DeepLab for Both Image Classification & Segmentation

Sik-Ho Tsang
6 min read · Feb 22, 2023

Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation,
Axial-DeepLab, by Johns Hopkins University, and Google Research,
2020 ECCV, Over 400 Citations (Sik-Ho Tsang @ Medium)
Image Classification, Panoptic Segmentation, Instance Segmentation, Semantic Segmentation
==== My Other Paper Readings Are Also Over Here ====

  • Conventional 2D self-attention, which has very high computational complexity, is factorized into two 1D self-attentions.
  • A position-sensitive self-attention design is proposed.
  • Combining both yields the position-sensitive axial-attention layer.
  • By stacking the position-sensitive axial-attention layers, Axial-DeepLab models are formed for image classification and dense prediction.
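The factorization above can be sketched in a few lines of NumPy. This is a minimal single-head illustration, not the paper's implementation: it omits multi-head attention, the position-sensitive terms, and learned relative positional encodings, and all weight matrices here are randomly initialized placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_1d(x, Wq, Wk, Wv):
    # x: (n, d) sequence; plain single-head dot-product attention
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T) @ v

def axial_attention(x, params_h, params_w):
    # x: (h, w, d). Attend along the height axis within each column,
    # then along the width axis within each row: two 1D attentions
    # replace one full 2D attention over all h*w positions.
    h, w, d = x.shape
    # height axis: each of the w columns is a length-h sequence
    x = np.stack([attention_1d(x[:, j], *params_h) for j in range(w)], axis=1)
    # width axis: each of the h rows is a length-w sequence
    x = np.stack([attention_1d(x[i], *params_w) for i in range(h)], axis=0)
    return x

rng = np.random.default_rng(0)
d = 8
params_h = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
params_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
x = rng.standard_normal((4, 5, d))
y = axial_attention(x, params_h, params_w)
print(y.shape)  # (4, 5, 8)
```

The payoff is in the cost: full 2D attention over n = hw positions is O(h²w²), while the two axial passes cost O(hwm) for span m (O(hw·max(h, w)) when the span is global), which is what makes global attention affordable on large feature maps.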

Outline

  1. Position-Sensitive Axial-Attention Layer
  2. Axial-DeepLab
  3. Results

1. Position-Sensitive Axial-Attention Layer

1.1. Conventional Self-Attention

  • Given an input feature map x with height h, width w, and channels d_in, the output at position o = (i, j), y_o, is computed by pooling over the projected input:

    y_o = Σ_{p ∈ N} softmax_p(q_o^T k_p) v_p

  • where N is the whole location lattice, and the queries q_o = W_Q x_o, keys k_p = W_K x_p, and values v_p = W_V x_p are all linear projections of the input x.
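As a concrete illustration of this pooling, here is a minimal NumPy sketch of global 2D self-attention over the flattened lattice. It is a single-head toy version with hypothetical random weights, included only to make the quadratic cost visible; it is not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_self_attention_2d(x, Wq, Wk, Wv):
    # x: (h, w, d_in). Flatten the 2D lattice to n = h*w positions so that
    # every output y_o pools values over ALL positions p in N:
    #   y_o = sum_p softmax_p(q_o^T k_p) v_p
    h, w, d_in = x.shape
    xf = x.reshape(h * w, d_in)
    q, k, v = xf @ Wq, xf @ Wk, xf @ Wv
    attn = softmax(q @ k.T)   # (n, n) matrix: the O(h^2 w^2) bottleneck
    return (attn @ v).reshape(h, w, -1)

rng = np.random.default_rng(1)
d_in, d_out = 6, 6
W = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(3)]
x = rng.standard_normal((3, 4, d_in))
y = global_self_attention_2d(x, *W)
print(y.shape)  # (3, 4, 6)
```

The (n × n) attention matrix is exactly what the axial factorization avoids materializing.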

Written by Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.