Review: Self-Attention Generative Adversarial Networks (SAGAN)

The proposed SAGAN generates images using cues from all spatial locations: it leverages complementary features in distant portions of the image, rather than only local regions of fixed shape, to generate globally consistent objects and scenes.


1. Self-Attention Generative Adversarial Network (SAGAN)

The proposed self-attention mechanism: queries f(x), keys g(x), and values h(x) are computed from the feature map by 1×1 convolutions; attention weights β are a softmax over all spatial positions; and the output y = γ·o + x adds the attended features back to the input with a learnable scale γ initialized to 0, so the network first relies on local cues and gradually learns to use non-local ones.
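The attention step above can be sketched in plain NumPy on a flattened feature map (a minimal sketch of the mechanism, not the paper's implementation; weight shapes and names are my own, and the 1×1 convolutions are expressed as channel-wise matrix multiplies):

```python
import numpy as np

def self_attention(x, Wf, Wg, Wh, gamma=0.0):
    """SAGAN-style self-attention over flattened spatial positions.

    x:      (C, N) feature map flattened to N = H*W positions.
    Wf, Wg: (C', C) 1x1-conv weights producing queries f(x) and keys g(x).
    Wh:     (C, C)  1x1-conv weights producing values h(x).
    gamma:  learnable scalar; initialized to 0 so the block starts as identity.
    """
    f = Wf @ x                 # (C', N) queries
    g = Wg @ x                 # (C', N) keys
    h = Wh @ x                 # (C,  N) values
    s = f.T @ g                # (N, N) logits: s[i, j] = f(x_i) . g(x_j)
    # softmax over i (positions attended to) for each output position j
    beta = np.exp(s - s.max(axis=0, keepdims=True))
    beta /= beta.sum(axis=0, keepdims=True)
    o = h @ beta               # o[:, j] = sum_i beta[i, j] * h(x_i)
    return gamma * o + x       # residual connection with learnable scale
```

With `gamma=0.0` the block returns its input unchanged, which is exactly why the authors can drop it into a pretrained-style GAN backbone without disrupting early training.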

2. Techniques to Stabilize GAN Training

2.1. Spectral Normalization (SN)
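Spectral normalization constrains each weight matrix to have spectral norm (largest singular value) of at most 1, which SAGAN applies to both the generator and the discriminator. The singular value is estimated cheaply with power iteration; a minimal NumPy sketch (my own, not the paper's code):

```python
import numpy as np

def spectral_norm(W, u, n_iters=1):
    """Normalize W by a power-iteration estimate of its largest singular value.

    W: (m, n) weight matrix.
    u: (m,) persistent left-singular-vector estimate, carried across calls.
    Returns the normalized matrix and the updated u.
    """
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12   # right singular vector estimate
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12   # left singular vector estimate
    sigma = u @ W @ v                    # Rayleigh-quotient estimate of sigma_max
    return W / sigma, u
```

In practice a single power iteration per training step suffices, because the weights change slowly and `u` is reused from step to step.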

2.2. Two-Timescale Update Rule (TTUR)
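TTUR simply trains the discriminator with a larger learning rate than the generator, compensating for the slower discriminator learning that regularization induces; SAGAN uses 0.0001 for G and 0.0004 for D. A toy gradient-descent sketch of one alternating update (names and the plain-SGD update are my own illustration; the paper uses Adam):

```python
def ttur_step(params_G, grads_G, params_D, grads_D, lr_G=1e-4, lr_D=4e-4):
    """One alternating update with a slower generator and faster discriminator.

    params_*/grads_*: lists of parameter and gradient values.
    lr_G < lr_D implements the two-timescale update rule.
    """
    new_D = [p - lr_D * g for p, g in zip(params_D, grads_D)]  # D step first
    new_G = [p - lr_G * g for p, g in zip(params_G, grads_G)]  # then G step
    return new_G, new_D
```

The practical payoff reported in the paper is fewer discriminator steps per generator step, and hence faster wall-clock training for the same quality.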

3. Experimental Results

3.1. Evaluating the Proposed Stabilization Techniques

Training curves for the baseline model and for models trained with the proposed stabilization techniques.
128×128 examples randomly generated by the baseline model and by the "SN on G/D" and "SN on G/D+TTUR" models.

3.2. Self-Attention Mechanism

Comparison of the self-attention block and a residual block in GANs.
Visualization of attention maps.

3.3. SOTA Comparison

Comparison of the proposed SAGAN with state-of-the-art GAN models [19, 17] for class-conditional image generation on ImageNet.
128×128 example images generated by SAGAN for different classes; each row shows samples from one class.

