Review — Context Encoders: Feature Learning by Inpainting

Context Encoders for Inpainting & Self-Supervised Learning

Semantic inpainting results on held-out images for a context encoder trained using reconstruction and adversarial losses.

Outline

1. Context Encoders for Image Generation


1.1. Pipeline

1.2. Encoder
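The encoder maps the context image (with the missing region removed) to a compact bottleneck feature; for the inpainting network the context input is 128×128. Below is a minimal PyTorch sketch using strided convolutions and LeakyReLU; the exact layer widths and counts are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ContextEncoderDown(nn.Module):
    """Illustrative encoder: 3x128x128 context image -> 512x4x4 bottleneck.
    Layer widths/counts are assumptions, not the paper's exact setup."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 128 -> 64
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 64, 4, stride=2, padding=1),   # 64 -> 32
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32 -> 16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16 -> 8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), # 8 -> 4
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.net(x)  # (B, 512, 4, 4)
```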

1.3. Channel-Wise Fully-Connected Layer
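A standard fully-connected layer at the bottleneck would be very expensive, so the channel-wise fully-connected layer lets information travel across the full spatial extent of each feature map while using no parameters that connect different channels: with m feature maps of size n×n it needs m·n⁴ weights instead of m²·n⁴, and a following 1×1 convolution then propagates information across channels. A minimal sketch, assuming the 512×4×4 bottleneck from the encoder sketch above:

```python
import torch
import torch.nn as nn

class ChannelWiseFC(nn.Module):
    """Each channel gets its own (H*W x H*W) dense weight, with no
    cross-channel connections; a 1x1 convolution afterwards mixes
    information across channels. Sizes follow the sketch above."""
    def __init__(self, channels=512, spatial=4):
        super().__init__()
        n = spatial * spatial
        self.weight = nn.Parameter(torch.randn(channels, n, n) * 0.01)
        self.bias = nn.Parameter(torch.zeros(channels, n))
        self.cross_channel = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.flatten(2)                                   # (B, C, H*W)
        out = torch.einsum('bcn,cnm->bcm', flat, self.weight) + self.bias
        return self.cross_channel(out.view(b, c, h, w))
```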

1.4. Decoder
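The decoder maps the bottleneck features back to pixels with a series of up-convolutions (fractionally-strided convolutions). The sketch below, continuing the two sketches above, also wires encoder, channel-wise FC and decoder together into the overall pipeline of Section 1.1; the 64×64 output size and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextDecoder(nn.Module):
    """Illustrative decoder: 512x4x4 bottleneck -> 3x64x64 predicted region."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),  # 4 -> 8
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 8 -> 16
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 16 -> 32
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),     # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class ContextEncoder(nn.Module):
    """Overall pipeline (Section 1.1): encoder -> channel-wise FC -> decoder.
    Reuses ContextEncoderDown and ChannelWiseFC from the sketches above."""
    def __init__(self):
        super().__init__()
        self.encoder = ContextEncoderDown()
        self.channel_fc = ChannelWiseFC(channels=512, spatial=4)
        self.decoder = ContextDecoder()

    def forward(self, masked_image):        # (B, 3, 128, 128) context
        return self.decoder(self.channel_fc(self.encoder(masked_image)))
```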

2. Loss Function

2.1. Reconstruction Loss
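The reconstruction loss is a normalized masked L2 distance: with M the binary mask that is 1 on the dropped pixels and 0 on the visible context, and F the context encoder, L_rec(x) = ||M ⊙ (x − F((1 − M) ⊙ x))||², so only the dropped region is penalized. A minimal sketch; normalizing by the number of masked pixels is an assumption for readability.

```python
def reconstruction_loss(prediction, target, mask):
    """Masked L2 loss on tensors: `mask` is 1 on the dropped pixels and 0 on
    the visible context, so only the dropped region contributes."""
    diff = mask * (target - prediction)
    return diff.pow(2).sum() / mask.sum().clamp(min=1.0)
```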

2.2. Adversarial Loss
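The adversarial loss is the standard GAN objective, L_adv = max_D E[log D(x) + log(1 − D(F((1 − M) ⊙ x)))], where the discriminator D is trained to tell real regions from the context encoder's predictions and only the generator is conditioned on the context. A hedged sketch using binary cross-entropy on logits, assuming a discriminator D that maps a region to one logit per example (a sketch of such a D appears under Section 4.1 below):

```python
import torch
import torch.nn.functional as F

def adversarial_losses(D, real_region, fake_region):
    """Standard GAN losses with BCE on logits; `D` returns one logit per example."""
    real_logits = D(real_region)
    fake_logits = D(fake_region.detach())          # D step: do not update G
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))

    gen_logits = D(fake_region)                    # G step: try to fool D
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```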

2.3. Joint Loss
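The joint loss simply weights the two terms, L = λ_rec·L_rec + λ_adv·L_adv, with λ_rec = 0.999 and λ_adv = 0.001 as reported in the paper. Combining the two sketches above:

```python
def joint_loss(rec_loss, adv_g_loss, lambda_rec=0.999, lambda_adv=0.001):
    """L = lambda_rec * L_rec + lambda_adv * L_adv (weights from the paper)."""
    return lambda_rec * rec_loss + lambda_adv * adv_g_loss
```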

3. Region Masks

3.1. Central Region

3.2. Random Block

3.3. Random Region
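The three masking strategies in 3.1 to 3.3 differ only in how the binary drop mask is produced: a central square covering about a quarter of the image, several overlapping random blocks, or an arbitrarily shaped region (the paper takes these shapes from PASCAL VOC 2012 segmentation masks). A minimal sketch of the first two; the block count and sizes are illustrative assumptions.

```python
import numpy as np

def central_mask(size=128, frac=0.25):
    """Central square covering roughly `frac` of the pixels (1 = dropped)."""
    m = np.zeros((size, size), dtype=np.float32)
    side = int(size * np.sqrt(frac))
    start = (size - side) // 2
    m[start:start + side, start:start + side] = 1.0
    return m

def random_block_mask(size=128, n_blocks=5, max_side=32, rng=None):
    """Several overlapping random rectangles (an approximation of the
    random-block strategy; counts and sizes are assumptions)."""
    rng = rng or np.random.default_rng()
    m = np.zeros((size, size), dtype=np.float32)
    for _ in range(n_blocks):
        h, w = rng.integers(8, max_side, size=2)
        y = rng.integers(0, size - h)
        x = rng.integers(0, size - w)
        m[y:y + h, x:x + w] = 1.0
    return m

# Random region: the paper drops arbitrary shapes taken from PASCAL VOC 2012
# segmentation masks; that amounts to resizing a segmentation mask to the
# image size and using it directly as the binary drop mask.
```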

4. Two CNN Architectures

4.1. CNN for Inpainting

Context Encoder for Inpainting
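The inpainting network pairs the context encoder (generator) from Section 1 with a convolutional adversarial discriminator that looks only at the real or predicted region. A DCGAN-style sketch, assuming 3×64×64 regions; the layer sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RegionDiscriminator(nn.Module):
    """Convolutional discriminator on a 3x64x64 (real or predicted) region,
    returning one logit per example. Layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64 -> 32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32 -> 16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16 -> 8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, stride=2, padding=1), # 8 -> 4
            nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, stride=1, padding=0),   # 4 -> 1
        )

    def forward(self, region):
        return self.net(region).view(region.size(0))     # (B,) logits
```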

4.2. CNN for Feature Learning

Context Encoder for Feature Learning
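For feature learning, the encoder follows the AlexNet architecture (conv1 to pool5, giving 6×6×256 features for a 227×227 input), is trained from scratch on the inpainting pretext task, and its convolutional weights are then used to initialize AlexNet for classification, detection and semantic segmentation. A sketch of that reuse, where torchvision's AlexNet variant stands in for the paper's Caffe AlexNet:

```python
import torch
from torchvision.models import alexnet

# Pretext: AlexNet's convolutional stack (randomly initialized) serves as the
# context encoder's encoder, so the weights learned by inpainting can later be
# transferred to a standard AlexNet.
pretext_encoder = alexnet(weights=None).features          # conv1 .. pool5

with torch.no_grad():
    feats = pretext_encoder(torch.randn(1, 3, 227, 227))
print(feats.shape)  # torch.Size([1, 256, 6, 6]), fed to the channel-wise FC

# Transfer: copy the inpainting-trained conv weights into a downstream AlexNet
# that is then fine-tuned for classification/detection/segmentation.
downstream = alexnet(weights=None)
downstream.features.load_state_dict(pretext_encoder.state_dict())
```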

5. Inpainting Results

Comparison with Content-Aware Fill (a Photoshop feature based on [2]) on held-out images.
Semantic inpainting using different methods on held-out images.
Semantic inpainting accuracy on held-out images from the Paris StreetView dataset.

6. Feature Learning Results

Quantitative comparison for classification, detection, and semantic segmentation.

Reference

