Review — SimGAN: Learning from Simulated and Unsupervised Images through Adversarial Training (GAN)

Synthetic Images Become More Realistic

Simulated+Unsupervised (S+U) Learning in SimGAN

Outline

1. Overview of SimGAN
2. Adversarial Loss with Self-Regularization
3. Local Adversarial Loss
4. Updating Discriminator using a History of Refined Images
5. Experimental Results

1. Overview of SimGAN

Overview of SimGAN
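The refiner R maps a synthetic image to a refined image that looks real, while a discriminator D is trained to tell refined images from real ones. Because R only alters appearance, the simulator's annotations carry over unchanged to the refined images. A minimal PyTorch-style sketch of this idea (refiner_R, task_net, and synthetic_loader are hypothetical names):

```python
import torch

# Minimal sketch of the S+U idea (all names here are hypothetical):
# the refiner R only changes the *appearance* of a synthetic image,
# so the simulator's labels can be reused to train the estimator.
def train_task_on_refined(refiner_R, task_net, synthetic_loader,
                          optimizer, criterion):
    refiner_R.eval()  # assume R was already trained adversarially
    for synth_images, labels in synthetic_loader:
        with torch.no_grad():
            refined = refiner_R(synth_images)   # x~ = R(x)
        pred = task_net(refined)
        loss = criterion(pred, labels)          # labels come from the simulator
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```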

2. Adversarial Loss with Self-Regularization

2.1. Discriminator D
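The discriminator D_φ is a binary classifier trained with cross-entropy to separate refined images from real ones. In the paper's notation, where D_φ(·) is the probability of the input being a refined image, x̃_i = R_θ(x_i) are refined synthetic images, and y_j are unlabeled real images:

```latex
% Discriminator loss: D_phi(.) is the probability that the input is a
% refined (synthetic) image; x~_i = R_theta(x_i) are refined images and
% y_j are unlabeled real images.
\mathcal{L}_D(\phi) = -\sum_i \log D_\phi(\tilde{x}_i)
                      -\sum_j \log\bigl(1 - D_\phi(y_j)\bigr)
```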

2.2. Refiner R (Generator)
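The refiner R_θ is trained to fool D while staying close to its synthetic input, so that the simulator's annotations remain valid. Its loss combines an adversarial term with an ℓ1 self-regularization term, where ψ is a feature transform (the identity map in image space by default) and λ balances realism against annotation preservation:

```latex
% Refiner loss: an adversarial term (fool D) plus an l1
% self-regularization term that keeps x~_i close to x_i so the
% simulator's annotations remain valid.
\mathcal{L}_R(\theta) = \sum_i \Bigl[ -\log\bigl(1 - D_\phi(R_\theta(x_i))\bigr)
    + \lambda \bigl\lVert \psi(R_\theta(x_i)) - \psi(x_i) \bigr\rVert_1 \Bigr]
```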

3. Local Adversarial Loss

Illustration of local adversarial loss
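A single global discriminator score lets the refiner over-emphasize certain image features and introduce artifacts, so SimGAN instead uses a fully convolutional discriminator that outputs a w × h probability map and sums the cross-entropy loss over all local patches. A minimal PyTorch sketch, with illustrative rather than exact layer sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of a local (patch-wise) discriminator; the layer sizes are
# illustrative, not the paper's exact configuration. Because the network
# is fully convolutional, it outputs a w x h map of per-patch logits
# instead of a single real/refined score for the whole image.
class LocalDiscriminator(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 96, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(96, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                 # one logit per local patch
        )

    def forward(self, x):
        return self.net(x)                       # (batch, 1, w, h)

# The adversarial loss is cross-entropy summed over all local patches:
def local_adv_loss(patch_logits, is_refined):
    target = torch.full_like(patch_logits, 1.0 if is_refined else 0.0)
    return F.binary_cross_entropy_with_logits(patch_logits, target,
                                              reduction='sum')
```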

4. Updating Discriminator using a History of Refined Images

Illustration of using a history of refined images
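Another problem of adversarial training is that the discriminator only sees the most recent refined images, so the refiner can re-introduce artifacts the discriminator has forgotten about. SimGAN therefore keeps a buffer of previously refined images: each discriminator minibatch is half current refiner output and half history, and after each update half a batch of buffer entries is randomly replaced. A minimal sketch of such a buffer (details the text leaves open are assumptions):

```python
import random
import torch

# Sketch of the refined-image history buffer; the buffer size and the
# exact batch-splitting details are assumptions where the text leaves
# them open. Showing D old refined images prevents it from forgetting
# artifacts the refiner produced earlier in training.
class ImageHistoryBuffer:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.images = []                          # past refined images (CPU)

    def get_batch(self, current_refined):
        """D minibatch: half current refiner output, half history."""
        half = current_refined.size(0) // 2
        if len(self.images) < half:               # buffer still warming up
            return current_refined
        history = torch.stack(random.sample(self.images, half))
        history = history.to(current_refined.device)
        return torch.cat([current_refined[:half], history], dim=0)

    def update(self, current_refined):
        """Randomly replace b/2 buffer entries with new refined images."""
        half = current_refined.size(0) // 2
        for img in current_refined[:half]:
            if len(self.images) < self.buffer_size:
                self.images.append(img.detach().cpu())
            else:
                idx = random.randrange(self.buffer_size)
                self.images[idx] = img.detach().cpu()
```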

5. Experimental Results

5.1. Network Architecture

5.1.1. Eye Gaze Estimation
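For gaze estimation, the refiner takes a 55×35 grayscale eye image, convolves it with 3×3 filters producing 64 feature maps, passes it through 4 ResNet blocks (two 3×3 convolutional layers each), and maps it back to one channel with a 1×1 convolution. A PyTorch sketch under those sizes (padding, activations, and the tanh output are assumptions):

```python
import torch
import torch.nn as nn

# Sketch of the gaze refiner as described in the paper: a 3x3 convolution
# to 64 feature maps, 4 ResNet blocks of two 3x3 convolutions each, and a
# final 1x1 convolution back to one channel. Padding, activations, and the
# tanh output are my assumptions, not stated choices.
class ResnetBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        out = torch.relu(self.conv1(x))
        out = self.conv2(out)
        return torch.relu(out + x)               # residual connection

class Refiner(nn.Module):
    def __init__(self, in_ch=1, ch=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResnetBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, in_ch, 1)

    def forward(self, x):                        # x: (batch, 1, 35, 55) eye crop
        return torch.tanh(self.tail(self.blocks(self.head(x))))
```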

5.1.2. Hand Pose Estimation

5.2. Appearance-based Gaze Estimation

5.2.1. Qualitative Results

Example output of SimGAN for the UnityEyes gaze estimation dataset

5.2.2. Self-regularization in Feature Space

Self-regularization in feature space for color images
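For color images, an identity map in RGB space can over-constrain the refiner, so the paper allows ψ to be a feature transform such as the mean of the color channels. A sketch of the resulting ℓ1 term (lambda_reg is a hypothetical weight):

```python
import torch

# Sketch of self-regularization in a feature space: for color images the
# paper lets psi be a feature transform such as the mean of the color
# channels, instead of the identity map in RGB space. lambda_reg is a
# hypothetical weight.
def self_reg_loss(refined, synthetic, lambda_reg=1.0):
    psi_refined = refined.mean(dim=1, keepdim=True)   # RGB -> channel mean
    psi_synth = synthetic.mean(dim=1, keepdim=True)
    return lambda_reg * (psi_refined - psi_synth).abs().sum()
```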

5.2.3. Using a History of Refined Images for Updating the Discriminator

Using a history of refined images for updating the discriminator

5.2.4. Visual Turing Test

Results of the ‘Visual Turing test’ user study for classifying real vs refined images

5.2.5. Quantitative Results

Comparison of SimGAN to the state-of-the-art on the MPIIGaze dataset of real eyes

5.3. Hand Pose Estimation from Depth Images

5.3.1. Qualitative Results

Example refined test images for the NYU hand pose dataset

5.3.2. Quantitative Results

Comparison of a hand pose estimator trained on synthetic data, real data, and the output of SimGAN. (Results are reported at a distance of d = 5 pixels from the ground truth.)

5.3.3. Importance of Using a Local Adversarial Loss

Importance of using a local adversarial loss

Reference

[2017 CVPR] [SimGAN]
Learning from Simulated and Unsupervised Images through Adversarial Training
