Brief Review — GAN: Generative Adversarial Networks
A 2020 Overview Paper of GAN, Written by the Authors of GAN
Generative Adversarial Networks
GAN Overview, 2020 Communications of the ACM (Sik-Ho Tsang @ Medium)
This article is written by the authors of GAN.
Figure: Generative Adversarial Network (GAN) image synthesis, 2014 … 2019 [SAGAN]
- Generative Adversarial Networks (GANs) estimate the probability distribution of the training data and are able to generate new examples from that estimated distribution.
- GANs are among the most successful generative models (especially in terms of their ability to generate realistic high-resolution images), and have been applied to a wide variety of tasks.
- This is an overview paper written by the same group of authors as the original GAN paper. Thus, besides the original GAN paper from 2014, we can also cite this overview paper.
Outline
- Introduction
- GAN Preliminaries
- Recent Advances
1. Introduction
- Most current approaches to developing artificial intelligence are based primarily on machine learning.
- The most widely used and successful form of machine learning to date is supervised learning, which maps inputs to outputs.
- The most common kind of supervised learning is classification. However, the learning process itself still falls far short of human abilities.
- Many researchers today study unsupervised learning, often using generative models. In this overview paper, one particular approach to unsupervised learning via generative modeling, called generative adversarial networks (GANs), is described.
2. GAN Preliminaries
- In generative modeling, training examples x are drawn from an unknown distribution pdata(x). The goal of a generative modeling algorithm is to learn a pmodel(x) that approximates pdata(x) as closely as possible.
- The discriminator tries to predict whether its input was real or fake, while the generator's cost encourages it to generate samples that the discriminator incorrectly classifies as real, as summarized by the minimax objective below.
- The goal of the learning algorithm is to find a local Nash equilibrium: a point that is a local minimum of each player's cost with respect to that player's own parameters, so that with local moves, no player can reduce its cost further, assuming the other player's parameters do not change.
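For reference, the two costs just described define the standard minimax game from the original 2014 GAN paper; the value function below is the usual formulation and is recalled here only to make the bullets above concrete:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Here D(x) is the discriminator's estimated probability that x is real, and G(z) is the sample the generator produces from noise z drawn from a prior p_z. At the ideal equilibrium, pmodel matches pdata and the discriminator can do no better than outputting 1/2 everywhere.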
3. Recent Advances
- Besides covering the basics of GAN, the authors also discuss recent advances.
- GANs were introduced in order to create a deep implicit generative model that is able to generate true samples from the model distribution in a single generation step, without the need for the incremental generation process or approximate nature of sampling from Markov chains (a single-step sampling sketch is given at the end of this section).
- Today, the most popular approaches to generative modeling are probably GANs, variational autoencoders, and fully-visible belief nets. None of these approaches relies on Markov chains, so the reason for the interest in GANs today is not that they succeeded at their original goal of generative modeling without Markov chains, but rather that they have succeeded in generating high-quality images and have proven useful for several tasks other than straightforward generation.
- The authors mention that it is difficult to give much further specific guidance regarding the details of GANs, because GANs are such an active research area and most specific advice quickly becomes out of date.
- The above figure shows how quickly the capabilities of GANs have progressed in the years since their introduction.
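To make the single-generation-step point above concrete, below is a minimal PyTorch-style sketch of sampling from a trained GAN generator. The network architecture, latent_dim, and output shape here are hypothetical placeholders rather than anything specified in the paper; the point is only that a sample is produced by one forward pass, with no Markov chain or iterative refinement.

```python
import torch
import torch.nn as nn

latent_dim = 128  # hypothetical latent dimensionality

# Placeholder generator: in practice this would be a trained GAN generator
# (e.g., a deep convolutional network). A tiny MLP is used here only so the
# sketch stays self-contained and runnable.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, 28 * 28),
    nn.Tanh(),
)

@torch.no_grad()
def sample_images(generator, n_samples):
    # Draw latent codes z from the prior p_z (a standard Gaussian here),
    # then map them through the generator in a single forward pass.
    # No Markov chain and no iterative sampling procedure are needed.
    z = torch.randn(n_samples, latent_dim)
    return generator(z).view(n_samples, 28, 28)

samples = sample_images(generator, n_samples=16)  # 16 samples in one step
```

By contrast, a Markov chain-based sampler would have to repeat a transition step many times before a usable sample emerges, which is exactly the overhead GANs were designed to avoid.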