Topics for Further Reading

Can’t get enough of GANs? Whether you’re still getting acquainted with the foundational concepts, trying to keep up with a quickly moving field, or just looking for fun applications, we’ve put together a selection of resources with a little something for everyone.

What are GANs and how do they work?

Ian Goodfellow, widely credited with sparking the GAN revolution through his 2014 paper "Generative Adversarial Nets", discusses GANs in his NIPS 2016 tutorial.

Evaluating GAN performance

What do we know about GANs, and what have we yet to discover?

How about some non-visual examples?

Just for fun

[Interactive gallery: games to play with GANs]

A brief history of GAN development

2014
Generative Adversarial Nets

The original GAN
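
At its core is a single minimax objective: the generator G tries to fool a discriminator D, which in turn tries to tell real data from generated samples:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_\text{data}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```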


Conditional Generative Adversarial Nets

Directing generation with labeled data
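
A minimal sketch of the conditioning idea, assuming a PyTorch-style model; the layer sizes and the label-embedding approach here are illustrative choices, not prescribed by the paper:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Toy conditional generator: the class label is embedded and
    concatenated with the noise vector, so the label steers generation."""
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition on the label by concatenating its embedding with z.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

# Usage: ask for a batch of class-3 samples.
g = ConditionalGenerator()
fake = g(torch.randn(16, 100), torch.full((16,), 3))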

2015
Deep Convolutional Generative Adversarial Networks

Improvements to the original GAN architecture make DCGAN the new baseline
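
The paper's guidelines (strided convolutions for up/downsampling, batch norm, no fully connected hidden layers, Tanh output) translate directly into architecture code; a sketch of a DCGAN-style generator, assuming PyTorch, with illustrative channel counts:

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """DCGAN-style generator: fractionally strided convolutions for
    upsampling, batch norm, ReLU activations, Tanh output."""
    def __init__(self, noise_dim=100, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # 32x32 output
        )

    def forward(self, z):
        # Noise enters as a (batch, noise_dim, 1, 1) "image".
        return self.net(z.view(z.size(0), -1, 1, 1))

img = DCGANGenerator()(torch.randn(8, 100))  # -> (8, 3, 32, 32)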

2016
InfoGAN

Encoding meaningful features to make GANs interpretable
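
Concretely, InfoGAN adds a mutual-information term to the standard GAN value function V(D, G), encouraging the latent codes c to stay predictable from the generated output:

```latex
\min_G \max_D \; V_I(D, G) = V(D, G) - \lambda\, I\big(c;\, G(z, c)\big)
```

Since I(c; G(z, c)) is intractable, the paper maximizes a variational lower bound on it using an auxiliary network that shares layers with the discriminator.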


Improved Techniques for Training GANs

Methods for improving training stability and encouraging convergence
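
One of the paper's simplest tricks, one-sided label smoothing, fits in a few lines; a sketch assuming a logit-output discriminator in PyTorch:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits, smooth=0.9):
    """One-sided label smoothing (Salimans et al., 2016): real targets are
    softened to 0.9 while fake targets stay at 0, discouraging the
    discriminator from becoming overconfident."""
    real_loss = F.binary_cross_entropy_with_logits(
        real_logits, torch.full_like(real_logits, smooth))
    fake_loss = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return real_loss + fake_loss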

2017
Wasserstein GAN

An Earth Mover’s distance loss stabilizes training and mitigates mode collapse
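
A sketch of the two pieces that change, assuming a PyTorch critic with an unbounded scalar output; the clipping threshold 0.01 is the paper's default:

```python
import torch

def critic_loss(critic, real, fake):
    """WGAN critic loss: minimizing this maximizes the critic's estimate of
    the Earth Mover's distance. The critic outputs unbounded scores,
    not probabilities."""
    return critic(fake).mean() - critic(real).mean()

def clip_weights(critic, c=0.01):
    """Crude Lipschitz enforcement from the original paper: clamp every
    parameter to [-c, c] after each critic update."""
    for p in critic.parameters():
        p.data.clamp_(-c, c)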


Progressive Growing of GANs

Adding layers as training progresses enables modeling of increasingly fine details
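
New layers are not switched on abruptly; they are faded in with a linear blend. A sketch of that blend, with function and argument names chosen here for illustration:

```python
import torch
import torch.nn.functional as F

def fade_in(old_rgb, new_rgb, alpha):
    """Progressive-growing fade-in: blend the upsampled output of the old,
    lower-resolution stage with the new stage's output. alpha ramps from
    0 (old path only) to 1 (new path only) over the course of training."""
    upsampled = F.interpolate(old_rgb, scale_factor=2, mode="nearest")
    return (1.0 - alpha) * upsampled + alpha * new_rgb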

2018
Self-Attention GAN

Incorporating attention improves image generation quality
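
A simplified sketch of a SAGAN-style self-attention block in PyTorch; the C/8 channel reduction follows the paper, while other details are pared down for clarity:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Self-attention over a feature map: every spatial position can attend
    to every other, so long-range structure is not limited by the
    convolutional receptive field."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as identity map

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2)                          # (b, c//8, hw)
        k = self.key(x).flatten(2)                            # (b, c//8, hw)
        v = self.value(x).flatten(2)                          # (b, c,    hw)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)   # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                           # residual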


BigGAN

Sometimes bigger really is better


StyleGAN

A style-based generator upgrades progressive generation with per-layer feature control
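
Much of that control comes from adaptive instance normalization (AdaIN): at each layer, a learned style y = (y_s, y_b) rescales and shifts each normalized feature map x_i:

```latex
\mathrm{AdaIN}(x_i, y) = y_{s,i}\,\frac{x_i - \mu(x_i)}{\sigma(x_i)} + y_{b,i}
```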

2019
Lipschitz GANs

A Lipschitz constraint on the discriminator guarantees the existence and uniqueness of the optimal discriminative function
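
For reference, the constraint in question: a discriminative function f is k-Lipschitz when

```latex
\| f(x_1) - f(x_2) \| \le k\, \| x_1 - x_2 \|
```

for all inputs x_1 and x_2, bounding how fast the discriminator's output can change between nearby points.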