Generating Photorealistic Images of Fake Celebrities with Artificial Intelligence – NVIDIA Developer News Center


  • Researchers from NVIDIA recently published a paper detailing their new methodology for generative adversarial networks (GANs) that generated photorealistic pictures of fake celebrities.
  • Rather than train a single neural network to recognize pictures, researchers train two competing networks.
  • “The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses,” explained the researchers in their paper Progressive Growing of GANs for Improved Quality, Stability and Variation.
  • Since the publicly available CelebFaces Attributes (CelebA) training dataset varied in resolution and visual quality, and was insufficient for high output resolution, the researchers generated a higher-quality version of the dataset consisting of 30,000 images at 1024 x 1024 resolution.
  • Generating convincing, realistic images with GANs is within reach, and the researchers plan to use TensorFlow and multiple GPUs for the next part of the work.

Researchers from NVIDIA recently published a paper detailing their new methodology for generative adversarial networks (GANs) that generated photorealistic pictures of fake celebrities.

One of the hottest topics in deep learning is GANs, which have the potential to create systems that learn more with less help from humans. Rather than train a single neural network to recognize pictures, researchers train two competing networks. The sparring networks learn from each other. As one works hard to find fake images, for example, the other gets better at creating fakes that are indistinguishable from the originals.
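The adversarial setup described above can be sketched with a deliberately tiny toy model. This is an illustrative 1-D example, not the paper's architecture: both networks are single linear units, the "real" data is a Gaussian, and the update rules are hand-derived gradients of the standard GAN objectives (with the common non-saturating generator loss).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): the generator maps noise to 1-D samples,
# the discriminator scores a sample as real (near 1) or fake (near 0).
g_w, g_b = rng.normal(), 0.0   # generator parameters
d_w, d_b = rng.normal(), 0.0   # discriminator parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminate(x):
    return sigmoid(d_w * x + d_b)

lr = 0.05
for step in range(500):
    real = rng.normal(4.0, 1.0, size=32)   # "real" data drawn from N(4, 1)
    z = rng.normal(size=32)                # noise fed to the generator
    fake = g_w * z + g_b

    # Discriminator ascends log D(real) + log(1 - D(fake)):
    # push real scores toward 1 and fake scores toward 0.
    dr, df = discriminate(real), discriminate(fake)
    d_w += lr * (np.mean((1 - dr) * real) + np.mean(-df * fake))
    d_b += lr * (np.mean(1 - dr) + np.mean(-df))

    # Generator ascends log D(fake) (non-saturating loss):
    # make fakes that the discriminator scores as real.
    df = discriminate(fake)
    g_w += lr * np.mean((1 - df) * d_w * z)
    g_b += lr * np.mean((1 - df) * d_w)

# After training, the generator's output mean g_b has drifted toward the
# real data's mean, because that is where the discriminator scores highest.
```

The two parameter updates are the "sparring": each step the discriminator gets slightly better at telling the two distributions apart, which in turn gives the generator a gradient pointing toward more convincing fakes.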

“The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses,” explained the researchers in their paper Progressive Growing of GANs for Improved Quality, Stability and Variation. “This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality.”
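The quoted idea has two mechanical pieces that can be sketched in a few lines: a schedule that doubles the working resolution as layer pairs are added, and a fade-in that blends each new high-resolution layer with an upsampled copy of the previous output while the new layer stabilizes. The function names below are illustrative, not from the paper's code.

```python
def progressive_schedule(start=4, target=1024):
    """Yield the training resolutions: 4x4 up to the target, doubling each
    time a new layer pair is added to the generator and discriminator."""
    res = start
    while res <= target:
        yield res
        res *= 2

def faded_output(upsampled_old, new_layer_out, alpha):
    """Blend the upsampled old output with the new layer's output.
    alpha ramps from 0 to 1 so the new layer is introduced smoothly."""
    return (1.0 - alpha) * upsampled_old + alpha * new_layer_out

print(list(progressive_schedule()))
# [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Training at low resolution first lets the networks learn large-scale structure cheaply; the fade-in avoids the shock of suddenly routing gradients through untrained layers, which is what "greatly stabilizes" the training.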

Since the publicly available CelebFaces Attributes (CelebA) training dataset varied in resolution and visual quality, and was insufficient for high output resolution, the researchers generated a higher-quality version of the dataset consisting of 30,000 images at 1024 x 1024 resolution.
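As a rough stand-in for that preprocessing step, the sketch below center-crops an image to a square and resizes it to 1024 x 1024 with nearest-neighbor indexing. The paper's actual CelebA-HQ pipeline is far more involved (alignment, artifact removal, super-resolution); this only shows the basic reshaping, using the 218 x 178 size of the aligned CelebA images.

```python
import numpy as np

def center_crop_and_resize(img, out=1024):
    """Center-crop an (H, W, C) image to a square, then nearest-neighbor
    resize it to (out, out, C). Illustrative only, not the paper's pipeline."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = img[top:top + s, left:left + s]
    idx = np.arange(out) * s // out          # nearest-neighbor source indices
    return square[idx][:, idx]

# A dummy image at the aligned CelebA resolution of 218 x 178 pixels.
img = np.random.default_rng(1).integers(0, 256, (218, 178, 3), dtype=np.uint8)
print(center_crop_and_resize(img).shape)  # (1024, 1024, 3)
```

Note that naively upsampling 178-pixel crops cannot add real detail, which is exactly why the authors built a higher-quality dataset rather than relying on interpolation.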

Using a single Tesla P100 GPU, CUDA, and cuDNN with Theano and Lasagne, the team trained their network for 20 days, after which they no longer observed qualitative differences between…