
pytorch-CycleGAN-and-pix2pix – Image-to-image translation in PyTorch (e.g. horse2zebra, edges2cats, and more)


This is our ongoing PyTorch implementation for both unpaired and paired image-to-image translation.

The code was written by Jun-Yan Zhu and Taesung Park.

Check out the original CycleGAN Torch and pix2pix Torch code if you would like to reproduce the exact same results as in the papers.

The EdgesCats demo and pix2pix-tensorflow were written by Christopher Hesse.

If you use this code for your research, please cite:

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros

In arXiv, 2017. (* equal contributions)

Image-to-Image Translation with Conditional Adversarial Networks

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros

In CVPR 2017.

Linux or OSX.

Python 2 or Python 3.

Install the Python library dominate.
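A typical installation with pip; dominate is named above, and visdom is included here as an assumption, since the display server mentioned further down depends on it:

```bash
# HTML-generation dependency named in the README, plus the (assumed) visualization server
pip install dominate
pip install visdom
```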

Clone this repo:
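For example, using the repository's GitHub URL:

```bash
git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
cd pytorch-CycleGAN-and-pix2pix
```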

Download a CycleGAN dataset (e.g. maps):
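A sketch of the download step, assuming the repository's datasets/download_cyclegan_dataset.sh helper (script names may differ between versions):

```bash
bash ./datasets/download_cyclegan_dataset.sh maps
```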

Train a model:
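A typical training invocation for the maps dataset downloaded above; the flag names (--dataroot, --name, --model) are assumed to match the repository's train.py and may differ between versions:

```bash
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan
```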

Test the model:
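A matching test invocation, again assuming the flags supported by the repository's test.py:

```bash
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan --phase test
```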

Download a pix2pix dataset (e.g. facades):
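A sketch of the download step, assuming the datasets/download_pix2pix_dataset.sh helper:

```bash
bash ./datasets/download_pix2pix_dataset.sh facades
```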

Train a model:
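A typical pix2pix training invocation for the facades dataset; --which_direction BtoA (translating label maps to photos) is an assumption about the flag name, which newer versions of the code call --direction:

```bash
python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --which_direction BtoA
```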

Test the model:
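And a matching test invocation, under the same assumptions about flag names:

```bash
python test.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --which_direction BtoA --phase test
```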

The test results will be saved to an HTML file in the results directory.

See options/train_options.py, options/test_options.py, and options/base_options.py for the full lists of training and test flags.

Set the --gpu_ids flag (e.g. --gpu_ids 0,1,2) for multi-GPU mode, or --gpu_ids -1 for CPU mode.

During training you can run a visdom display server to view results and loss plots in the browser; see the sketch below for how to start the server.
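A minimal sketch of starting the server, assuming the visdom package is installed:

```bash
python -m visdom.server
```

By default visdom serves at http://localhost:8097, which you can open in a browser to watch training progress.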

Download the CycleGAN datasets using the following script:
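A sketch of the invocation, assuming the datasets/download_cyclegan_dataset.sh helper, where dataset_name is one of the names listed below:

```bash
bash ./datasets/download_cyclegan_dataset.sh dataset_name
```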

facades: 400 images from the CMP Facades dataset.

cityscapes: 2975 images from the Cityscapes training set.

maps: 1096 training images scraped from Google Maps.

summer2winter_yosemite: 1273 summer Yosemite images and 854 winter Yosemite images were downloaded using the Flickr API. See more details in our paper.

iphone2dslr_flower: both classes of images were downloaded from Flickr. The training set size of each class is iPhone: 1813, DSLR: 3316. See more details in our paper.

To train a model on your own datasets, you need to create a data folder with two subdirectories trainA and trainB that contain images from domain A and B. You can also create subdirectories testA and testB if you have test data.

You should not expect our method to work on just any combination of two random datasets; in our experiments it works better when the two domains share similar visual content. For example, zebras<->horses achieves compelling results while cats<->dogs completely fails.

Download the pix2pix datasets using the following script:
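Analogously, a sketch assuming the datasets/download_pix2pix_dataset.sh helper:

```bash
bash ./datasets/download_pix2pix_dataset.sh dataset_name
```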

facades: 400 images from the CMP Facades dataset.

cityscapes: 2975 images from the Cityscapes training set.

maps: 1096 training images scraped from Google Maps.

We provide a Python script to generate pix2pix training data in the form of pairs of images {A,B}, where A and B are two different depictions of the same underlying scene. For example, these might be pairs {label map, photo} or {bw image, color image}. Then we can learn to translate A to B or B to A.

Create a data folder with two subdirectories A and B; each should contain its own train, val, and test subfolders (and any other splits, etc.). Corresponding images in a pair {A,B} must be the same size and have the same filename.
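For illustration, with a hypothetical root folder /path/to/data the expected layout would look like this:

```
/path/to/data/
├── A/
│   ├── train/
│   ├── val/
│   └── test/
└── B/
    ├── train/
    ├── val/
    └── test/
```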

Once the data is formatted this way, call:
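A sketch of the call, assuming the repository's datasets/combine_A_and_B.py script and the hypothetical /path/to/data layout above:

```bash
python datasets/combine_A_and_B.py --fold_A /path/to/data/A --fold_B /path/to/data/B --fold_AB /path/to/data
```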

This will combine each pair of images (A,B) into a single image file, ready for training.

TODO: add reflection and other padding layers.

TODO: fully test CPU mode and multi-GPU mode.

CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

pix2pix: Image-to-image translation with conditional adversarial nets

iGAN: Interactive Image Generation via Generative Adversarial Networks

If you love cats, and love reading cool graphics, vision, and learning papers, please check out the Cat Paper Collection:

[Github] [Webpage]

Code is inspired by pytorch-DCGAN.
