Sketch-RNN: A Generative Model for Vector Drawings


Before jumping in on any code examples, please first set up your Magenta environment.

Examples of vector images produced by this generative model.

This repo contains the TensorFlow code for sketch-rnn, the recurrent neural network model described in Teaching Machines to Draw and A Neural Representation of Sketch Drawings.

Once the KL loss term falls below the level set by the kl_tolerance parameter, we will stop optimizing for this term.

For small to medium sized datasets, dropout and data augmentation are very useful techniques to avoid overfitting. We have provided options for input dropout, output dropout, and recurrent dropout without memory loss. In practice, we only use recurrent dropout, and usually set the keep probability to between 65% and 90%, depending on the dataset. Layer Normalization and recurrent dropout can be used together, forming a powerful combination for training recurrent neural nets on a small dataset.
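To make the idea concrete, here is a minimal NumPy sketch, our own illustration rather than Magenta's implementation, of recurrent dropout without memory loss: dropout is applied only to the candidate cell update, so the LSTM memory cell is never erased directly.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step_recurrent_dropout(x, h_prev, c_prev, W, b,
                                keep_prob=0.90, training=True):
    """One LSTM step with recurrent dropout without memory loss.

    W maps the concatenated [x, h_prev] to the four gate pre-activations.
    Dropout touches only the candidate update g, never the cell state c.
    """
    z = np.concatenate([x, h_prev], axis=-1) @ W + b
    i, f, o, g = np.split(z, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    if training:
        # Inverted dropout: scale by 1/keep_prob so the expected value
        # matches the test-time behavior.
        mask = (np.random.rand(*g.shape) < keep_prob) / keep_prob
        g = g * mask
    c = f * c_prev + i * g   # the memory cell is never zeroed directly
    h = o * np.tanh(c)
    return h, c
```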

We also provide simple data augmentation options that randomly scale the offset values of each example, and that randomly drop points from a line, while still maintaining a similar-looking vector image. This type of data augmentation is very powerful when used on small datasets, and is unique to vector drawings: it is difficult to drop out random characters or notes in text or MIDI data, and it is not possible to drop out random pixels in pixel image data without causing large visual differences. We usually set both data augmentation parameters to 10% to 20%. Since a human audience sees virtually no difference between an augmented example and a normal example, we apply both data augmentation techniques regardless of the size of the training dataset.
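Here is a rough sketch of both augmentations on stroke-3 arrays, where each row is (∆x, ∆y, pen). This is our own simplified illustration with made-up helper names, not Magenta's exact augmentation code; the two proportions correspond to the random_scale_factor and augment_stroke_prob parameters listed further below.

```python
import numpy as np

def random_scale(strokes, scale_factor=0.15):
    """Scale dx and dy independently by a random factor in [1-s, 1+s]."""
    sx = 1.0 + np.random.uniform(-scale_factor, scale_factor)
    sy = 1.0 + np.random.uniform(-scale_factor, scale_factor)
    out = strokes.astype(float).copy()
    out[:, 0] *= sx
    out[:, 1] *= sy
    return out

def drop_random_points(strokes, drop_prob=0.10):
    """Randomly remove intermediate points, folding their offsets into the
    next kept point so the overall shape of the drawing is preserved."""
    result = []
    carry = np.zeros(2)
    for dx, dy, pen in strokes:
        # Only drop points in the middle of a stroke (pen stays down).
        if pen == 0 and result and np.random.rand() < drop_prob:
            carry += (dx, dy)
            continue
        result.append([dx + carry[0], dy + carry[1], pen])
        carry[:] = 0
    return np.array(result)
```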

Using dropout and data augmentation effectively will avoid overfitting to a small training set.

We have provided a pre-processed version of the aaron_sheep dataset, and the model will use this lightweight dataset by default.

You can also use TensorBoard for viewing training curves for the various losses on train/validation/test datasets.

Here is a list of full options for the model, along with the default settings:

```
data_set='aaron_sheep.npz',    # Our dataset.
num_steps=10000000,            # Total number of training steps. Keep large.
save_every=500,                # Number of batches per checkpoint creation.
dec_rnn_size=512,              # Size of decoder.
dec_model='lstm',              # Decoder: lstm, layer_norm or hyper.
enc_rnn_size=256,              # Size of encoder.
enc_model='lstm',              # Encoder: lstm, layer_norm or hyper.
z_size=128,                    # Size of latent vector z. Recommend 32, 64 or 128.
kl_weight=0.5,                 # KL weight of loss equation. Recommend 0.5 or 1.0.
kl_weight_start=0.01,          # KL start weight when annealing.
kl_tolerance=0.2,              # Level of KL loss at which to stop optimizing for KL.
batch_size=100,                # Minibatch size. Recommend leaving at 100.
grad_clip=1.0,                 # Gradient clipping. Recommend leaving at 1.0.
num_mixture=20,                # Number of mixtures in Gaussian mixture model.
learning_rate=0.001,           # Learning rate.
decay_rate=0.9999,             # Learning rate decay per minibatch.
kl_decay_rate=0.99995,         # KL annealing decay rate per minibatch.
min_learning_rate=0.00001,     # Minimum learning rate.
use_recurrent_dropout=True,    # Recurrent Dropout without Memory Loss. Recommended.
recurrent_dropout_prob=0.90,   # Probability of recurrent dropout keep.
use_input_dropout=False,       # Input dropout. Recommend leaving False.
input_dropout_prob=0.90,       # Probability of input dropout keep.
use_output_dropout=False,      # Output dropout. Recommend leaving False.
output_dropout_prob=0.90,      # Probability of output dropout keep.
random_scale_factor=0.15,      # Random scaling data augmentation proportion.
augment_stroke_prob=0.10,      # Point dropping augmentation proportion.
conditional=True,              # If False, use decoder-only model.
```

For smaller datasets, we found that the layer_norm model (set via enc_model and dec_model) works best.

We have tested this model on TensorFlow 1.0 and 1.1 for Python 2.7.

Due to size limitations, this repo does not contain any datasets.

Each dataset is stored as a separate compressed .npz file, and contains training/validation/test set sizes of 70000/2500/2500 examples.

The datasets are available as .npz files in this sub directory.

You can download these datasets if you wish to use them locally. As mentioned before, recurrent dropout and data augmentation should be used when training models on small datasets to avoid overfitting.
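If you download a dataset file, a quick way to inspect it locally might look like the following sketch. The key names ('train', 'valid', 'test') and the pickle/encoding flags are assumptions about the file layout, not documented guarantees.

```python
import numpy as np

# Load a downloaded sketch-rnn dataset file (key names are assumptions).
data = np.load('aaron_sheep.npz', allow_pickle=True, encoding='latin1')
train_set, valid_set, test_set = data['train'], data['valid'], data['test']

print(len(train_set), len(valid_set), len(test_set))  # set sizes
print(train_set[0][:5])  # first five (dx, dy, pen) rows of one sketch
```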

Please create your own interesting datasets and train this algorithm on them! Getting your hands dirty and creating new datasets is part of the fun. Why settle for existing pre-packaged datasets when you are potentially sitting on an interesting collection of vector line drawings? In our experiments, a dataset of a few thousand examples was sufficient to produce meaningful results. If you want to create your own dataset, you must create three lists of examples, for the training/validation/test sets, to avoid overfitting to the training set. Here, we describe the format of the dataset files the model expects to see.

Each example in the dataset is stored as a list of coordinate offsets: ∆x, ∆y, and a binary value representing whether the pen is lifted away from the paper. We refer to this format as stroke-3. Below is an example sketch of a turtle using this format:
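The turtle example itself is a figure in the original page and is not reproduced here. As a stand-in, here is a minimal, made-up stroke-3 sequence (not the turtle) tracing a simple square:

```python
import numpy as np

# Made-up stroke-3 data: each row is (dx, dy, pen), where pen=1 means
# the pen is lifted from the paper after reaching this point.
square = np.array([
    [ 10,   0, 0],   # pen down, draw right
    [  0,  10, 0],   # draw up
    [-10,   0, 0],   # draw left
    [  0, -10, 1],   # draw down to close the square, then lift the pen
])
```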

The data loader will calculate the standard deviation of the offset values in the training set and normalize accordingly before training.

As listed above, we recommend keeping the batch_size at the default value of 100. Deviate at your own peril.

When building your own dataset, we recommend first simplifying each drawing with the Ramer-Douglas-Peucker (RDP) algorithm, using an epsilon parameter of 2.0. We suggest you build a dataset where the maximum sequence length is less than 250.

If you have a large set of simple SVG images, there are libraries available to convert subsets of SVGs into line segments; you can then apply RDP on the line segments before converting the data to stroke-3 format.
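For example, here is a hedged sketch using the third-party rdp package (an assumption on our part; any RDP implementation will do) to simplify one polyline and convert it to stroke-3:

```python
import numpy as np
from rdp import rdp  # third-party package: pip install rdp

# A made-up polyline of absolute (x, y) points, e.g. sampled from an SVG path.
points = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, -0.1],
                   [3.0, 5.0], [4.0, 6.1], [5.0, 7.0]])

# Simplify with epsilon=2.0, the value recommended above.
simplified = rdp(points, epsilon=2.0)

# Convert the simplified absolute points of this single stroke to stroke-3.
offsets = np.diff(simplified, axis=0)   # (dx, dy) offsets between points
pen = np.zeros((len(offsets), 1))
pen[-1] = 1                             # lift the pen at the end of the stroke
stroke3 = np.hstack([offsets, pen])
print(stroke3)
```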

We have provided pre-trained models for the aaron_sheep dataset, for both conditional and unconditional training modes, using vanilla LSTM cells and LSTM cells with Layer Normalization. These models will be downloaded by running the Jupyter Notebook. They are stored in:

In addition, we have provided pre-trained models for selected QuickDraw datasets:

Let’s get the model to interpolate between a cat and a bus!

You can use the temperature parameter to control the level of uncertainty.
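As a sketch of what the interpolation might look like in code, assuming hypothetical encode and decode helpers like the ones defined in the accompanying Jupyter Notebook (these names are stand-ins, not a documented API):

```python
import numpy as np

# `encode` and `decode` are hypothetical stand-ins for the helpers defined
# in the accompanying notebook; they are not a documented API.
z_cat = encode(cat_strokes)   # latent vector z for a cat sketch
z_bus = encode(bus_strokes)   # latent vector z for a bus sketch

for t in np.linspace(0.0, 1.0, num=10):
    z = (1.0 - t) * z_cat + t * z_bus      # linear interpolation in latent space
    strokes = decode(z, temperature=0.25)  # lower temperature -> less randomness
```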

If you find this project useful for academic purposes, please cite it as:
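The BibTeX entry itself did not survive in this copy of the page; an entry reconstructed from the arXiv listing of the paper named above (arXiv:1704.03477) would look something like:

```bibtex
@article{sketchrnn2017,
  author  = {David Ha and Douglas Eck},
  title   = {A Neural Representation of Sketch Drawings},
  journal = {arXiv preprint arXiv:1704.03477},
  year    = {2017},
}
```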
