magenta/ at master · tensorflow/magenta · GitHub

Sketch-RNN has been open sourced as part of the Magenta project.

  • For small to medium-sized datasets, dropout and data augmentation are very useful techniques for avoiding overfitting.
  • This type of data augmentation is very powerful when used on small datasets, and it is unique to vector drawings: it is difficult to drop out random characters or notes in text or MIDI data, and dropping out random pixels from image data causes large visual differences.
  • If there is virtually no difference for a human audience when they compare an augmented example to a normal example, we apply both data augmentation techniques regardless of the size of the training dataset.
  • As mentioned before, recurrent dropout and data augmentation should be used when training models on small datasets to avoid overfitting.
  • If you want to create your own dataset, you must create three lists of examples for training/validation/test sets, to avoid overfitting to the training set.
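The point-dropping augmentation described above can be sketched as follows. This is a hypothetical illustration of the idea (randomly dropping points from a stroke-3 sequence of `(dx, dy, pen_lifted)` offsets while merging the dropped offsets into the next point, so the overall drawing is preserved), not the exact implementation in the magenta repository; the function name `augment_strokes` and the `drop_prob` parameter are assumptions for this sketch.

```python
import random

def augment_strokes(strokes, drop_prob=0.1):
    """Randomly drop points from a stroke-3 sequence.

    Each point is (dx, dy, pen_lifted). A dropped point's offsets are
    merged into the next kept point, so the drawing stays connected and
    the total displacement is unchanged. Pen-up points (pen_lifted == 1)
    are never dropped, which preserves stroke boundaries.

    Hypothetical sketch of the augmentation the text describes; not the
    exact magenta implementation.
    """
    result = []
    carry = None  # offsets accumulated from dropped points
    for dx, dy, pen in strokes:
        if carry is not None:
            dx += carry[0]
            dy += carry[1]
            carry = None
        # only consider dropping pen-down points, and never the first point
        if pen == 0 and result and random.random() < drop_prob:
            carry = (dx, dy)
        else:
            result.append((dx, dy, pen))
    if carry is not None:
        # fold any leftover offsets into the final kept point
        dx, dy, pen = result[-1]
        result[-1] = (dx + carry[0], dy + carry[1], pen)
    return result
```

Because dropped offsets are merged rather than discarded, the augmented sequence ends at the same canvas position as the original, which is why a human audience sees virtually no difference.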
