Vertex.AI

Vertex.AI - Announcing PlaidML: Open Source #DeepLearning for Every Platform

  • Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet.
  • We wrote about this in a previous post comparing PlaidML inference throughput to TensorFlow on cuDNN.
  • After updating to Keras 2.0.8, cuDNN 6, and TensorFlow 1.3, TensorFlow came within about 4% of PlaidML’s throughput. That’s a great improvement, and we continue to use TensorFlow as our benchmark in areas where PlaidML is less mature.
  • Briefly, the original post lists the system requirements and the handful of commands needed to install PlaidML and run a quick benchmark. By default, plaidbench will benchmark 1024 inferences at batch size 1 using Keras on PlaidML and print a throughput summary…
  • Then run plaidbench with the “no-plaid” option to get the corresponding TensorFlow numbers. PlaidML can take longer on the first run (the first execution includes kernel compilation), but tends to outperform TensorFlow + cuDNN, even on the latest NVIDIA hardware (in this case by about 14%). A rough Python sketch of this measurement follows this list.
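The exact commands and sample outputs are elided in the excerpt above. As a rough stand-in, here is a minimal Python sketch of the measurement plaidbench performs, assuming the `plaidml-keras` package is installed and `plaidml-setup` has been run; the choice of MobileNet and random input mirrors the defaults described above but is otherwise illustrative:

```python
# Minimal sketch of a plaidbench-style measurement: 1024 single-image
# inferences through MobileNet, using Keras on the PlaidML backend.
# Assumes `pip install plaidml-keras keras` and a completed `plaidml-setup`.
import time

import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # must run before the first keras import

from keras.applications.mobilenet import MobileNet

model = MobileNet(weights=None)  # topology only; weights don't affect throughput
batch = np.random.rand(1, 224, 224, 3).astype("float32")

model.predict(batch)             # warm-up: the first run includes kernel compilation

start = time.time()
for _ in range(1024):            # plaidbench default: 1024 inferences, batch size 1
    model.predict(batch)
elapsed = time.time() - start
print("%.2f ms per inference" % (elapsed / 1024 * 1000))
```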

We’re pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we’re building PlaidML to help make that a reality. We’re starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel. Additionally, we’re including support for running the widely popular Keras framework on top of Plaid to allow existing code and tutorials to run unchanged.
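To make the “existing code runs unchanged” claim concrete, here is a minimal sketch, again assuming the `plaidml-keras` package: apart from the two backend-installation lines at the top, everything below is ordinary Keras, and the toy model and random data are purely illustrative.

```python
import numpy as np
import plaidml.keras
plaidml.keras.install_backend()  # the only PlaidML-specific lines; must precede keras imports

from keras.models import Sequential
from keras.layers import Dense

# An ordinary Keras model; nothing below knows about PlaidML.
model = Sequential([
    Dense(64, activation="relu", input_shape=(32,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")

x = np.random.rand(128, 32).astype("float32")   # illustrative random data
y = np.eye(10)[np.random.randint(0, 10, 128)]   # one-hot random labels
model.fit(x, y, epochs=1, batch_size=16)        # runs on the GPU chosen in plaidml-setup
```

Setting the `KERAS_BACKEND` environment variable to `plaidml.keras.backend` should achieve the same effect without touching the script.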
Continue reading “Vertex.AI”

Taxonomy of Methods for Deep Meta Learning

Taxonomy of Methods for Deep Meta Learning #NeuralNetworks #DeepLearning

  • A recent paper, “Evolving Deep Neural Networks,” provides a comprehensive list of the global parameters typically used in conventional search approaches (e.g., learning rate), as well as additional hyperparameters that describe the architecture of the deep learning network itself.
  • Two recent papers submitted to ICLR 2017 explore the use of reinforcement learning to learn new kinds of deep learning architectures (“Designing Neural Network Architectures using Reinforcement Learning” and “Neural Architecture Search with Reinforcement Learning”).
  • The first paper describes the use of reinforcement learning (Q-learning) to discover CNN architectures (you can find some of their generated CNNs in Caffe), and details the different parameters sampled by the MetaQNN algorithm; a toy Q-learning sketch appears after this list.

    The second paper (Neural Architecture Search) uses Reinforcement Learning (RL) to train an architecture-generator LSTM that produces descriptions of new DL architectures in a simple string language.

  • All of the above approaches employ different search mechanisms (e.g., grid search, Gaussian processes, evolution, Q-learning, policy gradients) to discover better configurations among the many generated architectures; a generic version of this outer loop is sketched after this list.
  • My previous post, “A Language Driven Approach to Deep Learning Training,” offers a glimpse of a DSL-driven approach and presents a quite general prescription.
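As a toy illustration of the Q-learning idea (a heavily simplified stand-in, not the MetaQNN algorithm itself), the sketch below samples a sequence of layer types epsilon-greedily and nudges a Q-table toward a stand-in reward; the layer vocabulary, depth limit, and `evaluate` stub are all hypothetical:

```python
import random
from collections import defaultdict

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "fc", "terminate"]
MAX_DEPTH = 6
Q = defaultdict(float)  # Q[(depth, layer)] -> estimated value of picking `layer` at `depth`

def sample_architecture(epsilon):
    """Epsilon-greedy, layer-by-layer sampling of a CNN description."""
    arch = []
    for depth in range(MAX_DEPTH):
        if random.random() < epsilon:
            layer = random.choice(LAYER_CHOICES)                     # explore
        else:
            layer = max(LAYER_CHOICES, key=lambda a: Q[(depth, a)])  # exploit
        arch.append(layer)
        if layer == "terminate":
            break
    return arch

def evaluate(arch):
    """Stand-in for training the sampled CNN and returning validation accuracy."""
    return random.random()

for episode in range(100):
    epsilon = max(0.1, 1.0 - episode / 100.0)  # anneal exploration over time
    arch = sample_architecture(epsilon)
    reward = evaluate(arch)
    for depth, layer in enumerate(arch):       # constant-step update toward the reward
        Q[(depth, layer)] += 0.1 * (reward - Q[(depth, layer)])
```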
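Whatever the proposal mechanism, the outer loop is the same: sample a configuration, score it with an inner training run, keep the best. Here is a minimal, hypothetical sketch using plain random search; grid search, evolution, or an RL controller would replace `sample` without changing the loop, and `evaluate` again stands in for actual training:

```python
import random

# Hypothetical search space of global parameters, in the spirit of the
# lists in "Evolving Deep Neural Networks".
SEARCH_SPACE = {
    "learning_rate": [1e-1, 1e-2, 1e-3],
    "num_layers": [2, 4, 8],
    "filters_per_layer": [32, 64, 128],
}

def sample(space):
    """Proposal mechanism: random search over the space."""
    return {name: random.choice(values) for name, values in space.items()}

def evaluate(config):
    """Stand-in for an inner training run returning validation accuracy."""
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                 # budget of candidate configurations
    config = sample(SEARCH_SPACE)
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best:", best_config, "score: %.3f" % best_score)
```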


This post discusses a variety of contemporary Deep Meta-Learning methods, in which meta-data is used to generate and evaluate candidate architectures. Current meta-learning capabilities involve either search over architectures or networks inside networks.

Continue reading “Taxonomy of Methods for Deep Meta Learning”