Vertex.AI

Vertex.AI - Announcing PlaidML: Open Source #DeepLearning for Every Platform

  • Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet.
  • We wrote about this in a previous post comparing PlaidML inference throughput to TensorFlow on cuDNN.
  • After updating to Keras 2.0.8, cuDNN 6, and TensorFlow 1.3, TensorFlow comes within about 4% of PlaidML’s throughput. That’s a great improvement, and we continue to use TensorFlow as our benchmark for other areas where PlaidML is less mature.
  • Briefly, the post covers the system requirements, the handful of commands needed to install PlaidML and run a quick benchmark (see the sketch after this list), and sample output: by default, plaidbench benchmarks 1024 inferences at batch size 1 using Keras on PlaidML and prints a result similar to the one shown in the post. …
  • Then run plaidbench with the “no-plaid” option to benchmark the same workload on the TensorFlow backend; the quoted output is elided from this excerpt. PlaidML can take longer to execute on the first run, but tends to outperform TensorFlow + cuDNN, even on the latest NVIDIA hardware (in this case by about 14%).
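
The exact plaidbench commands and sample output from the original post are elided in this excerpt, so here is a minimal Python sketch of the measurement it describes: 1024 single-image inferences with a Keras MobileNet running on the PlaidML backend. The plaidml.keras.install_backend() call and the MobileNet constructor come from plaidml-keras and Keras 2.0.x; the random-weight model and the timing loop are illustrative assumptions, not the plaidbench implementation.

    import time
    import numpy as np

    import plaidml.keras
    plaidml.keras.install_backend()  # swap PlaidML in; must run before importing keras

    from keras.applications.mobilenet import MobileNet

    # Random weights are fine for a throughput measurement.
    model = MobileNet(weights=None)
    batch = np.random.rand(1, 224, 224, 3).astype("float32")

    model.predict(batch)  # warm-up run, where kernel compilation happens
    start = time.time()
    for _ in range(1024):
        model.predict(batch)
    print("%.1f inferences/sec" % (1024 / (time.time() - start)))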

We’re pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we’re building PlaidML to help make that a reality. We’re starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel. Additionally, we’re including support for running the widely popular Keras framework on top of Plaid to allow existing code and tutorials to run unchanged.
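
As a small illustration of that last point, the backend switch can also be made through Keras’s KERAS_BACKEND environment variable (the plaidml-keras package registers plaidml.keras.backend for this purpose), after which an ordinary Keras script runs without modification. A minimal sketch, with a toy model standing in for real code:

    import os
    os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"  # must be set before keras is imported

    import numpy as np
    from keras.layers import Dense
    from keras.models import Sequential

    # Plain Keras from here on; PlaidML handles the actual computation.
    model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
    model.compile(optimizer="sgd", loss="categorical_crossentropy")
    print(model.predict(np.random.rand(1, 784).astype("float32")).shape)  # (1, 10)
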
Continue reading “Vertex.AI”

The Good, Bad, & Ugly of TensorFlow

The Good, Bad, and Ugly of #TensorFlow. #BigData #DeepLearning #MachineLearning #AI

  • If you are deploying a model to a cloud environment, you want to know that your model can execute on the hardware available to it, without unpredictable interactions with other code that may access the same hardware.
  • For example, the Udacity tutorials and the RNN tutorial using Penn TreeBank data to build a language model are very illustrative, thanks to their simplicity.
  • For me, holding mental context for a new framework and the model I’m building to solve a hard problem is already pretty taxing, so it can be really helpful to inspect a totally different representation of that model; the TensorBoard graph visualization is great for this (a minimal logging sketch follows this list).
  • But good programmers know it is much harder to write code that humans will use than code that a machine can compile and execute.
  • We appreciate their strategy of integrating new features and tests first so early adopters can try things before they are documented.
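
As a concrete example of the graph-inspection workflow mentioned in the list above, here is a minimal sketch that writes a tiny graph to disk for TensorBoard to render. It uses the TensorFlow 1.x API (tf.summary.FileWriter); in the earlier releases the article was written against, the writer lived at tf.train.SummaryWriter, and the log directory name here is arbitrary.

    import tensorflow as tf

    # A tiny softmax-regression graph, just to have something to visualize.
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    w = tf.Variable(tf.zeros([784, 10]), name="weights")
    b = tf.Variable(tf.zeros([10]), name="bias")
    y = tf.nn.softmax(tf.matmul(x, w) + b, name="softmax")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Writing the graph is enough for TensorBoard's graph tab:
        #   tensorboard --logdir=./logs
        writer = tf.summary.FileWriter("./logs", sess.graph)
        writer.close()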

A survey of six months of rapid evolution (+ tips/hacks and code to fix the ugly stuff) from Dan Kuster, one of indico’s deep learning researchers.
Continue reading “The Good, Bad, & Ugly of TensorFlow”