
Announcing PlaidML: Open Source Deep Learning for Every Platform

  • Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet.
  • We wrote about this in a previous post comparing PlaidML inference throughput to TensorFlow on cuDNN.
  • After updating to Keras 2.0.8, cuDNN 6, and TensorFlow 1.3, TensorFlow comes within about 4% of PlaidML’s throughput. That’s a great improvement, and we continue to use TensorFlow as our benchmark for other areas where PlaidML is less mature.
  • Briefly, the post lists the system requirements and the few commands needed to install PlaidML and run a quick benchmark. By default, plaidbench will benchmark 1024 inferences at batch size 1 using Keras on PlaidML and print a throughput result (a rough Python sketch of this measurement follows this list).
  • Then run plaidbench with the “no-plaid” option to get the TensorFlow baseline and compare the two outputs. PlaidML can take longer to execute on the first run, but tends to outperform TensorFlow + cuDNN, even on the latest NVIDIA hardware (in this case by about 14%).
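
For readers who want to see what plaidbench measures without the tool itself, here is a rough, illustrative Python sketch of the same measurement: 1024 single-image inferences through a Keras MobileNet with PlaidML as the backend. The exact plaidbench commands are elided above, so treat this as an approximation rather than the benchmark’s actual code; it assumes the plaidml-keras package is installed and that plaidml.keras.install_backend() selects the PlaidML backend, as described in the PlaidML README.

    import time
    import numpy as np

    import plaidml.keras
    plaidml.keras.install_backend()  # route Keras through PlaidML (per the PlaidML README)

    from keras.applications.mobilenet import MobileNet

    model = MobileNet(weights=None)      # random weights are fine for a throughput test
    image = np.random.rand(1, 224, 224, 3).astype("float32")

    model.predict(image)                 # warm-up; the first run includes kernel compilation
    start = time.time()
    for _ in range(1024):                # plaidbench's default: 1024 inferences at batch size 1
        model.predict(image)
    elapsed = time.time() - start
    print("%.1f inferences/sec" % (1024 / elapsed))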

We’re pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we’re building PlaidML to help make that a reality. We’re starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel. Additionally, we’re including support for running the widely popular Keras framework on top of Plaid to allow existing code and tutorials to run unchanged.
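
As a concrete illustration of the “run unchanged” claim, an existing Keras script only needs PlaidML selected as its backend before Keras is imported. A minimal sketch, assuming the plaidml-keras package is installed (install_backend() is the mechanism described in PlaidML’s README):

    # Select PlaidML as the Keras backend before importing keras.
    # (Equivalently, set the environment variable KERAS_BACKEND=plaidml.keras.backend.)
    import plaidml.keras
    plaidml.keras.install_backend()

    # From here on, ordinary Keras code runs as-is on an OpenCL-capable GPU.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(64, activation="relu", input_shape=(32,)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

    x = np.random.rand(256, 32).astype("float32")
    y = np.eye(10)[np.random.randint(0, 10, 256)].astype("float32")
    model.fit(x, y, epochs=1, batch_size=32)

The only PlaidML-specific lines are the first two; the rest is the same code a TensorFlow-backed Keras user would write.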

Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet. By sharing this technology we see potential to greatly improve the accessibility of deep learning. This release is just one early step. Currently PlaidML supports Keras, OpenCL, and Linux. In the future, we’ll be adding support for macOS and Windows. We’ll also be adding compatibility with frameworks such as TensorFlow, PyTorch, and Deeplearning4j. For vision workloads we’ve shown results on desktop hardware competitive with hand-tuned but vendor-locked engines like cuDNN; we will continue that work, but we’ll also add broader task support, such as recurrent nets for video, speech, and text processing.

Throughput…
