Vertex.AI - Announcing PlaidML: Open Source #DeepLearning for Every Platform

  • Our company uses PlaidML at the core of our deep learning vision systems for embedded devices, and to date we’ve focused on support for image processing neural networks like ResNet-50, Xception, and MobileNet.
  • We wrote about this in a previous post comparing PlaidML inference throughput to TensorFlow on cuDNN.
  • After updating to Keras 2.0.8, cuDNN 6, and TensorFlow 1.3, TensorFlow's throughput comes within about 4% of PlaidML's. That's a great improvement, and we continue to use TensorFlow as our benchmark in other areas where PlaidML is less mature.
  • Briefly, the system requirements are minimal, and getting PlaidML installed and running a quick benchmark takes only a few commands. By default, plaidbench benchmarks 1,024 inferences at batch size 1 using Keras on PlaidML and prints a result similar to the following: In…
  • Then run plaidbench with the “no-plaid” option and compare the output. PlaidML can take longer to execute on the first run, but tends to outperform TensorFlow + cuDNN, even on the latest NVIDIA hardware (in this case by about 14%).
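The install-and-benchmark steps summarized above can be sketched as a short shell session. The exact subcommand and flag spellings here are assumptions based on the plaidbench CLI of that era, not commands copied from the original post:

```shell
# Install the PlaidML Keras backend and the benchmark harness
pip install plaidml-keras plaidbench

# One-time interactive setup: choose an OpenCL device (GPU or CPU)
plaidml-setup

# Benchmark Keras inference on PlaidML (1024 inferences, batch size 1 by default)
plaidbench keras mobilenet

# Re-run the same benchmark on the stock TensorFlow backend for comparison
plaidbench --no-plaid keras mobilenet
```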

We’re pleased to announce the next step towards deep learning for every device and platform. Today Vertex.AI is releasing PlaidML, our open source portable deep learning engine. Our mission is to make deep learning accessible to every person on every device, and we’re building PlaidML to help make that a reality. We’re starting by supporting the most popular hardware and software already in the hands of developers, researchers, and students. The initial version of PlaidML runs on most existing PC hardware with OpenCL-capable GPUs from NVIDIA, AMD, or Intel. Additionally, we’re including support for running the widely popular Keras framework on top of PlaidML to allow existing code and tutorials to run unchanged.
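Running existing Keras code unchanged works by swapping the backend before Keras is imported. A minimal sketch, assuming the plaidml-keras package is installed (the backend module path is the one the package documented at the time):

```python
import os

# Select the PlaidML backend; Keras reads KERAS_BACKEND once at import time,
# so this must be set before the first `import keras`.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# From here, `import keras` would transparently use PlaidML instead of
# TensorFlow, so existing models (e.g. keras.applications.MobileNet) and
# tutorials run unchanged on any OpenCL-capable GPU.
print(os.environ["KERAS_BACKEND"])
```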

Oxford University’s lip-reading artificial intelligence (funded by Google DeepMind) is more accurate than humans — Quartz


  • When teaching the AI how to read lips, the Oxford team used a carefully curated set of videos.
  • Teaching AI to read lips is a base skill that can be applied to countless situations.
  • According to OpenAI’s Jack Clark, getting this to work in the real world will take three major improvements: a large amount of video of people speaking in real-world situations, the ability to read lips from multiple angles, and a wider variety of phrases the AI can predict.
  • To train the system, researchers showed the AI nearly 29,000 videos labelled with the correct text, each three seconds long.
  • Oxford University’s lip-reading AI is more accurate than humans, but still has a way to go

Even professional lip-readers can figure out only 20% to 60% of what a person is saying. Slight movements of a person’s lips at the speed of natural speech are immensely difficult to reliably understand, especially from a distance or if the lips are obscured. And lip-reading isn’t just a plot point in NCIS: It’s an essential tool to understand…