China announces goal of leadership in artificial intelligence by 2030

  • BEIJING — China’s government has announced a goal of becoming a global leader in artificial intelligence in just over a decade, putting political muscle behind growing investment by Chinese companies in developing self-driving cars and other advances.
  • Artificial intelligence is one of the emerging fields, along with renewable energy, robotics and electric cars, where communist leaders hope to take an early lead and help transform China from a nation of factory workers and farmers into a technology pioneer.
  • Already, Chinese companies including Tencent Holdings Ltd., Baidu Inc. and Alibaba Group are spending heavily to develop artificial intelligence for consumer finance, e-commerce, self-driving cars and other applications.
  • The announcement follows a sweeping plan issued in 2015, dubbed “Made in China 2025,” that calls for China to supply its own high-tech components and materials in 10 industries, from information technology and aerospace to pharmaceuticals.
  • China has had mixed success with previous strategic plans to develop technology industries including renewable energy and electric cars.

AI is one of the emerging fields — along with renewable energy, robotics and electric cars — where communist leaders hope to take an early lead
Continue reading “China announces goal of leadership in artificial intelligence by 2030”

GitHub

A #Java Toolbox for Scalable Probabilistic #MachineLearning

  • The AMIDST Toolbox allows you to model your problem using a flexible probabilistic language based on graphical models.
  • The AMIDST Toolbox has been used to track concept drift and to do risk prediction in credit operations; because the data is collected continuously and reported on a daily basis, this gives rise to a streaming data classification problem (sketched below).
  • As an example, a figure in the post shows how the data processing capacity of the toolbox scales with the number of CPU cores when learning a probabilistic model (with a class variable C, two latent variables (dashed nodes), and multinomial (blue nodes) and Gaussian (green nodes) observable variables) using the AMIDST learning engine.
  • Using the variational learning engine, the AMIDST Toolbox can process data on the order of gigabytes (GB) per hour, depending on the number of available CPU cores, even for large and complex PGMs with latent variables.
  • If your data is really big and cannot be stored on a single laptop, you can still learn your probabilistic model on it by using the AMIDST distributed learning engine, which is based on a novel, state-of-the-art distributed message-passing scheme implemented on top of Apache Flink.
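AMIDST itself is a Java toolbox, so the following is only a language-agnostic sketch of the streaming idea described above: a classifier that learns from mini-batches by accumulating sufficient statistics and never revisits old data. The StreamingGaussianNB class below is hypothetical and does not reflect AMIDST’s actual API.

```python
import numpy as np

# Hypothetical illustration, not AMIDST's API: a streaming Gaussian naive
# Bayes classifier that keeps only per-class sufficient statistics, so
# memory stays constant no matter how long the data stream runs.
class StreamingGaussianNB:
    def __init__(self, n_classes, n_features):
        self.count = np.zeros(n_classes)
        self.feat_sum = np.zeros((n_classes, n_features))
        self.feat_sq = np.zeros((n_classes, n_features))

    def partial_fit(self, X, y):
        # One pass over a mini-batch (e.g., one day's worth of records).
        for c in range(len(self.count)):
            Xc = X[y == c]
            self.count[c] += len(Xc)
            self.feat_sum[c] += Xc.sum(axis=0)
            self.feat_sq[c] += (Xc ** 2).sum(axis=0)

    def predict(self, X):
        # Assumes every class has been observed at least once.
        prior = np.log(self.count / self.count.sum())
        mean = self.feat_sum / self.count[:, None]
        var = self.feat_sq / self.count[:, None] - mean ** 2 + 1e-9
        log_lik = -0.5 * (np.log(2 * np.pi * var)
                          + (X[:, None, :] - mean) ** 2 / var).sum(axis=2)
        return np.argmax(log_lik + prior, axis=1)
```

Feeding each day’s batch through partial_fit keeps memory use constant regardless of stream length; that same property is what lets a variational engine like AMIDST’s push through gigabytes per hour, and a distributed engine spread the statistics across Flink workers.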

toolbox – A Java Toolbox for Scalable Probabilistic Machine Learning
Continue reading “GitHub”

Train Neural Machine Translation Models with Sockeye

New on the AWS #AI Blog: Train Neural Machine Translation Models with Sockeye.

  • Sockeye, which is built on Apache MXNet, does most of the heavy lifting for building, training, and running state-of-the-art sequence-to-sequence models.
  • Sockeye provides both a state-of-the-art implementation of neural machine translation (NMT) models and a platform to conduct NMT research.
  • You can easily change the basic model architecture, and Sockeye also supports a range of more advanced features.

    For training, Sockeye gives you full control over important optimization parameters.

  • If you have a GPU available, install Sockeye for CUDA 8.0; to install it for CUDA 7.5 instead, use the matching MXNet build (example commands are sketched after this list).

    Now you’re all set to train your first German-to-English NMT model.

  • The post also shows how to use Sockeye, a sequence-to-sequence framework based on MXNet, to train and run a minimal NMT model.
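The install commands referenced in the bullets are reconstructed below from Sockeye’s pip packaging; the CUDA-specific MXNet package names are assumptions, so check the Sockeye README before copying them. The training invocation uses Sockeye’s documented command-line flags with placeholder file names.

```bash
# Assumed install sketch: Sockeye itself plus a CUDA-specific MXNet wheel.
pip install sockeye --no-deps
pip install mxnet-cu80          # CUDA 8.0
# pip install mxnet-cu75        # CUDA 7.5 variant

# Minimal German-to-English training run; train.de, train.en, dev.de and
# dev.en are placeholder parallel corpora, one sentence per line.
python -m sockeye.train --source train.de --target train.en \
                        --validation-source dev.de --validation-target dev.en \
                        --output nmt_model
```

After training, python -m sockeye.translate --models nmt_model reads sentences from standard input and writes translations to standard output.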

Have you ever wondered how you can use machine learning (ML) for translation? With our new framework, Sockeye, you can model machine translation (MT) and other sequence-to-sequence tasks. Sockeye, which is built on Apache MXNet, does most of the heavy lifting for building, training, and running state-of-the-art sequence-to-sequence models.
Continue reading “Train Neural Machine Translation Models with Sockeye”

Intel Democratizes Deep Learning Application Development with Launch of Movidius Neural Compute Stick

Introducing the world’s first USB-based #deeplearning inference kit:  #Intel

  • Today, Intel launched the Movidius™ Neural Compute Stick, the world’s first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge.
  • Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.
  • More: Movidius Press Kit | Movidius Neural Compute Stick Product Brief | Intel at CVPR Fact Sheet

    As more developers adopt advanced machine learning approaches to build innovative applications and solutions, Intel is committed to providing the most comprehensive set of development tools and resources to ensure developers are retooling for an AI-centric digital economy.

  • Whether it is training artificial neural networks on the Intel® Nervana™ cloud, optimizing for emerging workloads such as artificial intelligence, virtual and augmented reality, and automated driving with Intel® Xeon® Scalable processors, or taking AI to the edge with Movidius vision processing unit (VPU) technology, Intel offers a comprehensive AI portfolio of tools, training and deployment options for the next generation of AI-powered products and services.
  • “The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device,” said Remi El-Ouazzane, vice president and general manager of Movidius, an Intel company.

Today, Intel launched the Movidius™ Neural Compute Stick, the world’s first USB-based deep learning inference kit and self-contained artificial intelligence (AI) accelerator that delivers dedicated deep neural network processing capabilities to a wide range of host devices at the edge. Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor.
Continue reading “Intel Democratizes Deep Learning Application Development with Launch of Movidius Neural Compute Stick”

Movidius launches a $79 deep-learning USB stick

Bringing AI to hardware is as easy as plugging in Movidius's new $79 USB device

  • Movidius and Intel have put deep-learning on a stick with a tiny $79 USB device that makes bringing AI to hardware a snap.
  • In April of last year, Movidius showed off the first iteration of this device, which they then called the Fathom Neural Compute Stick.
  • The Movidius Neural Compute Stick tosses one of these VPUs into a USB 3.0 stick, letting product developers and researchers prototype, validate and deploy inference applications offline, with attendant latency and power-consumption improvements.
  • It supports

    When connected to constrained host computing devices like the Raspberry Pi, the compute stick offers plug-and-play intelligence (a minimal inference sketch follows this list).

  • Getting acquired has offered Movidius some more flexibility to add features like the ability to plug in several of these sticks to add more deep-learning power.
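As a concrete picture of that plug-in workflow, here is a minimal inference sketch against the NCSDK v1 Python API (the mvnc module). It assumes the SDK is installed and that a trained network has already been compiled into a binary graph file with the SDK’s mvNCCompile tool; treat the exact calls as approximate rather than authoritative.

```python
import numpy as np
from mvnc import mvncapi as mvnc

# Find an attached Neural Compute Stick and open it.
devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a network that was pre-compiled to a binary blob ('graph')
# with the SDK's mvNCCompile tool.
with open('graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# Stand-in input: a random 224x224 RGB image in half precision.
image = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(image, 'user object')   # queue the input on the VPU
output, _ = graph.GetResult()            # blocks until inference finishes
print(int(output.argmax()))              # index of the top-scoring class

graph.DeallocateGraph()
device.CloseDevice()
```

Because the neural-network arithmetic runs on the stick’s Myriad 2 VPU, a constrained host such as a Raspberry Pi only has to shuffle tensors over USB.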

https://youtu.be/VioTPaYcF98 Movidius and Intel have put deep-learning on a stick with a tiny $79 USB device that makes bringing AI to hardware a snap. In..
Continue reading “Movidius launches a $79 deep-learning USB stick”

Intel Movidius Neural Compute Stick brings AI brains to USB port

Intel lets you stick an #AI brain into your USB port

  • Intel’s $80 Movidius Neural Compute Stick lets you plug some computing brains into your laptop’s USB port.
  • The device, geared for tinkerers and programmers, can crank out 100 billion mathematical calculations per second while consuming a paltry 1 watt of power.
  • That’s the kind of thing that can be handy if you’re trying to work out computer vision in your drone or help your cleaning robot tell the difference between a cat and a coffee table.
  • Intel announced the device at the Conference on Computer Vision and Pattern Recognition (CVPR) on Thursday.
  • Artificial intelligence — and more specifically a brain-like approach to machine learning called neural networks — is sweeping the industry as a new way to do everything from recognizing speech to identifying what ingredients are in your lunch.

The $80 Movidius Neural Compute Stick is tuned for tinkerers and engineers who want to give neural network technology a whirl.
Continue reading “Intel Movidius Neural Compute Stick brings AI brains to USB port”

Tony Paikeday’s answer to How can I build my own artificial intelligence system?

Can I build my own artificial intelligence system? Tony Paikeday of @nvidia discusses

  • In this setting, it may be perfectly fine to follow a meandering path as you piece together a system including GPUs, drivers, libraries, and deep learning frameworks that interest you, sifting through potentially hundreds of pages of documentation as you take on the role of “system integrator”.
  • NVIDIA DGX Systems see a 30% increase in deep learning performance compared with other systems built using the same Tesla V100 GPUs but lacking integrated, optimized deep learning software.
  • The important takeaway here is that, even if you build an A.I. system on your own, using the absolute latest GPU technology, that system would still be at a performance disadvantage relative to an integrated hardware and software system that’s fully-optimized and software-engineered for maximum performance of each deep learning framework.
  • Alternatively, A.I. appliances like NVIDIA’s DGX, which include access to popular deep learning frameworks like TensorFlow, Caffe2, MXNet and more, as well as supporting libraries, all integrated with the hardware, can save considerable time and money.
  • Additionally, with the experimental nature of data science and A.I., developers often find themselves (or their teams) needing to simultaneously experiment with different combinations of system resources and software configurations, in order to determine which model can derive insights fastest.

Like a lot of things, the answer is “it depends”. If we take deep learning as an example of an increasingly popular A.I. workload, building an AI system for deep learning training on your datasets is largely a function of the resources, expertise and amount of infrastructure you have readily accessible. For example, the system you might employ as an independent developer, or as a researcher in a smaller setting, would look considerably different from what you would need to support a larger organization’s efforts to “A.I.-enable” its business interactions with customers, improve the quality of clinical care, or detect fraud in a voluminous flow of financial transaction data. Ultimately this becomes a question of whether you design and build your own system or employ a purpose-built solution for your problem.
Continue reading “Tony Paikeday’s answer to How can I build my own artificial intelligence system?”