Google’s new machine learning framework is going to put more AI on your phone

Google’s new machine learning framework is going to put more AI on your phone

  • At the moment, artificial intelligence lives in the cloud, but Google — and other big tech companies — want it to work directly on your devices, too.
  • At Google I/O today, the search giant announced a new initiative to help its AI make this leap down to earth: a mobile-optimized version of its machine learning framework called TensorFlow Lite.
  • The newly announced TensorFlow Lite will build on the existing TensorFlow framework, helping developers slim down their machine learning models to run on-device.
  • The company also announced that an API for making machine learning work better with phone chips would be coming sometime in the future — a clear sign that Google thinks your next phone will have an AI-optimized chip in it.
  • TensorFlow Lite should help Google (and the wider AI research community) bring even more interesting functions like this to our most-used and most-important devices.

At the moment, artificial intelligence lives in the cloud, but Google — and other big tech companies — want it to work directly on your devices, too. At Google I/O today, the search giant announced a…
Continue reading “Google’s new machine learning framework is going to put more AI on your phone”
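The announcement describes TensorFlow Lite only at a high level. As a rough illustration of the on-device workflow it points toward, here is a minimal Python sketch using the TensorFlow Lite converter API that later shipped with TensorFlow 2.x; the model choice and file names are placeholders rather than anything from the post.

```python
# Minimal sketch (assumption: the TF 2.x TensorFlow Lite converter API):
# shrink a trained Keras model into a .tflite flatbuffer for on-device use.
import tensorflow as tf

# Any trained Keras model would do; MobileNetV2 is just a small example.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Convert the model to the compact TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # size/latency optimizations
tflite_model = converter.convert()

# The resulting file is what a mobile app bundles and runs locally.
with open("mobilenet_v2.tflite", "wb") as f:
    f.write(tflite_model)
```

The exported .tflite file is then executed by the on-device TensorFlow Lite interpreter, which is the "slimming down" step the post alludes to.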

Star Trek IBM’s Watson to Power Bridge Crew VR Interactive Speech Experience

IBM's Watson to Power Bridge Crew #VR Interactive Speech Experience  #ai

  • IBM’s Watson will power in-game voice command for Ubisoft’s upcoming release of Star Trek: Bridge Crew during an experimental Beta period later this summer following the game’s launch on May 30.
  • In-game speech experiences, built with IBM Watson for Star Trek: Bridge Crew, will be available this summer in Beta for cross-platform play.
  • The Watson and Star Trek: Bridge Crew partnership will allow players to give direct, interactive speech commands to virtual Starfleet shipmates.
  • “For the first time, Watson will power the technology that makes it possible for gamers and fans of Star Trek to interact with the crew,” said Willie Tejada, Chief Developer Advocate, IBM.
  • For more information visit Star Trek: Bridge Crew and IBM VR Speech Sandbox.

Star Trek News – IBM’s Watson to power Bridge Crew VR interactive speech experience and make code available to all developers. Details at… 
Continue reading “Star Trek IBM’s Watson to Power Bridge Crew VR Interactive Speech Experience”
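The VR Speech Sandbox mentioned above combines Watson's Speech to Text and Conversation services. As a hedged illustration of the first half of that pipeline (not Ubisoft's actual integration), a recorded voice command could be transcribed with the ibm-watson Python SDK; the API key, service URL, and audio file below are placeholders.

```python
# Illustrative sketch only: transcribing a recorded voice command with
# IBM Watson Speech to Text via the ibm-watson Python SDK. Credentials,
# service URL, and the audio file are placeholders, not from the article.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")  # example region URL

with open("bridge_command.wav", "rb") as audio:
    response = stt.recognize(audio=audio, content_type="audio/wav").get_result()

# Print the top transcript for each recognized utterance.
for result in response["results"]:
    print(result["alternatives"][0]["transcript"])
```

In a full Sandbox-style setup, the transcript would typically then be passed to the Watson Conversation service to map it to an in-game intent, such as a helm or tactical order.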

Try The Everypixel Tool To See What A Computer Thinks Of Your Best Shot

#AI can now predict whether or not humans will think your photo is awesome

  • The Aesthetics tool, still in beta testing, allows users to upload a photo and get an auto-generated list of tags, as well as a percentage rate on the “chance that this image is awesome.”
  • According to developers, the neural network was trained to view an image much in the same way a human photo editor would, looking at factors such as color, sharpness, and subject.
  • As early users report, the system seems fairly good at recognizing factors like whether the image is sharp and whether the composition is interesting, but it is certainly no substitute for a pair of human eyes.
  • While the results of just how “awesome” a photo is may not be accurate for every image, the auto-tagging tool could prove useful, generating a list of keywords from object recognition as well as less concrete terms, like love, happiness, and teamwork.
  • Clicking on a keyword will bring up an Everypixel search for other images with that same tag, or users can copy and paste the list of keywords.

Can a computer judge art? A new neural network program will rank photos by their probability of being awesome.
Continue reading “Try The Everypixel Tool To See What A Computer Thinks Of Your Best Shot”
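Everypixel has not published how its model is built, so the following is only a generic sketch of how an aesthetics scorer of this kind can be assembled in Keras: a pretrained image backbone with a single sigmoid output, trained on editor-labeled "awesome / not awesome" photos. The backbone, sizes, and function names are illustrative assumptions, not the Everypixel system.

```python
# Generic sketch, not Everypixel's model: score a photo's "chance of being
# awesome" with a pretrained backbone plus a single sigmoid output.
import tensorflow as tf

IMG_SIZE = 224

base = tf.keras.applications.MobileNetV2(
    input_shape=(IMG_SIZE, IMG_SIZE, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse generic visual features; fine-tune later if desired

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the photo is "awesome"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) would go here, using photos labeled by human editors.

def awesomeness(path):
    """Return a 0-100 score for a single image file (untrained model shown here)."""
    img = tf.keras.utils.load_img(path, target_size=(IMG_SIZE, IMG_SIZE))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        tf.keras.utils.img_to_array(img)[None, ...])
    return float(model.predict(x)[0, 0]) * 100
```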

Deep Learning AMI for Ubuntu v1.3_Apr2017 Now Supports Caffe2

Deep Learning AMI on Amazon Web Services quickly added Caffe2 along with TensorFlow & others

  • We are excited to announce that the AWS Deep Learning AMI for Ubuntu now supports the newly launched Caffe2 project led by Facebook.
  • The Deep Learning AMI v1.3_Apr2017 for Ubuntu provides a stable, secure, and high-performance execution environment for deep learning applications running on Amazon EC2.
  • The AWS Deep Learning AMI (available for Amazon Linux and Ubuntu) and the AWS Deep Learning CloudFormation Template let you quickly deploy and run any of the major deep learning frameworks at any scale.
  • The AWS Deep Learning AMI is provided and supported by Amazon Web Services, for use on Amazon EC2.
  • There is no additional charge for the AWS Deep Learning AMI – you only pay for the AWS resources needed to store and run your applications.

We are excited to announce that the AWS Deep Learning AMI for Ubuntu now supports the newly launched Caffe2 project led by Facebook. AWS is the best and most open place for developers to run deep learning, and the addition of Caffe2 adds yet another choice. To learn more about Caffe2, check out the Caffe2 developer site or the GitHub repository.
Continue reading “Deep Learning AMI for Ubuntu v1.3_Apr2017 Now Supports Caffe2”
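As a quick sketch of how you might pick up the AMI programmatically, boto3 can look up the latest Deep Learning AMI for Ubuntu and launch an EC2 instance from it. The name filter, region, instance type, and key pair below are illustrative assumptions, not values from the announcement.

```python
# Hedged sketch: find the latest AWS Deep Learning AMI for Ubuntu and
# launch an EC2 instance from it with boto3. Filter values, region,
# instance type, and key pair name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Search Amazon-owned images whose name matches the Deep Learning AMI (Ubuntu).
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["Deep Learning AMI*Ubuntu*"]}],
)["Images"]
latest = max(images, key=lambda img: img["CreationDate"])

# Launch a GPU instance from the AMI.
ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="p2.xlarge",   # example GPU instance type
    KeyName="my-key-pair",      # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
```

As the post notes, the AMI itself carries no extra charge; the run_instances call only incurs normal EC2 costs for the instance it starts.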

Google’s new machine learning API recognizes objects in videos

Google’s new machine learning API recognizes objects in videos

  • At its Cloud Next conference in San Francisco, Google today announced the launch of a new machine learning API for automatically recognizing objects in videos and making them searchable.
  • The new Video Intelligence API will allow developers to build applications that can automatically extract entities from a video.
  • Until now, comparable cloud APIs have focused on recognizing objects in still images; with the help of this new API, developers will be able to build applications that let users search and discover information in videos.
  • Besides extracting metadata, the API allows you to tag scene changes in a video.

Google’s new machine learning API recognizes objects in videos
Continue reading “Google’s new machine learning API recognizes objects in videos”
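For a sense of what the announcement describes, here is a minimal sketch using the google-cloud-videointelligence Python client that Google later published for the API. It requests label detection and shot-change detection for a video stored in Cloud Storage; the bucket path is a placeholder.

```python
# Rough sketch based on the announcement: request label and shot-change
# detection from the Cloud Video Intelligence API. The Cloud Storage path
# below is a placeholder.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder GCS path
        "features": [
            videointelligence.Feature.LABEL_DETECTION,
            videointelligence.Feature.SHOT_CHANGE_DETECTION,
        ],
    }
)
result = operation.result(timeout=600)  # annotation runs asynchronously

annotations = result.annotation_results[0]
# Entities detected across the video, e.g. "dog" or "beach".
for label in annotations.segment_label_annotations:
    print(label.entity.description)
# Scene (shot) boundaries as time offsets into the video.
for shot in annotations.shot_annotations:
    print(shot.start_time_offset, shot.end_time_offset)
```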

Create Realistic Synthetic Faces That Look Older With Deep Learning – News Center

New face aging #AI system can help identify people who have been missing for decades.

  • Developers from Orange Labs in France developed a deep learning system that can quickly make young faces look older, and older faces look younger.
  • Using CUDA, Tesla K40 GPUs, and cuDNN for the deep learning work, they trained their neural network on 5,000 faces from each age group (0-18, 19-29, 30-39, 40-49, 50-59, and 60+ years old), taken from the Internet Movie Database and Wikipedia and labeled with each person’s age; this helped the system learn the characteristic signature of faces in each age group.
  • A second neural network, called the face discriminator, looks at the synthetically aged face to see whether the original identity can still be picked out.
  • If it can’t, the image is rejected; in their paper, they call this process an Age Conditional Generative Adversarial Network.
  • Grigory Antipov of Orange Labs mentioned the technique could be used in applications such as helping identify people who have been missing for many years.

Developers from Orange Labs in France developed a deep learning system that can quickly make young faces look older, and older faces look younger. A number of techniques already exist, but they are expensive and time consuming.
Continue reading “Create Realistic Synthetic Faces That Look Older With Deep Learning – News Center”
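The core conditioning idea is that the generator receives both a random latent vector and the target age group. The Keras sketch below illustrates that idea only; it is not the authors' Age-cGAN architecture, and the layer sizes and image resolution are simplified assumptions.

```python
# Illustrative sketch of the conditioning idea behind age-conditional GANs:
# the generator takes a latent vector concatenated with a one-hot age group
# (six groups, matching the article). Not the authors' exact architecture.
import tensorflow as tf

NUM_AGE_GROUPS = 6   # 0-18, 19-29, 30-39, 40-49, 50-59, 60+
LATENT_DIM = 100

# Generator: noise + one-hot age group in, 64x64 RGB face out.
z_in = tf.keras.Input(shape=(LATENT_DIM,))
age_in = tf.keras.Input(shape=(NUM_AGE_GROUPS,))
x = tf.keras.layers.Concatenate()([z_in, age_in])        # the conditioning step
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dense(64 * 64 * 3, activation="tanh")(x)  # pixels in [-1, 1]
face = tf.keras.layers.Reshape((64, 64, 3))(x)
generator = tf.keras.Model([z_in, age_in], face)

# Generate one face conditioned on the 60+ age group (untrained weights here).
z = tf.random.normal((1, LATENT_DIM))
age = tf.one_hot([5], NUM_AGE_GROUPS)
fake_face = generator([z, age])
```

In the paper's setup, a separate face discriminator then checks whether the original identity is still recognizable in the synthetically aged output, rejecting images where it is not.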