Watch Artificial Intelligence Lose Its Mind While Watching Bob Ross

Watch Artificial Intelligence Lose Its Mind While Watching Bob Ross | @IFLScience  #AI

  • The video feeds an episode of the stoner-favorite television show The Joy of Painting with Bob Ross through Google’s DeepDream neural network.
  • DeepDream is built on a convolutional neural network, a brain-inspired style of computing that recognizes images and patterns.
  • As the video’s creator, Reben, explains in the description: “This artwork represents what it would be like for an AI to watch Bob Ross on LSD (once someone invents digital drugs).
  • The unique characteristics of the human voice are learned and generated, as well as hallucinations of a system trying to find images which are not there.”
  • Google made the code for DeepDream open-source, meaning there are plenty of videos, images, and apps that utilize it.

Ever wondered what it would be like for artificial intelligence to trip-out while watching Bob Ross paint a pretty picture?
Continue reading “Watch Artificial Intelligence Lose Its Mind While Watching Bob Ross”
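DeepDream’s core trick is gradient ascent on a layer’s activations: the input is nudged so that whatever the layer already responds to gets amplified. A toy numpy sketch of that loop, with a single random matrix standing in for a network layer (the real system ascends activations of a deep convolutional network such as Inception; all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))        # stand-in "layer" weights (illustrative)
x = rng.standard_normal(64) * 0.01       # the "image", as a flat vector

def activation_energy(v):
    a = W @ v                            # layer activations
    return float(np.sum(a ** 2))         # the quantity DeepDream amplifies

before = activation_energy(x)
for _ in range(100):
    grad = 2.0 * W.T @ (W @ x)           # gradient of sum((Wx)^2) w.r.t. x
    x += 0.01 * grad / (np.linalg.norm(grad) + 1e-8)  # normalized ascent step
after = activation_energy(x)
# the energy grows as the input drifts toward patterns the layer responds to,
# which is what produces DeepDream's hallucinated imagery
```

On a real network the same ascent is run per pixel, often at multiple image scales, which is where the dog faces and eyes come from.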

Accelerating open machine learning research with Cloud TPUs

  • Our goal is to ensure that the most promising researchers in the world have access to enough compute power to imagine, implement, and publish the next wave of ML breakthroughs.
  • We’re setting up a program to accept applications for access to the TensorFlow Research Cloud and will evaluate applications on a rolling basis.
  • The program will be highly selective since demand for ML compute is overwhelming, but we specifically encourage individuals with a wide range of backgrounds, affiliations, and interests to apply.
  • The program will start small and scale up.

Researchers need enormous computational resources to train the machine learning models that have delivered
recent advances in medical imaging, speech recognition, game playing, and many other domains. The TensorFlow
Research Cloud is a cluster of 1,000 Cloud TPUs that provides the machine learning research community with
a total of 180 petaflops of raw compute power — at no charge — to support the next wave of breakthroughs.
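The headline figure is consistent with the per-device spec: Google announced each Cloud TPU at up to 180 teraflops, so a 1,000-device cluster totals 180,000 teraflops, or 180 petaflops:

```python
# Consistency check on the article's numbers; the per-device figure is from
# Google's Cloud TPU announcement.
tpus = 1_000
teraflops_per_tpu = 180
total_petaflops = tpus * teraflops_per_tpu / 1_000  # 1 petaflop = 1,000 teraflops
```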
Continue reading “Accelerating open machine learning research with Cloud TPUs”

Which deep learning network is best for you?

#DeepLearning: Which deep learning network is best for you? #BigData

  • Caffe is a popular deep learning network for vision recognition.
  • Caffe 2 continues the strong support for vision-type problems but adds recurrent neural networks (RNN) and long short-term memory (LSTM) networks for natural language processing, handwriting recognition, and time series forecasting.
  • MXNet supports deep learning architectures such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), including Long Short-Term Memory (LSTM) networks.
  • However, with its most recent announcement, Facebook is changing course and making Caffe 2 its primary deep learning framework so it can deploy deep learning on mobile devices.
  • DL4J has a rich set of deep network architecture support: RBM, DBN, Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), RNTN, and Long Short-Term Memory (LSTM) networks.
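The LSTM networks these frameworks all support implement the same cell update underneath. A minimal one-step version in plain numpy, with illustrative dimensions and a single fused weight matrix (a sketch, not any framework’s actual code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM time step: x is the input, h the hidden state, c the cell state."""
    z = W @ np.concatenate([x, h]) + b               # fused gate pre-activations
    i, f, o, g = np.split(z, 4)                      # input, forget, output, candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g) # forget old memory, write new
    h_new = sigmoid(o) * np.tanh(c_new)              # expose a gated view of the cell
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 8, 4                                   # illustrative sizes
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(5):                                   # run over a short random sequence
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, b)
```

The gated cell state `c` is what lets LSTMs carry information across long sequences, which is why they show up in the NLP, handwriting, and time-series use cases the bullets mention.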

Open source deep learning neural networks are coming of age. Several open source frameworks now provide advanced machine learning and artificial intelligence (A.I.) capabilities that rival proprietary solutions. How do you determine which open source framework is best for you?
Continue reading “Which deep learning network is best for you?”

GitHub

End-to-end automatic speech recognition from scratch in #Tensorflow  #NeuralNetworks

  • This is a powerful library for automatic speech recognition; it is implemented in TensorFlow and supports training on CPU or GPU.
  • The original TIMIT database contains 6,300 utterances, but the ‘SA’ audio files occur many times, which would introduce a harmful bias into a speech recognition system.
  • Therefore, all ‘SA’ files were removed from the original dataset, yielding a reduced TIMIT dataset of 5,040 utterances: a standard training set of 3,696 and a test set of 1,344.
  • Automatic speech recognition transcribes a raw audio file into character sequences; the preprocessing stage converts the raw audio into feature vectors over several frames.
  • In other words, each audio file is split into frames using the Hamming window function, and each frame is converted into a feature vector of length 39 (to obtain feature vectors of a different length, modify the settings in the file timit_preprocess).
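The framing step described above can be sketched in numpy. The 25 ms window and 10 ms hop at 16 kHz are typical defaults, not necessarily the settings in timit_preprocess:

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping Hamming-windowed frames."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return frames                                    # shape (n_frames, frame_len)

audio = np.random.default_rng(2).standard_normal(16_000)  # 1 s of fake 16 kHz audio
frames = frame_signal(audio)
# each row would then be reduced to a 39-dimensional feature vector
# (commonly 13 MFCCs plus their deltas and delta-deltas)
```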

Automatic_Speech_Recognition – End-to-end automatic speech recognition from scratch in TensorFlow
Continue reading “GitHub”

Why Future Emphasis Should be on Algorithms

Why Future Emphasis Should be on #Algorithms – Not #Code 

 #fintech #AI @TrendinTech

  • People figured that if they could codify instructions telling a machine what steps to take, any manual operation could be eliminated, saving businesses time and money.
  • Algorithms, on the other hand, are a series of steps describing a way of solving a problem that meets two criteria: the solution is correct, and the procedure terminates.
  • Instead of writing code to search our data for a given pattern, as traditional coding does, with big data we look for the pattern that matches the data.
  • Now another step has been added to the equation, one that finds patterns humans don’t see, such as light at a certain wavelength or data above a certain volume.
  • So this new algorithmic step not only searches for patterns successfully but will also create the code needed to do it.
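A toy example of that shift: rather than hand-coding the rule y = 3x + 2, let least squares recover the pattern from the data (the data and the linear model here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 50)
y = 3 * x + 2 + rng.normal(0, 0.1, 50)   # data generated by a hidden rule

# least squares finds the pattern that best matches the data
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
# slope comes out near 3 and intercept near 2: the rule was learned, not written
```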

We are all now in what’s called the “big data era,” and we’ve been here for quite some time. Once upon a time we were only just starting to piece together
Continue reading “Why Future Emphasis Should be on Algorithms”

Baidu launches SwiftScribe, an app that transcribes audio with AI

Baidu launches SwiftScribe, an app that transcribes audio with #AI

  • Baidu, the Chinese company operating a search engine, a mobile browser, and other web services, is announcing today the launch of SwiftScribe, a web app that’s meant to help people transcribe audio recordings more quickly, using — you guessed it! — artificial intelligence (AI).
  • SwiftScribe can handle up to an hour of audio in any given file, but that will take 20 minutes to process, Baidu project manager Tian Wu told VentureBeat in an interview.
  • Wu’s team believes SwiftScribe can help people transcribe audio 1.67 times faster — in 40 percent less time — than they would on their own.
  • While the product is certainly designed for transcriptionists — who are used to working on computers as opposed to mobile devices, hence the fact that SwiftScribe is only available as a web app — SwiftScribe could also come in handy for other people, like journalists and historians.
  • Today, Baidu is providing SwiftScribe as a free service — unlike Nuance’s Dragon software.
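The two speed claims are consistent with each other: working 1.67 times faster means each job takes 1/1.67 ≈ 60% of the original time, i.e. about 40% less:

```python
speedup = 1.67
time_saved = 1 - 1 / speedup   # fraction of transcription time eliminated
# roughly 0.40, matching the "40 percent less time" claim
```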

Baidu, the Chinese company operating a search engine, a mobile browser, and other web services, is announcing today the launch of SwiftScribe, a web app that’s meant to help people transcribe audio recordings more quickly, using — you guessed it! — artificial intelligence (AI).
Continue reading “Baidu launches SwiftScribe, an app that transcribes audio with AI”

Conversations on AI

Exploring the next frontier in computing, see our vision for the future of #AI:

  • Microsoft Translator is making the language barrier a thing of the past.
  • Last month, Microsoft became the first in the industry to reach parity with humans in speech recognition.
  • “Across several industry benchmarks, our computer vision algorithms have surpassed others in the industry – even humans,” said Harry Shum, executive vice president of Microsoft’s Artificial Intelligence (AI) and Research group, at a small gathering on AI in San Francisco on Dec. 13.
  • Using this new intelligent language and speech recognition capability, Microsoft Translator can now simultaneously translate between groups speaking multiple languages in-person, in real-time, connecting people and overcoming barriers.
  • There’s also been groundbreaking work with Skype Translator – now available in 9 languages – an example of accelerating the pipeline from research to product.

Microsoft has been investing in the promise of artificial intelligence for more than 25 years — and this vision is coming to life with new chatbot Zo, Cortana Devices SDK and Skills Kit, and expansion of intelligence tools.
Continue reading “Conversations on AI”