Five years ago, artificial intelligence was struggling to identify cats. Now it’s trying to tackle 5000 species — Quartz

Five years ago, #AI was struggling to identify cats. Now it’s trying to tackle 5000 species

  • Google’s neural network, software which uses statistics to approximate how the brain learns, taught itself to detect the shapes of cats and humans with more than 70% accuracy.
  • “Over the last five years it’s been pretty incredible, the progress of deep [neural] nets,” says Grant Van Horn, lead competition organizer and graduate student at California Institute of Technology.
  • Van Horn says this latest Google competition differs from ImageNet, which forces algorithms to identify a wide variety of objects like cars and houses and boats, because iNat requires AI to examine the “nitty-gritty details” that separate one species from another.
  • On a scale from general image recognition (ImageNet) to specific (facial recognition, where most faces generally look the same and only slight variations matter), iNat lies somewhere in the middle, Van Horn says.
  • Van Horn, who has specialized in building AI that distinguishes differences between birds, said that the iNat competition illustrates how AI is beginning to help people learn about the world around them, rather than just help them organize their photos, for instance.

In 2012, Google made a breakthrough: It trained its AI to recognize cats in YouTube videos. Google’s neural network, software which uses statistics to approximate how the brain learns, taught itself to detect the shapes of cats and humans with more than 70% accuracy. That was a 70% improvement over any other machine-learning system at the time. Five years later,…
Continue reading “Five years ago, artificial intelligence was struggling to identify cats. Now it’s trying to tackle 5000 species — Quartz”

Does AI actually exist yet?

Does #AI actually exist yet? 

 #fintech @GaynorRobb @AmerBanker

  • At one end of the spectrum, holding a purist’s view of AI, sit those individuals who think that until we have Data from “Star Trek”, a synthetic being, we don’t have AI.
  • In digital banking, chatbots facilitating customer service, machine learning to make the loan underwriting process more efficient and other innovations are improving the industry’s efficiency.
  • This definition of AI has parallels with the Turing Test, named for Alan Turing, which measures a machine’s ability to act like a human.
  • Innovations such as Alexa and the Facebook Messenger Service Bot are just examples of transformative technology.
  • But as machine learning advances in the financial services industry, it is hard to determine what is real, and what is hype and theory.

The hype surrounding voice technology, bots and machine learning suggests that artificial intelligence is increasingly common in financial services. But that is not the purist’s view of what AI represents.
Continue reading “Does AI actually exist yet?”

A Visual Introduction to Machine Learning

A Visual Introduction to #MachineLearning #abdsc

  • Using a data set about homes, we will create a machine learning model to distinguish homes in New York from homes in San Francisco.
  • Let’s say you had to determine whether a home is in San Francisco or in New York.
  • In machine learning terms, categorizing data points is a classification task. Since San Francisco is relatively hilly, the elevation of a home may be a good way to distinguish the two cities.
  • Based on the home-elevation data to the right, you could argue that a home above 240 ft should be classified as one in San Francisco.
  • The data suggests that, among homes at or below 240 ft, those that cost more than $1776 per square foot are in New York City.
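The two thresholds in the bullets above amount to a simple two-rule decision stump. A minimal Python sketch of that rule (the function name and sample inputs are illustrative, not from the article):

```python
def classify_home(elevation_ft, price_per_sqft):
    """Classify a home as San Francisco or New York using the
    article's two thresholds: elevation, then price per sq ft."""
    if elevation_ft > 240:
        # Homes above 240 ft are classified as San Francisco.
        return "San Francisco"
    if price_per_sqft > 1776:
        # At or below 240 ft, pricier homes are New York.
        return "New York"
    return "San Francisco"

print(classify_home(300, 1000))   # high elevation → San Francisco
print(classify_home(100, 2000))   # low, expensive → New York
```

A decision tree learned from the full data set would stack many such threshold tests; this sketch shows only the two splits the article names.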

This article was written by Stephanie and Tony on R2D3. 
In machine learning, computers apply statistical learning techniques to automatically identify pattern…
Continue reading “A Visual Introduction to Machine Learning”

Nearly 1 in 4 fear robots taking over the world

Nearly 1 in 4 fear robots taking over the world  #robotics #AI MT @HopeFrank

  • The Pegasystems survey of 6,000 customers across six countries found that more than two-thirds (68%) of Brits express some sort of fear about AI, with almost one quarter (23%) worried about robots taking over the world.
  • Further findings revealed the potential impact of these deep-rooted fears on businesses, with less than one in three (28%) of British consumers comfortable with businesses using AI to engage with them.
  • Robots and AI were also found to confuse consumers, with the survey exposing a basic misunderstanding of AI.
  • Less than a quarter (23%) of UK consumers who report no AI experience feel at ease with businesses using AI to engage with them.
  • But for UK AI consumer veterans, this number jumps to 56% – a full 33 points higher.

A Pegasystems survey has revealed the extent of consumer AI fears, with almost one quarter (23%) of Brits worried about robots taking over the world.
Continue reading “Nearly 1 in 4 fear robots taking over the world”

Google Brain’s new super fast and highly accurate AI: the Mixture of Experts Layer.

Google Brain’s new super fast and highly accurate #AI: the Mixture of Experts Layer

  • Google Brain’s new super fast and highly accurate AI: the Mixture of Experts Layer. Conditional training on unreasonably large networks. One of the big problems in artificial intelligence is the gigantic number of GPUs (or computers) needed to train large networks. The training time of neural networks grows quadratically (think squared) as a function of their size.
  • Therefore, we have to build giant neural networks to process the ton of data that corporations like Google and Microsoft have. Well, that was the case until Google released their paper on the Mixture of Experts Layer. The rough concept is to keep multiple experts inside the network.
  • Each expert is itself a neural network.
  • This does look similar to the PathNet paper; however, in this case we only have one layer of modules. You can think of the experts as multiple humans specialized in different tasks. In front of those experts stands the gating network, which chooses which experts to consult for a given input (named x in the figure).
  • The gating network also decides on output weights for each expert; the output of the MoE is the weighted sum of the chosen experts’ outputs. It works surprisingly well. Take, for example, machine translation from English to French: the MoE shows higher accuracy (lower perplexity) than the state of the art, using only 16% of the training time. This technique lowers training time while achieving better-than-state-of-the-art accuracy.
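The gating-plus-experts idea in the bullets above can be sketched in a few lines of NumPy. This is a toy forward pass only: the dimensions, the linear "experts", and the dense top-k selection are illustrative assumptions, not the paper's actual sparse implementation or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy setup: each expert is a small linear map; the gating
# network is another linear map producing one score per expert.
n_experts, d_in, d_out = 4, 8, 3
experts = [rng.standard_normal((d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts))

def moe_forward(x, top_k=2):
    # Gating network scores every expert for this input x.
    gate_scores = softmax(x @ gate_w)
    # Conditional computation: consult only the top-k experts.
    chosen = np.argsort(gate_scores)[-top_k:]
    weights = gate_scores[chosen] / gate_scores[chosen].sum()
    # Output is the gate-weighted sum of the chosen experts' outputs.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.standard_normal(d_in))
```

Because only `top_k` of the `n_experts` sub-networks run per input, total parameters can grow without a proportional growth in per-example compute, which is the source of the training-time savings the article describes.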

One of the big problems in Artificial Intelligence is the gigantic amount of GPUs (or computers) needed to train large networks. The training time of neural networks grows quadratically (think…
Continue reading “Google Brain’s new super fast and highly accurate AI: the Mixture of Experts Layer.”


Blockchains are a data buffet for AIs – Fred Ehrsam – Medium

  • And while many of the tech giants working on AI, like Google and Facebook, have open sourced some of their algorithms, they hold back most of their data. In contrast, blockchains represent and even incent open data.
  • For example: creating a decentralized Uber requires a relatively open dataset of riders and drivers available to coordinate the network. The network effects and economic incentives around these open systems and their data can be more powerful than current centralized companies, because they are open standards that anyone can build on, in the same way the protocols of the internet like TCP/IP, HTML, and SMTP have achieved far greater scale than any company that sits atop them.
  • And oracle systems (a fancy way of saying getting people all over the world to report real-world information to the blockchain in a way we can trust) like Augur will inject more data. This open data has the potential to commoditize the data silos most tech companies like Google, Facebook, Uber, LinkedIn, and Amazon are built on and extract rent from.
  • AIs trained on open data are more likely to be neutral and trustworthy instead of biased by the interests of the corporation who created and trained them. Since blockchains allow us to explicitly program incentive structures, they may make the incentives of AI more transparent. Simplified, AI is driven by three things: tools, compute power, and training data.
  • My guess is they shift to 1) creating blockchain protocols and their native tokens and 2) AIs that leverage the open, global data layer of the blockchain.

Sam Altman recently wrote that we are entering an era of hyperscale technology companies. These companies own massive troves of data with strong network effects around them and they are only getting…
Continue reading “Blockchains are a data buffet for AIs – Fred Ehrsam – Medium”