DeepMind developed an artificial intelligence algorithm to tackle “catastrophic forgetting” — Quartz

Artificial intelligence has a multitasking problem, and DeepMind might have a solution

  • Alphabet’s AI research arm, DeepMind, is trying to change that idea with a new algorithm that can learn more than one skill.
  • Having algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even have algorithms use existing knowledge to solve new complex problems.
  • The research published in Proceedings of the National Academy of Sciences this week is preliminary, as it only tests the algorithm on playing different Atari games, but this research shows multi-purpose algorithms are actually possible.
  • If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow-knowledge.
  • DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less-important ones be overwritten.


Right now it’s easiest to think about an artificial intelligence algorithm as a specific tool, like a hammer. A hammer is really good at hitting things, but when you need a saw to cut something in half, it’s back to the toolbox. Need a face recognized? Train a facial-recognition algorithm, but don’t ask it to recognize cows.

Alphabet’s AI research arm, DeepMind, is trying to change that idea with a new algorithm that can learn more than one skill. Having algorithms that can learn multiple skills could make it far easier to add new languages to translators, remove bias from image recognition systems, or even have algorithms use existing knowledge to solve new, complex problems. The research, published in Proceedings of the National Academy of Sciences this week, is preliminary, as it only tests the algorithm on playing different Atari games, but it shows that multi-purpose algorithms are actually possible.

The problem DeepMind’s research tackles is called “catastrophic forgetting,” the company writes. If you train an algorithm to recognize faces and then try to train it again to recognize cows, it will forget faces to make room for all the cow-knowledge. Modern artificial neural networks use millions of mathematical equations to calculate patterns in data, which could be the pixels that make a face or the series of words that make a sentence. These equations are connected in various ways, and the network depends so heavily on some of them that it begins to fail when they are even slightly tweaked for a different task. DeepMind’s new algorithm identifies and protects the equations most important for carrying out the original task, while letting the less-important ones be overwritten.
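The mechanism behind “protecting the important equations” is, in the paper, a quadratic penalty that anchors each weight to its old value in proportion to an estimate of that weight’s importance to the old task (the technique is known as elastic weight consolidation). Here is a minimal, illustrative sketch of that penalty; the function names, the toy weight values, and the importance numbers are all made up for demonstration, not taken from DeepMind’s code:

```python
import numpy as np

def ewc_penalty(params, old_params, importance, lam=1000.0):
    """Loss term added while training on the new task: weights that
    mattered for the old task (high importance) are pulled strongly
    back toward their old values; unimportant ones move freely."""
    return 0.5 * lam * np.sum(importance * (params - old_params) ** 2)

def ewc_grad(params, old_params, importance, lam=1000.0):
    """Gradient of the penalty with respect to the current weights."""
    return lam * importance * (params - old_params)

# Toy example: weight 0 is important for the old task, weight 1 is not.
old_weights = np.array([1.0, 1.0])
importance = np.array([10.0, 0.01])   # per-weight importance estimates
new_weights = np.array([0.5, 0.5])    # both drifted during new-task training

print(ewc_penalty(new_weights, old_weights, importance))  # 1251.25
print(ewc_grad(new_weights, old_weights, importance))     # [-5000.   -5.]
```

Both weights moved the same distance, but the penalty’s gradient pushes the important one back roughly a thousand times harder, which is the whole trick: the network can keep learning the new task with the weights the old task never really needed.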

The DeepMind paper borrows this idea from research on the mammalian brain, but hasn’t quite mimicked the brain’s results. The authors concede that when testing on Atari games, one neural network that learns a variety of games doesn’t perform as well as neural networks specifically trained on each game. Further work is needed on deciding which information is important and which isn’t, but DeepMind considers this a large first step in tackling the larger problem.
