IBM is teaching AI to behave more like the human brain

Can a machine make memories?

  • While neurons use their various connections with each other to recognize patterns, “We are explicitly forcing the network to discover the relationships that exist” between pairs of objects in a given scenario, Timothy Lillicrap, a computer scientist at DeepMind, told Science Magazine. When subsequently tasked in June with answering complex questions…
  • At the 2017 International Joint Conference on Artificial Intelligence, held in Melbourne, Australia last week, IBM presented a pair of studies: one looking into how to grant AI an “attention span”, the other examining how to apply the biological process of neurogenesis — that is,…
  • It’s the same way that your doctor doesn’t tap your knees with that weird little hammer thing when you come in complaining of chest pain and shortness of breath. While the attention system is handy for ensuring that the network stays on task, IBM’s work into neural plasticity (how well memories…
  • Basically, the attention model will cover the short-term, active thought process while the memory portion will enable the network to streamline its function depending on the current situation (a generic sketch of the attention idea follows this list). But don’t expect to see AIs rivalling the depth of human consciousness anytime soon, Rish warns.
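To make the “attention span” idea above a little more concrete, here is a minimal sketch of generic soft (scaled dot-product) attention, the standard mechanism for weighting inputs by relevance. It assumes PyTorch, and the `soft_attention` function, dimensions, and toy data are all illustrative; IBM’s actual reward-driven attention model isn’t described here in enough detail to reproduce.

```python
# Minimal sketch of generic soft attention -- a learned weighting that
# focuses computation on the most task-relevant inputs. This is the
# standard scaled dot-product formulation, not IBM's model.
import torch
import torch.nn.functional as F

def soft_attention(query, keys, values):
    """query: (d,)  keys/values: (n, d) -> weighted summary of values."""
    scores = keys @ query / keys.shape[-1] ** 0.5   # relevance of each item
    weights = F.softmax(scores, dim=0)              # normalize to attention
    return weights @ values                         # blend of relevant items

items = torch.randn(5, 16)                          # 5 candidate "memories"
focus = soft_attention(torch.randn(16), items, items)
print(focus.shape)                                  # torch.Size([16])
```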

Since the days of Da Vinci’s “Ornithopter”, mankind’s greatest minds have sought inspiration from the natural world for their technological creations. It’s no di…

Mimicking our gray matter isn’t just a clever means of building better AIs, faster. It’s absolutely necessary for their continued development. Deep learning neural networks — the likes of which power AlphaGo as well as the current generation of image recognition and language translation systems — are the best machine learning systems we’ve developed to date. They’re capable of incredible feats but still face significant technological hurdles, like the fact that in order to be trained on a specific skill they require upfront access to massive data sets. What’s more, if you want to retrain that neural network to perform a new skill, the new training essentially overwrites what it learned before, a failure known as “catastrophic forgetting”, so you have to wipe its memory and start over from scratch.
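The failure is easy to demonstrate. The sketch below, assuming PyTorch, trains a small network on one toy task and then on a second; the tasks, architecture, and hyperparameters are illustrative, not anything from IBM’s papers. Because the second round of training freely overwrites the shared weights, performance on the first task falls back to roughly chance.

```python
# Minimal sketch of catastrophic forgetting: train on task A, then retrain
# on task B with no rehearsal of A, and watch accuracy on A collapse.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two toy binary-classification tasks with unrelated labelling rules.
x = torch.randn(512, 10)
task_a = (x[:, 0] > 0).long()   # task A: sign of feature 0
task_b = (x[:, 1] > 0).long()   # task B: sign of feature 1

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(labels, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), labels).backward()
        opt.step()

def accuracy(labels):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == labels).float().mean().item()

train(task_a)
print(f"task A after training A: {accuracy(task_a):.2f}")  # typically ~1.00
train(task_b)                                               # no rehearsal of A
print(f"task A after training B: {accuracy(task_a):.2f}")  # near chance
print(f"task B after training B: {accuracy(task_b):.2f}")  # typically ~1.00
```

Continual-learning research, IBM’s included, is largely about constraining exactly this kind of overwriting.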

Compare that to the human brain, which learns incrementally rather than bursting forth fully formed from a sea of data points. It’s a fundamental difference: deep learning AIs are generated from the top down, knowing everything they need from the get-go, while the human mind is built from the ground up, with previous lessons applied to subsequent experiences to create new knowledge.

What’s more, the human mind is especially adept at relational reasoning, which relies on logic to build connections between past experiences and provide insight into new situations on the fly. Statistical AI (i.e., machine learning) can mimic the brain’s pattern recognition skills but is garbage at applying logic. Symbolic AI, on the other hand, can leverage…
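The pairwise approach Lillicrap describes in the notes above can be sketched in a few lines. The following, again assuming PyTorch, is a minimal relation-network-style module: a shared MLP g scores every ordered pair of object representations, the scores are summed, and a second MLP f maps the aggregate to an answer. Layer sizes, dimensions, and the class name are illustrative, not the DeepMind model’s exact configuration.

```python
# Minimal sketch of pairwise relational reasoning: score every (i, j)
# object pair with a shared network g, aggregate, then answer with f.
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim=8, hidden=64, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):          # objects: (batch, n_objects, obj_dim)
        b, n, d = objects.shape
        left = objects.unsqueeze(2).expand(b, n, n, d)   # object i
        right = objects.unsqueeze(1).expand(b, n, n, d)  # object j
        pairs = torch.cat([left, right], dim=-1)         # all (i, j) pairs
        relations = self.g(pairs).sum(dim=(1, 2))        # aggregate relations
        return self.f(relations)

scene = torch.randn(4, 6, 8)              # 4 scenes, 6 objects each
print(RelationNetwork()(scene).shape)     # torch.Size([4, 10])
```

Because g is shared across every pair, the network is pushed to learn a general notion of “relation” rather than memorizing specific object combinations, which is the sense in which it is “explicitly forced to discover relationships”.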
