IBM is teaching AI to behave more like the human brain

Can a machine make memories? How IBM is exploring neural network learning in #AI:  @engadget

  • Deep learning neural networks — the likes of which power AlphaGo as well as the current generation of image recognition and language translation systems — are the best machine learning systems we’ve developed to date.
  • While neurons use their various connections with each other to recognize patterns, “We are explicitly forcing the network to discover the relationships that exist” between pairs of objects in a given scenario, Timothy Lillicrap, a computer scientist at DeepMind, told Science Magazine. When subsequently tasked in June with answering complex questions…
  • In a pair of research papers presented at the 2017 International Joint Conference on Artificial Intelligence held in Melbourne, Australia last week, IBM submitted two studies: one looking into how to grant AI an “attention span”, the other examining how to apply the biological process of neurogenesis — that is,…
  • It’s the same way that your doctor doesn’t tap your knees with that weird little hammer thing when you come in complaining of chest pain and shortness of breath. While the attention system is handy for ensuring that the network stays on task, IBM’s work on neural plasticity (how well memories…
  • Basically, the attention model will cover the short-term, active thought process, while the memory portion will enable the network to streamline its function depending on the current situation. But don’t expect to see AIs rivalling the depth of human consciousness anytime soon, Rish warns.
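The attention-plus-memory pairing described above can be sketched in miniature: a query representing the network’s current, short-term focus reads softly from a store of longer-term memory vectors. This is a generic soft-attention sketch, not IBM’s actual architecture; the function names and dimensions are illustrative.

```python
import numpy as np

def attend(query, memory):
    """Soft attention: read from memory, weighted by similarity to the query.

    query:  (d,) vector for the current input (the short-term "focus")
    memory: (n, d) matrix of stored memory vectors (the longer-term store)
    """
    scores = memory @ query                 # similarity of each memory to the query
    weights = np.exp(scores - scores.max()) # softmax (max subtracted for stability)
    weights /= weights.sum()                # attention distribution over memories
    return weights @ memory                 # weighted read-out from the memory store

rng = np.random.default_rng(0)
memory = rng.standard_normal((5, 4))              # five stored 4-d memory vectors
query = memory[2] + 0.1 * rng.standard_normal(4)  # a query resembling memory slot 2
read = attend(query, memory)                      # blended read, biased toward slot 2
```

The softmax weighting is what lets the mechanism stay "on task": memories irrelevant to the current query receive near-zero weight rather than being mixed in uniformly.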

Since the days of da Vinci’s “Ornithopter”, mankind’s greatest minds have sought inspiration from the natural world for their technological creations. It’s no di…
Continue reading “IBM is teaching AI to behave more like the human brain”


New neural-network algorithm learns directly from human instructions instead of examples

New neural-network algorithm learns directly from human instructions instead of examples  #ai

  • For example, you could train a neural network to identify sky in a photograph by showing it hundreds of pictures with the sky labeled.
  • Abstract of Hair Segmentation Using Heuristically-Trained Neural Networks
  • Humans conventionally “teach” neural networks by providing a set of labeled data and asking the neural network to make decisions based on the samples it’s seen.
  • Applying the method to the binary classification of hair versus nonhair patches, we obtain a 2.2% performance increase using the heuristically trained NN over the current state-of-the-art hair segmentation method.
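The two training regimes contrasted above can be sketched side by side: instead of hand-labeling hundreds of examples, a human-written rule generates the labels, and an ordinary classifier is then trained on them. The brightness/blueness rule and the perceptron below are illustrative stand-ins, not the heuristics or network from the paper.

```python
import numpy as np

# Conventional supervision would require hand-labeled pixels; here a human-written
# rule produces the labels instead (a toy stand-in for heuristic training).
def heuristic_label(pixel):
    """Toy rule: call a pixel 'sky' if it is bright and blue-dominant."""
    r, g, b = pixel
    return 1 if (b > 0.5 and b > r and b > g) else 0

rng = np.random.default_rng(0)
pixels = rng.random((200, 3))                         # random RGB values in [0, 1]
labels = np.array([heuristic_label(p) for p in pixels])

# Train a minimal perceptron on the rule-generated labels.
w, bias = np.zeros(3), 0.0
for _ in range(50):
    for x, y in zip(pixels, labels):
        pred = 1 if x @ w + bias > 0 else 0
        w += (y - pred) * x
        bias += (y - pred)

preds = np.array([1 if x @ w + bias > 0 else 0 for x in pixels])
accuracy = (preds == labels).mean()                   # agreement with the heuristic
```

The appeal of the approach is that the expensive step, labeling, is replaced by encoding what a human already knows; the network then generalizes that rule to inputs the rule-writer never inspected.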

Conventional neural-network image-recognition algorithm trained to recognize human hair (left), compared to the more precise heuristically trained algorithm
Continue reading “New neural-network algorithm learns directly from human instructions instead of examples”

Fun LoL to Teach Machines How to Learn More Efficiently

Fun LoL brings rigor to quest for the ultimate learning machine.  #math #AI #machinelearning

  • The objective of Fun LoL is to investigate and characterize fundamental limits of machine learning with supportive theoretical foundations to enable the design of systems that learn more efficiently.
  • DARPA seeks mathematical framework to characterize fundamental limits of learning
  • To find answers to these questions, DARPA recently announced its Fundamental Limits of Learning (Fun LoL) program.
  • The goal of Fun LoL is to achieve a similar mathematical breakthrough for machine learning and AI.
  • If you slightly tweak a few rules of the game Go, for example, the machine won’t be able to generalize from what it already knows.



It’s not easy to put the intelligence in artificial intelligence. Current machine learning techniques generally rely on huge amounts of training data, vast computational resources, and a time-consuming trial-and-error methodology. Even then, the process typically results in learned concepts that aren’t easily generalized to solve related problems, or that can’t be leveraged to learn more complex concepts. The process of advancing machine learning could no doubt go more efficiently, but how much so? To date, very little is known about the limits of what could be achieved for a given learning problem, or even how such limits might be determined. To find answers to these questions, DARPA recently announced its Fundamental Limits of Learning (Fun LoL) program. The objective of Fun LoL is to investigate and characterize fundamental limits of machine learning, with supportive theoretical foundations, to enable the design of systems that learn more efficiently.

