[1606.04671v2] Progressive Neural Networks

New ways to pretrain/finetune via Progressive Neural Networks  #deepmind #deeplearning

  • Abstract: Learning to solve complex sequences of tasks, while both leveraging transfer and avoiding catastrophic forgetting, remains a key obstacle to achieving human-level intelligence.
  • The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features (see the sketch after this list).
  • Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
  • From: Andrei Rusu. Submitted on 15 Jun 2016 (v1); last revised 21 Jun 2016 (this version, v2).
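
The lateral-connection idea in the second bullet can be made concrete with a small sketch. The PyTorch code below is an illustrative two-column example, not the paper's exact architecture: the class name, layer sizes, and plain linear lateral connections are assumptions made for clarity (the paper uses richer adapter modules). Column 1 is trained on the first task and then frozen; column 2 is added for the second task and receives lateral inputs from column 1's frozen activations.

```python
import torch
import torch.nn as nn


class ProgressiveTwoColumn(nn.Module):
    """Minimal two-column progressive network sketch (illustrative, not the paper's exact model).

    Column 1 is trained on the first task and then frozen. Column 2, added for the
    second task, has its own weights plus lateral connections that read the frozen
    activations of column 1, so prior features are reused but never overwritten.
    """

    def __init__(self, in_dim=64, hidden=128, out_dim=4):
        super().__init__()
        # Column 1 (first task).
        self.c1_l1 = nn.Linear(in_dim, hidden)
        self.c1_l2 = nn.Linear(hidden, hidden)
        self.c1_out = nn.Linear(hidden, out_dim)
        # Column 2 (second task), with lateral connections from column 1.
        self.c2_l1 = nn.Linear(in_dim, hidden)
        self.c2_l2 = nn.Linear(hidden, hidden)
        self.lat_l2 = nn.Linear(hidden, hidden)    # lateral: column-1 layer 1 -> column-2 layer 2
        self.c2_out = nn.Linear(hidden, out_dim)
        self.lat_out = nn.Linear(hidden, out_dim)  # lateral: column-1 layer 2 -> column-2 output

    def freeze_column1(self):
        # Call after training on the first task; column-1 weights never change again.
        for module in (self.c1_l1, self.c1_l2, self.c1_out):
            for p in module.parameters():
                p.requires_grad = False

    def forward_task1(self, x):
        h1 = torch.relu(self.c1_l1(x))
        h2 = torch.relu(self.c1_l2(h1))
        return self.c1_out(h2)

    def forward_task2(self, x):
        # Frozen column-1 activations feed the new column through lateral connections.
        with torch.no_grad():
            h1_prev = torch.relu(self.c1_l1(x))
            h2_prev = torch.relu(self.c1_l2(h1_prev))
        h1 = torch.relu(self.c2_l1(x))
        h2 = torch.relu(self.c2_l2(h1) + self.lat_l2(h1_prev))
        return self.c2_out(h2) + self.lat_out(h2_prev)
```

The key property in this sketch is that training on the second task only updates column-2 and lateral parameters, so the network's behavior on the first task is preserved exactly while its frozen features remain available for transfer.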

To read the full article, click here.


@quantombone: “New ways to pretrain/finetune via Progressive Neural Networks #deepmind #deeplearning”


