MarrNet: 3D Shape Reconstruction via 2.5D Sketches

#deeplearning #nips #computervision

  • 3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes.
  • In this work, we propose an end-to-end trainable framework that sequentially estimates 2.5D sketches (depth, surface normals, and silhouette) and then the 3D object shape.
  • First, compared to a full 3D shape, a 2.5D sketch is much easier to recover from a 2D image, and models that recover it transfer more readily from synthetic to real images.
  • Second, for 3D reconstruction from 2.5D sketches, a model trained on synthetic data transfers easily to real images, because rendered 2.5D sketches are invariant to appearance variations in real images such as lighting and texture.
  • Third, we derive differentiable projective functions from 3D shape to 2.5D sketches, making the framework end-to-end trainable on real images without requiring any real-image annotations.
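The third point is the key to annotation-free training: if projecting a predicted 3D shape back to a 2.5D sketch is differentiable, a consistency loss between the projected and estimated sketches can supervise the whole pipeline. As a minimal sketch of this idea (not the paper's exact formulation), the snippet below implements one differentiable voxel-to-depth projection via soft ray casting: each ray's expected stopping depth is a smooth function of the voxel occupancies, so gradients flow from a depth-map loss back into the 3D shape. The function name and the background-depth convention are illustrative assumptions.

```python
import numpy as np

def expected_depth(occupancy):
    """Soft ray casting along the last axis of a voxel occupancy grid.

    occupancy: array of shape (H, W, D) with values in [0, 1],
               interpreted as per-voxel stopping probabilities.
    Returns an (H, W) expected-depth map. Rays that pass through
    every voxel are assigned the background depth D (an assumption
    made for this sketch).
    """
    H, W, D = occupancy.shape
    # Probability that the ray is still free *before* reaching voxel d:
    # the product of (1 - occupancy) over all earlier voxels.
    free = np.cumprod(1.0 - occupancy, axis=-1)
    free_before = np.concatenate(
        [np.ones((H, W, 1)), free[..., :-1]], axis=-1)
    # Probability that the ray first stops exactly at voxel d.
    stop = occupancy * free_before
    depths = np.arange(D)
    # Leftover probability mass means the ray escaped the grid.
    background = 1.0 - stop.sum(axis=-1)
    return (stop * depths).sum(axis=-1) + background * D
```

Because every operation here is smooth in the occupancies, an L2 loss between this projection and the 2.5D-sketch estimate yields gradients for the shape network, which is what makes self-supervised fine-tuning on unannotated real images possible.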
