Assembling Volumetric Primitives

Paper

Learning Shape Abstractions by Assembling Volumetric Primitives by Tulsiani et al. (CVPR 2017)

Summary

  • The authors propose a network that learns shape abstractions by assembling cuboids. The network can predict up to M primitives, each encoded by a tuple {zm, qm, tm}, i.e. {shape, rotation, translation}.
  • The network additionally predicts pm for each primitive, the probability that the primitive exists. A low pm (< 0.5) means the primitive is dropped from the final prediction.
  • There are two main losses at play (a minimal sketch of both appears after this list):
    • Coverage Loss – enforces that the object O is subsumed by the predicted assembly of primitives
    • Consistency Loss – enforces that each primitive Pm is subsumed by the object O
    • An additional parsimony term encourages the number of predicted primitives to be low
  • Dataset: ShapeNet – chairs, aeroplanes, animals
  • Results: Coverage and consistency losses are reported. No tables or comparisons.
  • Applications
    • Correspondence – part-based correspondences can be obtained between different objects of the same category
    • Image-based abstraction – a second network is trained to imitate the shape network’s predictions, but from image inputs
    • Shape Manipulation – deform one mesh to take on the shape of another
  • Drawbacks: only cuboid primitives are used. Future extensions could involve cylinders, spheres, cones, etc.
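
Below is a minimal NumPy sketch of the coverage and consistency losses summarized above. The cuboid_distance helper, the (w, x, y, z) quaternion convention, and the use of an unsigned point-to-cuboid distance as the field C(·; Pm) are illustrative assumptions; the paper's exact differentiable, sampled formulation differs in its details.

```python
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def cuboid_distance(points, dims, quat, trans):
    """Unsigned distance from world-space points (N, 3) to one cuboid primitive.

    dims  : (3,) half-extents (the shape parameters zm)
    quat  : (4,) unit quaternion (the rotation qm)
    trans : (3,) translation tm
    """
    R = quat_to_rotmat(quat)
    local = (points - trans) @ R                     # inverse rotation: R^T (p - t)
    outside = np.maximum(np.abs(local) - dims, 0.0)  # per-axis overshoot beyond the box
    return np.linalg.norm(outside, axis=1)           # zero for points inside the cuboid

def coverage_loss(object_points, primitives):
    """Object points should be covered by at least one primitive:
    mean over p ~ S(O) of min_m C(p; Pm)^2."""
    dists = np.stack([cuboid_distance(object_points, *prim) for prim in primitives])
    return np.mean(np.min(dists, axis=0) ** 2)

def consistency_loss(primitive_points, object_distance):
    """Each primitive should stay inside the object:
    sum over m of the mean squared distance from points sampled on Pm to the object.
    `object_distance` maps (N, 3) points to their distance to the object O."""
    return sum(np.mean(object_distance(pts) ** 2) for pts in primitive_points)
```

Here `primitives` would be a list of (dims, quat, trans) tuples for the primitives whose pm passes the threshold, and the parsimony term would simply add a small penalty growing with the number of active primitives.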

Approach

[Figure: primitives_1]

Results

[Figure: primitives_2]
