In this episode of the Talking Papers Podcast, I hosted Despoina Paschalidou to chat about her paper “Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks”, published in CVPR 2021. Neural Parts learns to parse 3D objects into geometrically accurate and semantically consistent part arrangements without any part-level supervision. Despoina is currently a postdoctoral researcher in the Geometric Computation Group at Stanford University; this work was done while she was still a PhD student at the Max Planck ETH Center for Learning Systems. Her perspective on interpretable 3D shape representations makes her stand out in a domain where interpretability is often overlooked. Despoina is the first guest on the podcast whom I did not personally know before the interview. She made the experience pleasant and fun, and it was a pleasure recording this episode with her.
AUTHORS
Despoina Paschalidou, Angelos Katharopoulos, Andreas Geiger, Sanja Fidler
ABSTRACT
Impressive progress in 3D shape extraction led to representations that can capture object geometries with high fidelity. In parallel, primitive-based methods seek to represent objects as semantically consistent part arrangements. However, due to the simplicity of existing primitive representations, these methods fail to accurately reconstruct 3D shapes using a small number of primitives/parts. We address the trade-off between reconstruction quality and number of parts with Neural Parts, a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN) which implements homeomorphic mappings between a sphere and the target object. The INN allows us to compute the inverse mapping of the homeomorphism, which, in turn, enables the efficient computation of both the implicit surface function of a primitive and its mesh, without any additional post-processing. Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision. Evaluations on ShapeNet, D-FAUST and FreiHAND demonstrate that our primitives can capture complex geometries and thus simultaneously achieve geometrically accurate as well as interpretable reconstructions using an order of magnitude fewer primitives than state-of-the-art shape abstraction methods.
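The key mechanism in the abstract is worth unpacking: because the sphere-to-part mapping is a bijection with an exact inverse, one network yields both an explicit surface (push sphere samples forward) and an implicit occupancy test (pull a query point back and check whether it lands inside the sphere). Below is a minimal, unconditional PyTorch sketch of this idea using additive coupling layers; it is a toy illustration under my own assumptions, not the authors' implementation, which additionally conditions the INN on a learned shape feature and predicts multiple primitives (see the linked code for the real thing).

```python
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """Additive coupling layer: coordinates selected by `mask` pass through
    unchanged; the remaining coordinates are shifted by a learned function
    of the fixed ones. The shift cancels exactly, so the layer is invertible."""
    def __init__(self, mask, hidden=64):
        super().__init__()
        self.register_buffer("mask", mask)  # (3,) boolean
        n_fixed = int(mask.sum())
        self.shift = nn.Sequential(
            nn.Linear(n_fixed, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 - n_fixed),
        )

    def forward(self, x):
        y = x.clone()
        y[..., ~self.mask] = x[..., ~self.mask] + self.shift(x[..., self.mask])
        return y

    def inverse(self, y):
        x = y.clone()
        x[..., ~self.mask] = y[..., ~self.mask] - self.shift(y[..., self.mask])
        return x

class SphereHomeomorphism(nn.Module):
    """Stack of couplings with alternating masks: a bijection on R^3.
    forward() deforms sphere samples onto the part surface; inverse()
    pulls a query point back to the sphere for the implicit inside test."""
    def __init__(self, n_layers=4, hidden=64):
        super().__init__()
        masks = [torch.tensor([True, False, False]),
                 torch.tensor([False, True, True])]
        self.layers = nn.ModuleList(
            Coupling(masks[i % 2].clone(), hidden) for i in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

    def inverse(self, y):
        for layer in reversed(self.layers):
            y = layer.inverse(y)
        return y

def inside_part(flow, points, radius=1.0):
    """A point is inside the part iff its preimage lies inside the sphere."""
    return flow.inverse(points).norm(dim=-1) < radius

flow = SphereHomeomorphism()
sphere = torch.randn(1024, 3)
sphere = sphere / sphere.norm(dim=-1, keepdim=True)  # samples on the unit sphere
surface = flow(sphere)                               # explicit surface samples
occupancy = inside_part(flow, torch.randn(5, 3))     # implicit membership queries
```

Because `inverse()` here is exact rather than approximated, the occupancy test needs no root finding or post-processing, which is precisely the property the abstract highlights; connecting the deformed sphere samples according to a fixed sphere triangulation would likewise give a mesh for free.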
RELATED PAPERS
📚 “KeypointDeformer: Unsupervised 3D Keypoint Discovery for Shape Control”
📚 “Learning Shape Abstractions by Assembling Volumetric Primitives”
📚 “Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids”
📚 “CvxNet: Learnable Convex Decomposition”
📚 “Neural Star Domain as Primitive Representation”
📚 “Learning Shape Templates with Structured Implicit Functions”
LINKS AND RESOURCES
💻 Project Page: https://paschalidoud.github.io/neural_parts
💻 CODE: https://github.com/paschalidoud/neural_parts
📚 Paper Link: “Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks”
This episode was recorded on April 25th, 2021.
CONTACT
If you would like to be a guest, sponsor or just share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
SUBSCRIBE AND FOLLOW
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: