Constructive Solid Geometry on Neural Signed Distance Fields

In our latest episode of the Talking Papers Podcast, I had the pleasure of hosting Zoë Marschner, a first-year PhD student at Carnegie Mellon University. We delved into her research paper “Constructive Solid Geometry on Neural Signed Distance Fields,” published at SIGGRAPH Asia 2023. The topic revolves around the challenge of editing shapes encoded by neural SDFs, a popular geometric representation.

Zoë and her co-authors tackle the incorrect, non-SDF outputs that arise when common geometric operations are applied to neural Signed Distance Fields (SDFs). These outputs, which they call Pseudo-SDFs, cannot be reliably used for downstream tasks. To address this, they characterize the space of Pseudo-SDFs and introduce the closest point loss, a novel regularizer that encourages the output to be an exact SDF. The regularizer applies to many operations in which traditional methods produce a Pseudo-SDF, such as CSG booleans and swept volumes.
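To make the failure mode concrete, here is a minimal NumPy sketch (the sphere centers, radii, and query point are my own illustrative choices, not from the paper): intersecting two sphere SDFs with a pointwise max stays eikonal almost everywhere, yet it underestimates the true distance to the resulting lens-shaped surface, which is exactly the kind of Pseudo-SDF the closest point loss is designed to repair.

```python
import numpy as np

def sdf_sphere(p, center, radius):
    """Exact signed distance to a sphere."""
    return np.linalg.norm(p - center) - radius

# Two overlapping spheres (illustrative parameters, not from the paper).
cA, cB, r = np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 1.5

# CSG intersection via the usual max() trick.
p = np.array([0.0, 3.0, 0.0])                       # query point outside the lens
pseudo = max(sdf_sphere(p, cA, r), sdf_sphere(p, cB, r))

# True distance: by symmetry, the nearest point of the lens to p lies on the
# circle where the two spheres meet (x = 0, y^2 + z^2 = r^2 - 1).
nearest = np.array([0.0, np.sqrt(r**2 - 1.0), 0.0])
true_dist = np.linalg.norm(p - nearest)

print(f"max-based CSG value: {pseudo:.3f}")         # ~1.662
print(f"true distance:       {true_dist:.3f}")      # ~1.882
```

Union behaves symmetrically: taking the min() of two SDFs is exact outside the union but breaks down inside overlapping regions.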

As a former mechanical engineer, I find the combination of Constructive Solid Geometry (CSG) and Neural Signed Distance Fields intriguing, especially considering its potential in computer-aided design (CAD) applications. The seamless integration of traditional methods with the power of neural networks opens up new possibilities in shape editing and content creation.

Recording the episode with Zoë was a delightful experience, even though we haven’t had the chance to meet in person. It amazes me how early in her career she has already worked with some of the most esteemed individuals in the field. It was truly inspiring to hear her insights and dedication to driving innovation through academic research. I am excited about Zoë’s future research and the impact it will have on the field of Constructive Solid Geometry and Neural Signed Distance Fields.

AUTHORS


Zoë Marschner, Silvia Sellán, Hsueh-Ti Derek Liu, Alec Jacobson

ABSTRACT


Signed Distance Fields (SDFs) parameterized by neural networks have recently gained popularity as a fundamental geometric representation. However, editing the shape encoded by a neural SDF remains an open challenge. A tempting approach is to leverage common geometric operators (e.g., boolean operations), but such edits often lead to incorrect non-SDF outputs (which we call Pseudo-SDFs), preventing them from being used for downstream tasks. In this paper, we characterize the space of Pseudo-SDFs, which are eikonal yet not true distance functions, and derive the closest point loss, a novel regularizer that encourages the output to be an exact SDF. We demonstrate the applicability of our regularization to many operations in which traditional methods cause a Pseudo-SDF to arise, such as CSG and swept volumes, and produce a true (neural) SDF for the result of these operations.
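For intuition, here is a minimal PyTorch sketch of a closest-point style regularizer, written from the abstract's description rather than the authors' code (the function and variable names are mine): for an exact SDF f, the point x - f(x)∇f(x) is the closest surface point to x, so f evaluated there should be zero, and the loss penalizes any deviation.

```python
import torch

def closest_point_loss(f, x):
    """Closest-point style regularizer (sketch): for an exact SDF,
    projecting a sample x to x - f(x) * grad f(x) lands on the zero
    level set, so f at the projected point should vanish."""
    x = x.clone().requires_grad_(True)
    d = f(x)                                                     # (N, 1) signed distances
    g = torch.autograd.grad(d.sum(), x, create_graph=True)[0]   # (N, 3) spatial gradients
    y = x - d * g                                                # closest-point estimates
    return (f(y) ** 2).mean()                                    # zero when every projection lands on the surface

# Hypothetical usage with any neural SDF `net` mapping (N, 3) -> (N, 1):
# samples = torch.rand(1024, 3) * 2.0 - 1.0
# loss = csg_fit_loss + closest_point_loss(net, samples)
```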

RELATED WORKS

📚DeepSDF

📚Swept Volumes via Spacetime Numerical Continuation

📚Inigo Quilez blog

LINKS AND RESOURCES

📚Paper

💻Project page

To stay up to date with her latest research, follow her on:

👨🏻‍🎓Personal website

👨🏻‍🎓Google scholar

👨🏻‍🎓LinkedIn

This episode was recorded on October 24th, 2023.

CONTACT


If you would like to be a guest or a sponsor, or to share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com

SUBSCRIBE AND FOLLOW


🎧Subscribe on your favourite podcast app

📧Subscribe to our mailing list

🐦Follow us on Twitter

🎥Subscribe to our YouTube channel