INR2Vec: Deep Learning on Implicit Neural Representations of Shapes

In this episode of the Talking Papers Podcast, I hosted Luca De Luigi. We had a great chat about his paper “Deep Learning on Implicit Neural Representations of Shapes”, AKA INR2Vec, published at ICLR 2023.

In this paper, they take implicit neural representations to the next level and use them as input signals for neural networks to solve multiple downstream tasks. The core idea was captured by one of the authors in a very catchy and concise tweet: “Signals are networks so networks are data and so networks can process other networks to understand and generate signals”.

Luca recently received his PhD from the University of Bologna and is currently working at eyecan.ai, a startup based in Bologna. His research focuses on neural representations of signals, especially for 3D geometry. To be honest, I knew I wanted to get Luca on the podcast the second I saw the paper on arXiv because I was working on a related topic but had to shelve it due to time management issues. This paper got me excited about that topic again. I didn’t know Luca before recording the episode and it was a delight to get to know him and his work.

AUTHORS

Luca De Luigi, Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama Ramirez, Samuele Salti, Luigi Di Stefano


ABSTRACT

 

Implicit Neural Representations (INRs) have emerged in the last few years as a powerful tool to continuously encode a variety of different signals like images, videos, audio and 3D shapes. When applied to 3D shapes, INRs allow us to overcome the fragmentation and shortcomings of the popular discrete representations used so far. Yet, considering that INRs consist of neural networks, it is not clear whether and how it may be possible to feed them into deep learning pipelines aimed at solving a downstream task. In this paper, we put forward this research problem and propose inr2vec, a framework that can compute a compact latent representation for an input INR in a single inference pass. We verify that inr2vec can effectively embed the 3D shapes represented by the input INRs and show how the produced embeddings can be fed into deep learning pipelines to solve several tasks by processing exclusively INRs.
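To make the idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of the pipeline the abstract describes: flatten the weights of an input INR, encode them into a compact embedding in a single forward pass, and feed that embedding to a downstream task head. The layer sizes, the naive weight-flattening strategy, and the names (`flatten_inr_weights`, `INREncoder`) are illustrative assumptions, not the authors’ actual architecture.

```python
import torch
import torch.nn as nn


def flatten_inr_weights(inr: nn.Module) -> torch.Tensor:
    """Concatenate all parameters of an INR (a small MLP) into one flat vector."""
    return torch.cat([p.detach().flatten() for p in inr.parameters()])


class INREncoder(nn.Module):
    """Maps a flattened INR weight vector to a compact latent embedding."""

    def __init__(self, inr_param_count: int, embed_dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(inr_param_count, 2048),
            nn.ReLU(),
            nn.Linear(2048, embed_dim),
        )

    def forward(self, flat_weights: torch.Tensor) -> torch.Tensor:
        return self.net(flat_weights)


# Toy SDF-style INR: maps a 3D coordinate to a signed distance value.
inr = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

flat = flatten_inr_weights(inr)                   # 1-D vector of all INR weights
encoder = INREncoder(inr_param_count=flat.numel())
embedding = encoder(flat.unsqueeze(0))            # shape: (1, 1024)

# Downstream task head, e.g. shape classification over 40 categories.
classifier = nn.Linear(1024, 40)
logits = classifier(embedding)                    # shape: (1, 40)
```

In the paper itself, the encoder is trained together with an implicit decoder that reconstructs the underlying signal from the embedding; the sketch above only illustrates the single-pass inference flow mentioned in the abstract.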

 

RELATED PAPERS

📚 PointNet

LINKS AND RESOURCES

📚 Paper

💻Project page

To stay up to date with Luca’s latest research, follow him on:

👨🏻‍🎓Google Scholar

👨🏻‍🎓LinkedIn

Recorded on March 22, 2023.

SPONSOR

This episode was sponsored by YOOM. YOOM is an Israeli startup dedicated to volumetric video creation. They were voted the 2022 best start-up to work for by Dun’s 100.
Join their team working on geometric deep learning research, implicit representations of 3D humans, NeRFs, and 3D/4D generative models.


Visit YOOM.com.


CONTACT

If you would like to be a guest or a sponsor, or to share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com

SUBSCRIBE AND FOLLOW

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: