BACON: Band-Limited Coordinate Networks for Multiscale Scene Representation

In this episode of the Talking Papers Podcast, I hosted David B. Lindell to chat about his paper “BACON: Band-Limited Coordinate Networks for Multiscale Scene Representation”, published in CVPR 2022.

In this paper, they tackled the question of how to train a coordinate network whose behavior can be analyzed and controlled. They do this by introducing a new type of neural network architecture that has an analytical Fourier spectrum. This enables multi-scale signal representation and yields an interpretable architecture with explicitly controllable bandwidth.

David recently completed his Postdoc at Stanford and will join the University of Toronto as an Assistant Professor. During our chat, I got to know a stellar academic with a unique view of the field and where it is going. We even got to meet in person at CVPR. I am looking forward to seeing what he comes up with next. It was a pleasure having him on the podcast.


David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein



Coordinate-based networks have emerged as a powerful tool for 3D representation and scene reconstruction. These networks are trained to map continuous input coordinates to the value of a signal at each point. Still, current architectures are black boxes: their spectral characteristics cannot be easily analyzed, and their behavior at unsupervised points is difficult to predict. Moreover, these networks are typically trained to represent a signal at a single scale, so naive downsampling or upsampling results in artifacts. We introduce band-limited coordinate networks (BACON), a network architecture with an analytical Fourier spectrum. BACON has constrained behavior at unsupervised points, can be designed based on the spectral characteristics of the represented signal, and can represent signals at multiple scales without per-scale supervision. We demonstrate BACON for multiscale neural representation of images, radiance fields, and 3D scenes using signed distance functions and show that it outperforms conventional single-scale coordinate networks in terms of interpretability and quality.
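To make the idea concrete, here is a minimal NumPy sketch of a BACON-style network in the multiplicative filter network family: each layer multiplies a linear transform of the hidden state by a sine filter with bounded frequencies, and a linear head at each layer reads out a band-limited output at a progressively finer scale. All names, layer sizes, and the frequency-initialization scheme below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bacon(in_dim=1, hidden=16, out_dim=1, n_layers=3, max_freq=8.0):
    """Sketch of a BACON-style multiplicative filter network (illustrative).

    Frequencies of each sine filter are bounded, so the product of filters
    at layer i has a bandwidth that grows by a known amount per layer,
    giving an analytical bound on each output's Fourier spectrum.
    """
    band = max_freq / n_layers  # per-layer frequency budget (assumption)
    omegas = [rng.uniform(-band, band, size=(hidden, in_dim))
              for _ in range(n_layers)]
    phis = [rng.uniform(-np.pi, np.pi, size=hidden) for _ in range(n_layers)]
    # hidden linear layers between filters, and one output head per scale
    Ws = [rng.normal(scale=1 / np.sqrt(hidden), size=(hidden, hidden))
          for _ in range(n_layers - 1)]
    heads = [rng.normal(scale=1 / np.sqrt(hidden), size=(out_dim, hidden))
             for _ in range(n_layers)]
    return omegas, phis, Ws, heads

def bacon_forward(params, x):
    """Return one band-limited output per layer, coarsest first."""
    omegas, phis, Ws, heads = params
    z = np.sin(x @ omegas[0].T + phis[0])            # first sine filter
    outputs = [z @ heads[0].T]                       # coarsest-scale output
    for W, omega, phi, head in zip(Ws, omegas[1:], phis[1:], heads[1:]):
        # multiply linear transform by a new filter: bandwidths add
        z = (z @ W.T) * np.sin(x @ omega.T + phi)
        outputs.append(z @ head.T)                   # finer-scale output
    return outputs

params = make_bacon()
x = np.linspace(-1.0, 1.0, 5)[:, None]               # 1D input coordinates
outs = bacon_forward(params, x)
print([o.shape for o in outs])                       # one output per scale
```

The key property this sketch illustrates is that every intermediate output is a sum of sines whose frequencies are sums of the filter frequencies, so the spectrum of each scale is known by construction rather than measured after training.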



📚Fourier Features Networks (FFN)

📚Multiplicative Filter Networks (MFN)


📚Followup work: Residual MFN


💻Project website

📚 Paper



To stay up to date with David’s latest research, follow him on:

👨🏻‍🎓Personal Page


👨🏻‍🎓Google Scholar


Recorded on June 15th, 2022.


If you would like to be a guest, sponsor the podcast, or just share your thoughts, feel free to reach out via email:


🎧Subscribe on your favourite podcast app:

📧Subscribe to our mailing list:

🐦Follow us on Twitter:

🎥YouTube Channel: