MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices

In this episode of the Talking Papers Podcast, I hosted Kejie Li to chat about his CVPR 2023 paper “MobileBrick: Building LEGO for 3D Reconstruction on Mobile Devices”.

In this paper, the authors propose a new dataset and evaluation paradigm for 3D object reconstruction. Creating a faithful digital twin of a 3D object is very difficult, even with expensive sensors. They introduce a new multi-view RGBD dataset captured with a mobile device. The nice trick for obtaining ground truth is that they used LEGO models, for which an exact CAD model exists. Two findings stand out: first, NeRF and NeuS work great, and second, you shouldn’t use low-quality depth if you have high-resolution RGB.

Kejie is currently a research scientist at ByteDance/TikTok. When writing this paper he was a postdoc at Oxford, working with Professor Philip Torr and Professor Victor Prisacariu. Prior to that, he completed his PhD at the University of Adelaide under the guidance of Professor Ian Reid. Although we hadn’t crossed paths until this episode, we share some common ground in our CVs, having been affiliated with different nodes of the ACRV (Adelaide for him, ANU for me). I’m excited to see what he comes up with next and eagerly await his future endeavours.

AUTHORS

Kejie Li, Jia-Wang Bian, Robert Castle, Philip H.S. Torr, Victor Adrian Prisacariu

ABSTRACT

High-quality 3D ground-truth shapes are critical for 3D object reconstruction evaluation. However, it is difficult to create a replica of an object in reality, and even 3D reconstructions generated by 3D scanners have artefacts that cause biases in evaluation. To address this issue, we introduce a novel multi-view RGBD dataset captured using a mobile device, which includes highly precise 3D ground-truth annotations for 153 object models featuring a diverse set of 3D structures. We obtain precise 3D ground-truth shape without relying on high-end 3D scanners by utilising LEGO models with known geometry as the 3D structures for image capture. The distinct data modality offered by high-resolution RGB images and low-resolution depth maps captured on a mobile device, when combined with precise 3D geometry annotations, presents a unique opportunity for future research on high-fidelity 3D reconstruction. Furthermore, we evaluate a range of 3D reconstruction algorithms on the proposed dataset.

📚COLMAP

📚NeRF

📚NeuS

📚CO3D

LINKS AND RESOURCES

📚 Paper

💻Project page

💻Code

To stay up to date with Kejie’s latest research, follow him on:

👨🏻‍🎓Personal page

👨🏻‍🎓Google Scholar

🐦Twitter

Recorded on May 8th 2023.

SPONSOR

This episode was sponsored by YOOM. YOOM is an Israeli startup dedicated to volumetric video creation. They were voted the 2022 best start-up to work for by Dun’s 100.
Join their team working on geometric deep learning research, implicit representations of 3D humans, NeRFs, and 3D/4D generative models.


Visit YOOM.com.


CONTACT

If you would like to be a guest, sponsor or share your thoughts, feel free to reach out via email: talking (dot) papers (dot) podcast (at) gmail (dot) com

SUBSCRIBE AND FOLLOW

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: