Welcome back to the Talking Papers Podcast! In our latest episode, we had the privilege of hosting the brilliant Sadegh Aliakbarian to delve into his ICCV 2023 paper, “HMD-NeMo: Online 3D Avatar Motion Generation From Sparse Observations”. Sadegh takes us on a journey through this pivotal research, which addresses a crucial aspect of immersive mixed reality experiences.
The quality of these experiences hinges on generating plausible and precise full-body avatar motion, a challenge given the limited input signals provided by Head-Mounted Devices (HMDs), typically only the 6-DoF poses of the head and hands. While recent approaches have made strides in generating full-body motion from such inputs, they assume full hand visibility. This assumption, however, breaks down in controller-free scenarios that rely on egocentric hand tracking, where the HMD's restricted field of view means the hands are often only partially visible.
“HMD-NeMo” addresses this gap with a unified approach that generates realistic full-body motion even when the hands are only partially visible. This lightweight neural network runs online and in real time, and at its core is a spatio-temporal encoder with temporally adaptable mask tokens that keep the generated motion plausible when hand observations are missing.
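To give a feel for the core idea, here is a minimal, illustrative sketch (not the authors' code): when a hand falls outside the HMD's field of view, its input embedding is replaced by a learnable mask token so the temporal encoder still receives a plausible placeholder. The module names, feature sizes, and the simple GRU encoder below are assumptions for illustration; the paper's mask tokens are additionally adapted over time, which this simplified version omits.

```python
import torch
import torch.nn as nn

class MaskedHandEncoder(nn.Module):
    """Toy encoder: swaps in a learnable mask token when a hand is untracked."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Learnable placeholder used whenever a hand observation is missing
        # (a simplified stand-in for the paper's temporally adaptable mask tokens).
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))
        self.hand_proj = nn.Linear(9, feat_dim)  # e.g. 6-DoF pose + velocity (assumed input size)
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)

    def forward(self, hand_obs: torch.Tensor, visible: torch.Tensor) -> torch.Tensor:
        # hand_obs: (batch, time, 9); visible: (batch, time) boolean visibility mask
        feats = self.hand_proj(hand_obs)
        # Replace features with the mask token wherever the hand is not tracked.
        feats = torch.where(visible.unsqueeze(-1), feats, self.mask_token)
        encoded, _ = self.temporal(feats)
        return encoded

# Usage: simulate a hand leaving the field of view halfway through a clip.
obs = torch.randn(1, 30, 9)
vis = torch.ones(1, 30, dtype=torch.bool)
vis[:, 15:] = False
out = MaskedHandEncoder()(obs, vis)
print(out.shape)  # torch.Size([1, 30, 64])
```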
Sadegh is currently a senior research scientist at Microsoft Mixed Reality and AI Lab-Cambridge (UK), where he’s at the forefront of Microsoft Mesh and avatar motion generation. He holds a PhD from the Australian National University, where he specialized in generative modeling of human motion. His research journey includes internships at Amazon AI, Five AI, and Qualcomm AI Research, focusing on generative models, representation learning, and adversarial examples.
We first crossed paths during our time at the Australian Centre for Robotic Vision (ACRV), where Sadegh was pursuing his PhD, and I was embarking on my postdoctoral journey. During this time, I had the privilege of collaborating with another co-author of the paper, Fatemeh Saleh, who also happens to be Sadegh’s life partner. It’s been incredible to witness their continued growth.
🚀 Join us as we uncover the critical advancements brought by “HMD-NeMo” and their implications for the future of mixed reality experiences.
AUTHORS
Sadegh Aliakbarian, Fatemeh Saleh, David Collier, Pashmina Cameron, Darren Cosker
ABSTRACT
Generating both plausible and accurate full body avatar motion is the key to the quality of immersive experiences in mixed reality scenarios. Head-Mounted Devices (HMDs) typically only provide a few input signals, such as head and hands 6-DoF. Recently, different approaches achieved impressive performance in generating full body motion given only head and hands signal. However, to the best of our knowledge, all existing approaches rely on full hand visibility. While this is the case when, e.g., using motion controllers, a considerable proportion of mixed reality experiences do not involve motion controllers and instead rely on egocentric hand tracking. This introduces the challenge of partial hand visibility owing to the restricted field of view of the HMD. In this paper, we propose the first unified approach, HMD-NeMo, that addresses plausible and accurate full body motion generation even when the hands may be only partially visible. HMD-NeMo is a lightweight neural network that predicts the full body motion in an online and real-time fashion. At the heart of HMD-NeMo is the spatio-temporal encoder with novel temporally adaptable mask tokens that encourage plausible motion in the absence of hand observations. We perform extensive analysis of the impact of different components in HMD-NeMo and introduce a new state-of-the-art on AMASS dataset through our evaluation.
RELATED PAPERS
LINKS AND RESOURCES
📚 Paper
To stay up to date with Sadegh's latest research, follow him on:
👨🏻🎓Personal page
👨🏻🎓Google Scholar
Recorded on September 22nd, 2023.
CONTACT
If you would like to be a guest, sponsor or share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
SUBSCRIBE AND FOLLOW
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: