VLN BERT: A Recurrent Vision-and-Language BERT for Navigation

In this episode of the Talking Papers Podcast, I hosted Yicong Hong to chat about his paper “VLN BERT: A Recurrent Vision-and-Language BERT for Navigation”, published at CVPR 2021. In this paper, the authors take on the task of vision-and-language navigation (VLN) and propose a time-aware recurrent BERT model. The recurrent function maintains the agent’s cross-modal state information, enabling the model to achieve state-of-the-art results. When I started my postdoc position at ANU, Yicong was in the first year of his PhD. Since then, it has been a delight to watch him grow as a researcher. One of the things I love most about his style is his relentlessness: he won’t let a problem go until he figures it out (reminds me of someone…). Yicong is a great early-career researcher (soon to complete his PhD), and it was a pleasure recording this episode with him.

 

AUTHORS

Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould

 

ABSTRACT

Accuracy of many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application for the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty adapting the BERT architecture to the partially observable Markov decision process present in VLN, requiring history-dependent attention and decision making. In this paper, we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring expression tasks simultaneously.
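To make the core idea concrete, here is a minimal PyTorch sketch of a recurrent state token in a cross-modal transformer. This is my own illustration, not the authors’ code (see the repository linked below for the real implementation); the class name, dimensions, and single-layer setup are all simplifying assumptions.

```python
# Minimal sketch (NOT the authors' implementation) of the core idea:
# a [STATE] token whose output is fed back in at the next time step,
# acting as the agent's recurrent memory in place of an LSTM decoder.
import torch
import torch.nn as nn

class StateRecurrentVLN(nn.Module):  # hypothetical name for illustration
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        # One transformer layer stands in for the full V&L BERT stack.
        self.layer = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True)
        self.action_head = nn.Linear(d_model, 1)

    def forward(self, state, lang_tokens, vis_tokens):
        # state:       (B, 1, D)  recurrent cross-modal state token
        # lang_tokens: (B, L, D)  encoded instruction
        # vis_tokens:  (B, K, D)  candidate view features at this step
        x = torch.cat([state, lang_tokens, vis_tokens], dim=1)
        x = self.layer(x)
        new_state = x[:, :1]                       # updated [STATE] token
        vis_out = x[:, 1 + lang_tokens.size(1):]   # attended visual tokens
        action_logits = self.action_head(vis_out).squeeze(-1)  # (B, K)
        return new_state, action_logits

# Usage: the state token is carried across navigation steps.
model = StateRecurrentVLN()
state = torch.zeros(2, 1, 768)       # initial state (e.g., from [CLS])
lang = torch.randn(2, 20, 768)       # 20 instruction tokens
for _ in range(3):                   # three navigation steps
    vis = torch.randn(2, 8, 768)     # 8 candidate views this step
    state, logits = model(state, lang, vis)
    action = logits.argmax(dim=-1)   # pick the next viewpoint
```

The design choice this sketches is the one the abstract highlights: instead of bolting a recurrent decoder onto BERT, the model reuses its own attention machinery, with one dedicated token threading history through the partially observable decision process.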

 

📚 Attention Is All You Need

📚 Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training

💻 Project Page and CODE: https://github.com/YicongHong/Recurrent-VLN-BERT

📚 Paper

 

This episode was recorded on April 16th, 2021.

 

CONTACT

If you would like to be a guest or sponsor, or just want to share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com


SUBSCRIBE AND FOLLOW

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: