In this episode of the Talking Papers Podcast, we hosted Amir Belder. We had a great chat about his paper “Random Walks for Adversarial Meshes”, published at SIGGRAPH 2022.
In this paper, they take on the task of crafting an adversarial attack for triangle meshes. This is a non-trivial task since meshes are irregular. To cope with the irregularity, they use random walks along the surface instead of the raw mesh. On top of that, they train an imitating network that mimics the predictions of the attacked network, and use its gradients to perturb the input vertices.
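To make the random-walk idea concrete, here is a minimal sketch of how a walk turns an irregular mesh into a regular, fixed-length sequence. All names here are illustrative (not from the paper's code), and the paper's actual walk generation includes more machinery, e.g., how revisited vertices and walk features are handled:

```python
import numpy as np

def random_walk(vertices, neighbors, walk_len, rng=None):
    """Turn an irregular mesh into a regular sequence: start at a
    random vertex and repeatedly step to a random adjacent vertex.

    vertices:  (V, 3) array of vertex coordinates
    neighbors: list where neighbors[i] holds the vertex indices adjacent to i
    returns:   (walk_len, 3) array of the visited coordinates
    """
    rng = rng or np.random.default_rng()
    v = int(rng.integers(len(vertices)))   # random starting vertex
    walk = [v]
    for _ in range(walk_len - 1):
        v = int(rng.choice(neighbors[v]))  # step along a mesh edge
        walk.append(v)
    return vertices[np.array(walk)]
```

Because every walk has the same length and ordering regardless of the mesh's connectivity, a standard sequence network can consume it, which sidesteps the irregularity problem.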
Amir is currently a PhD student at the Computer Graphics and Multimedia Lab at the Technion – Israel Institute of Technology. His research focuses on computer graphics, geometry processing, and machine learning. We spend a lot of time together at the lab and often chat about science, papers, and where the field is headed. Having this paper published was a great opportunity to share one of these conversations with you.
AUTHORS
Amir Belder, Gal Yefet, Ran Ben-Itzhak, Ayellet Tal
ABSTRACT
A polygonal mesh is the most commonly used representation of surfaces in computer graphics. Therefore, it is not surprising that a number of mesh classification networks have recently been proposed. However, while adversarial attacks are widely researched in 2D, the field of adversarial meshes is underexplored. This paper proposes a novel, unified, and general adversarial attack, which leads to misclassification of several state-of-the-art mesh classification neural networks. Our attack approach is black-box, i.e. it has access only to the network’s predictions, but not to the network’s full architecture or gradients. The key idea is to train a network to imitate a given classification network. This is done by utilizing random walks along the mesh surface, which gather geometric information. These walks provide insight into the regions of the mesh that are important for the correct prediction of the given classification network. These mesh regions are then modified more than other regions in order to attack the network in a manner that is barely visible to the naked eye.
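For a rough sense of how the black-box part fits together, below is a hedged PyTorch sketch of the two stages the abstract describes: first fit an imitator to the target network's predictions (the only thing a black-box attacker can query), then take gradient steps against the imitator as a stand-in for the target's hidden gradients. Every name here is illustrative, and details such as the loss, step size, and the imperceptibility constraints follow the paper rather than this sketch:

```python
import torch
import torch.nn.functional as F

def train_imitator(imitator, target_predict, walks, lr=1e-4, epochs=10):
    """Stage 1: fit the imitator to reproduce the black-box target's
    predicted probabilities, queried one walk at a time."""
    opt = torch.optim.Adam(imitator.parameters(), lr=lr)
    for _ in range(epochs):
        for w in walks:                         # w: (walk_len, 3) tensor
            soft = target_predict(w)            # black-box query -> (num_classes,)
            log_q = F.log_softmax(imitator(w.unsqueeze(0)), dim=-1)
            loss = F.kl_div(log_q, soft.unsqueeze(0), reduction="batchmean")
            opt.zero_grad(); loss.backward(); opt.step()

def attack_step(imitator, vertices, walk_idx, true_label, step=1e-3):
    """Stage 2: one FGSM-style step (cf. Goodfellow et al., listed under
    RELATED PAPERS) against the imitator: nudge the vertices visited by
    the walk so as to *increase* the loss on the true class."""
    pts = vertices[walk_idx].detach().clone().requires_grad_(True)
    logits = imitator(pts.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    with torch.no_grad():
        # duplicate indices from revisited vertices keep the last write only;
        # the paper handles such details more carefully than this sketch
        vertices[walk_idx] = pts + step * pts.grad.sign()
    return vertices
```

The near-invisibility of the final attack comes from where the perturbation lands: the walks reveal which regions drive the target's prediction, and those regions absorb most of the (small) displacements.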
RELATED PAPERS
📚Explaining and Harnessing Adversarial Examples
LINKS AND RESOURCES
📚Paper
💻Code
To stay up to date with Amir’s latest research, follow him on:
👨🏻‍🎓Google Scholar
Recorded on November 23rd, 2022.
CONTACT
If you would like to be a guest or a sponsor, or want to share your thoughts, feel free to reach out via email: talking.papers.podcast@gmail.com
SUBSCRIBE AND FOLLOW
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: