For decades, we have envisioned a digital world where we can experience the physical world in all its three-dimensional glory, but until recently, achieving it has been a significant challenge. While we have been able to communicate with others through calls, videos, and photos, these experiences have been limited to a 2D representation of reality. We’ve always wanted more: the ability to see people, objects and places in 3D, to immerse ourselves in the world around us. However, the accurate reconstruction of 3D scenes and objects has been a complex and challenging task, requiring significant advances in computational technology and methods.
Accurate reconstruction of 3D scenes and objects is a crucial problem in fields such as robotics, photogrammetry, and AR/VR. Recently, Neural Radiance Fields (NeRF) have become the de facto solution for 3D scene reconstruction. They can synthesize novel views quite accurately using a 3D representation in which each location in space can emit radiance. The impressive results of NeRF have drawn attention in the literature, and there have been numerous attempts to improve its performance.
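NeRF's core rendering rule can be illustrated with a short sketch. The alpha-compositing formula below is the standard volume-rendering rule from the original NeRF paper; the sample densities and colors are made up purely for illustration.

```python
import math

def render_ray(densities, colors, deltas):
    """Composite the colour along one ray with the NeRF rule
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance
    (how much light survives to reach sample i)."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0
    for sigma, c, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)        # opacity of this segment
        weight = transmittance * alpha
        color = [acc + weight * ch for acc, ch in zip(color, c)]
        transmittance *= 1.0 - alpha                  # light left after segment
    return color

# A single high-density sample dominates the ray colour (here: green):
print(render_ray([0.0, 50.0, 0.0],
                 [(1, 0, 0), (0, 1, 0), (0, 0, 1)],
                 [0.1, 0.1, 0.1]))
```

Because the second sample has high density, almost all of the ray's weight lands on its green color, which is exactly how NeRF "sees" an opaque surface along a ray.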
Most of these works have focused on improving NeRF in terms of image quality, robustness, training speed, and rendering speed. However, they share a limitation: almost all of them optimize NeRF for the novel view synthesis (NVS) task. As a result, they cannot be used to obtain accurate 3D meshes from radiance fields, which is why NeRF cannot be directly integrated into most computer graphics pipelines.
What if we want to extract geometrically accurate meshes from NeRF so that we can integrate them into computer graphics pipelines? How can we extract accurate 3D meshes from NeRF? Time to meet NeRFMeshing.
NeRFMeshing is designed to extract geometrically accurate meshes from trained NeRF networks efficiently. It can produce 3D meshes with precise geometry that can be rendered in real time on commodity hardware.
NeRFMeshing builds on top of trained NeRF networks by introducing a new structure called the signed surface approximation network (SSAN). SSAN acts as a post-processing pipeline that determines the surface and appearance of a NeRF. It generates an accurate 3D triangular mesh of the scene and uses a small appearance grid to produce view-dependent colors. NeRFMeshing is compatible with any NeRF design and allows easy integration of new developments, such as better handling of unbounded scenes or reflective objects.
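The division of labor described above, a rasterized mesh for geometry plus a small appearance model for view-dependent color, can be sketched as follows. The feature sizes, weights, and the single sigmoid layer here are illustrative assumptions, not the paper's actual architecture.

```python
import math

def shade_vertex(feature, view_dir, weights, bias):
    """Toy deferred-shading step: a rasterizer looks up a per-surface-point
    feature from an appearance grid, and a tiny network maps
    (feature, view direction) to RGB. Shapes and weights are invented
    for illustration only."""
    x = list(feature) + list(view_dir)                # concatenate inputs
    out = []
    for row, b in zip(weights, bias):                 # one linear layer
        s = sum(w * xi for w, xi in zip(row, x)) + b
        out.append(1.0 / (1.0 + math.exp(-s)))        # sigmoid keeps RGB in [0, 1]
    return out

# Hypothetical 2-D surface feature, unit view direction, 3x5 weight matrix:
rgb = shade_vertex([0.2, 0.5], [0.0, 0.0, 1.0],
                   [[0.1] * 5] * 3, [0.0] * 3)
print(rgb)  # three channel values in [0, 1]
```

Because the expensive radiance field is distilled into this cheap per-pixel lookup, the mesh can be drawn with ordinary rasterization hardware while still changing color with the viewing angle.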
SSAN computes a truncated signed distance field (TSDF) and an appearance feature field. Using the estimated NeRF geometry and the training views, the trained NeRF is distilled into the SSAN model. The 3D mesh is then extracted from the SSAN and can be rendered on embedded devices at a high frame rate using rasterization and the appearance grid. This method is very flexible, allowing for rapid 3D mesh generation that is not limited to object-centric scenes and can even model complex surfaces.
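A TSDF is simply a grid of signed distances to the surface, clamped to a truncation band, whose zero level set is the surface itself. Here is a minimal sketch for a sphere; the sphere, grid size, and truncation value are invented for illustration, and a real pipeline would run marching cubes on such a grid to obtain the triangle mesh.

```python
import math

def sphere_tsdf(n, radius, trunc):
    """Truncated signed distance field of a sphere sampled on an n^3 grid
    over [-1, 1]^3: negative inside the surface, positive outside,
    clamped to [-trunc, +trunc]."""
    grid = {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = 2 * i / (n - 1) - 1
                y = 2 * j / (n - 1) - 1
                z = 2 * k / (n - 1) - 1
                d = math.sqrt(x * x + y * y + z * z) - radius  # signed distance
                grid[(i, j, k)] = max(-trunc, min(trunc, d))   # truncate
    return grid

tsdf = sphere_tsdf(16, 0.5, 0.1)

# Surface cells are where the field changes sign between grid neighbours;
# a mesher such as marching cubes places triangles exactly there.
crossings = sum(
    1 for (i, j, k), v in tsdf.items()
    if i + 1 < 16 and v * tsdf[(i + 1, j, k)] < 0
)
print(crossings > 0)  # the zero level set passes through the grid
```

Truncation is what makes the representation cheap: far from the surface the field is a constant, so only a thin shell of voxels carries real geometric information.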
NeRFMeshing is a novel method for extracting accurate 3D meshes from NeRF. It can be integrated with any existing NeRF network, allowing future advances in NeRF to be carried over. With this approach, we can now extract accurate 3D meshes from NeRF for use in fields such as AR/VR, robotics, and photogrammetry.
Check out the Paper for more details.
Ekrem Çetinkaya received his B.Sc. in 2018 and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis on image denoising using deep convolutional networks. He is currently pursuing a Ph.D. at the University of Klagenfurt, Austria, and working as a researcher on the ATHENA project. His research interests include deep learning, computer vision, and multimedia networking.