Neural graphics primitives (NGPs) show promise in enabling the seamless integration of old and new assets across a variety of applications. They represent images, shapes, volumetric data, and spatio-directional data, aiding novel view synthesis (NeRF), generative modeling, light caching, and various other applications. Particularly successful are primitives that represent data via a feature grid containing trained latent embeddings, which are subsequently decoded by a multilayer perceptron (MLP).
Researchers from NVIDIA and the University of Toronto propose Compact NGP, a machine-learning framework that combines the speed of hash tables with the compactness of learned indexing by using the latter for collision resolution via learned probing. This combination is achieved by unifying all feature grids into a shared framework in which they act as indexing functions that map into a table of feature vectors.
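To make this shared abstraction concrete, here is a minimal Python sketch of the idea; the table sizes, function names, and hash constants are illustrative assumptions rather than the paper's code. It shows how both a dense grid and an Instant-NGP-style spatial hash can be viewed as indexing functions into one table of feature vectors, whose entries an MLP would then decode.

```python
import numpy as np

# Sketch of the shared framework described above: every feature grid is
# treated as an indexing function that maps a quantized coordinate to a row
# of a common table of feature vectors, which an MLP then decodes.
# Sizes, names, and hash constants are illustrative assumptions.

TABLE_SIZE = 2 ** 16   # number of feature vectors in the table
FEATURE_DIM = 2        # features stored per entry

feature_table = np.random.randn(TABLE_SIZE, FEATURE_DIM).astype(np.float32)

def dense_index(ijk, resolution=32):
    """Dense-grid indexing: one-to-one while resolution**3 <= TABLE_SIZE."""
    i, j, k = ijk
    return (i * resolution + j) * resolution + k

def hash_index(ijk):
    """Instant-NGP-style spatial hash: fast, but collisions are only
    resolved implicitly by the training signal."""
    primes = (1, 2654435761, 805459861)
    h = 0
    for c, p in zip(ijk, primes):
        h ^= int(c) * p
    return h % TABLE_SIZE

def lookup(ijk, index_fn=hash_index):
    """Fetch the feature vector selected by the chosen indexing function."""
    return feature_table[index_fn(ijk) % TABLE_SIZE]

# Compact NGP keeps this hash-table-style lookup but lets a small learned
# codebook choose the low bits of the index (learned probing), as sketched
# further below.
```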
Compact NGP has been designed specifically with content delivery in mind, with the goal of amortizing compression overhead. Its design ensures that decoding on the user's equipment remains low-cost, low-power, and multi-scale, enabling graceful degradation in bandwidth-limited environments.
These data structures can be combined in novel ways through simple arithmetic on their indices, resulting in state-of-the-art compression-versus-quality trade-offs. Mathematically, this arithmetic amounts to mapping the different data structures to subsets of the bits of the indexing function; restricting the learned part to a few bits drastically reduces the cost of learned indexing, which would otherwise grow exponentially with the number of index bits.
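As a hedged illustration of that bit arithmetic, the sketch below composes the feature index from two bit fields: a cheap spatial hash supplies the most significant bits, while a much smaller learned codebook supplies the few least significant "probing" bits. The sizes, names, and the collapsing of each learned probe to a single integer (its inference-time argmax) are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's configuration).
LOG2_NF = 16   # feature codebook: 2**16 feature vectors
LOG2_NP = 4    # learned probing range: 2**4 = 16 candidate slots
LOG2_NC = 12   # index codebook holding the learned probes: 2**12 entries

def spatial_hash(ijk, log2_size, primes=(1, 2654435761, 805459861)):
    """XOR-of-primes spatial hash truncated to log2_size bits."""
    h = 0
    for c, p in zip(ijk, primes):
        h ^= int(c) * p
    return h & ((1 << log2_size) - 1)

def compact_index(ijk, probe_table):
    """Compose the feature index from two bit fields:
    - top (LOG2_NF - LOG2_NP) bits: cheap spatial hash (hash-table speed),
    - bottom LOG2_NP bits: a learned probe fetched from a second, much
      smaller codebook (collapsed here to one integer per entry)."""
    bucket = spatial_hash(ijk, LOG2_NF - LOG2_NP)          # most significant bits
    probe = int(probe_table[spatial_hash(ijk, LOG2_NC)])   # least significant bits
    return (bucket << LOG2_NP) | (probe & ((1 << LOG2_NP) - 1))

# Toy usage with random tables standing in for trained parameters.
probe_table = np.random.randint(0, 1 << LOG2_NP, size=1 << LOG2_NC)
feature_codebook = np.random.randn(1 << LOG2_NF, 2).astype(np.float32)
feature = feature_codebook[compact_index((120, 45, 7), probe_table)]
```

Because only LOG2_NP bits per entry are learned, the learned component stays tiny (16 candidate slots in this toy setup) while the composed index still addresses the full 2^16-entry feature codebook, which is what keeps learned indexing affordable.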
Their approach inherits the speed advantages of hash tables while achieving significantly better compression, approaching levels comparable to JPEG when representing images. It remains differentiable and does not rely on a dedicated decompression scheme such as an entropy code. Compact NGP supports a range of user-controllable compression rates and offers streaming capabilities, allowing partial results to be loaded, which is especially useful in bandwidth-limited environments.
They evaluated NeRF compression on synthetic and real-world scenes, comparing against several contemporary NeRF compression techniques that are primarily based on TensoRF. Specifically, they used masked wavelets as a strong, recent baseline for the real-world scenes. In both settings, Compact NGP outperforms Instant NGP in terms of the trade-off between quality and size.
Compact NGP has been designed with real-world applications in mind, where random-access decompression, level-of-detail streaming, and high performance are critical in both training and inference. Consequently, the authors are eager to explore its potential in domains such as streaming applications, video game texture compression, live training, and many other areas.
Check out the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to join our 35k+ ML SubReddit, 41k+ Facebook community, Discord channel, LinkedIn group, and email newsletter, where we share the latest AI research news, interesting AI projects, and more.
If you like our work, you'll love our newsletter.
Arshad is an intern at MarktechPost. He is currently pursuing his Master's degree in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things down to the fundamental level leads to new discoveries that advance technology, and he is passionate about understanding nature fundamentally with the help of tools such as mathematical models, machine learning models, and artificial intelligence.