Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

Platform: PC
Tech: NeRF
Comments: Earth-shattering
Teaser panels: Neural gigapixel image, Neural SDF, NeRF, Neural volume

We demonstrate near-instant training of neural graphics primitives on a single GPU for multiple tasks. In the gigapixel image task, we represent an image by a neural network. SDF learns a signed distance function in 3D space whose zero level-set represents a 2D surface. NeRF [Mildenhall et al. 2020] uses 2D images and their camera poses to reconstruct a volumetric radiance-and-density field that is visualized using ray marching. Lastly, neural volume learns a denoised radiance and density field directly from a volumetric path tracer. In all tasks, our encoding and its efficient implementation provide clear benefits: instant training, high quality, and simplicity. Our encoding is task-agnostic: we use the same implementation and hyperparameters across all tasks and only vary the hash table size, which trades off quality and performance. Girl With a Pearl Earring renovation ©Koorosh Orooj (CC BY-SA 4.0)
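As a point of reference for the NeRF and neural volume tasks above, the sketch below shows the standard emission-absorption ray-marching quadrature that turns densities and colors sampled along a ray into a pixel color. It is a generic NumPy illustration with made-up sample values, not the paper's CUDA renderer.

import numpy as np

def composite_ray(densities, colors, deltas):
    # Emission-absorption quadrature along one ray.
    # densities: (N,) non-negative volume densities at the samples
    # colors:    (N, 3) RGB values predicted at the same samples
    # deltas:    (N,) distances between consecutive samples
    alphas = 1.0 - np.exp(-densities * deltas)              # per-segment opacity
    survive = np.cumprod(1.0 - alphas)                      # light surviving past each sample
    transmittance = np.concatenate([[1.0], survive[:-1]])   # light reaching each sample
    weights = transmittance * alphas                        # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)          # composited RGB for this ray

# Example: 64 samples through a homogeneous grey medium
n = 64
pixel = composite_ray(np.full(n, 0.5), np.full((n, 3), 0.7), np.full(n, 0.05))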

Abstract

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations. A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920x1080.
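To make the description above concrete, here is a minimal NumPy sketch of a multiresolution hash encoding for 3D inputs: each level hashes the corners of the point's enclosing grid cell into a trainable table, trilinearly interpolates their feature vectors, and the per-level results are concatenated to form the input of the small MLP. The hash primes are the ones given in the paper; the level count, table size, feature width, and initialization are illustrative defaults, and the paper's direct (un-hashed) lookup at coarse levels that fit in the table is omitted. This is a readability sketch, not the fully-fused CUDA implementation.

import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # hash primes from the paper

def spatial_hash(coords, table_size):
    # coords: (..., 3) integer grid-vertex coordinates -> hash table indices
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def encode(x, tables, resolutions):
    # x: (3,) point in [0, 1]^3.  tables: list of (T, F) trainable feature arrays.
    corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    features = []
    for table, res in zip(tables, resolutions):
        pos = x * res
        base = np.floor(pos).astype(np.int64)
        frac = pos - base                                    # position inside the cell
        idx = spatial_hash(base + corners, table.shape[0])   # 8 hashed corner indices
        w = np.prod(np.where(corners == 1, frac, 1.0 - frac), axis=-1)  # trilinear weights
        features.append((w[:, None] * table[idx]).sum(axis=0))
    return np.concatenate(features)                          # input to the small MLP

# Illustrative defaults: 16 levels, T = 2^14 entries, F = 2 features per entry
rng = np.random.default_rng(0)
L, T, F, N_min, N_max = 16, 2**14, 2, 16, 512
b = np.exp((np.log(N_max) - np.log(N_min)) / (L - 1))        # geometric growth of resolution
resolutions = [int(np.floor(N_min * b**l)) for l in range(L)]
tables = [rng.uniform(-1e-4, 1e-4, size=(T, F)) for _ in range(L)]
enc = encode(np.array([0.3, 0.6, 0.9]), tables, resolutions)  # shape (L * F,) = (32,)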

Results

Gigapixel Image

Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch—but converges so quickly you may miss it if you blink! Image ©Trevor Dobson (CC BY-NC-ND 2.0)
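As a heavily simplified picture of this task, the toy loop below samples random pixels from an image and regresses their RGB values from normalized 2D coordinates with an L2 loss. A bare linear model stands in for the hash encoding plus small MLP purely to keep the sketch self-contained; none of this is the released training code.

import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
image = rng.random((H, W, 3))               # stand-in for the training photograph

weights = np.zeros((3, 3))                  # toy model: maps [x, y, 1] -> RGB
lr = 0.5
for step in range(1000):
    ys = rng.integers(0, H, 4096)           # random batch of pixel coordinates
    xs = rng.integers(0, W, 4096)
    coords = np.stack([xs / W, ys / H, np.ones(4096)], axis=1)
    target = image[ys, xs]
    pred = coords @ weights
    grad = 2.0 * coords.T @ (pred - target) / 4096   # gradient of the mean L2 loss
    weights -= lr * grad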

Neural Radiance Fields

(a) None: 411k parameters, 10:45 (mm:ss)
(b) Multiresolution grid: 10k + 16.3M parameters, 1:26 (mm:ss)
(c) Frequency: 438k + 0 parameters, 13:53 (mm:ss)
(d) Hashtable (T = 2^14): 10k + 494k parameters, 1:40 (mm:ss)
(e) Hashtable (T = 2^19): 10k + 12.6M parameters, 1:45 (mm:ss)
A demonstration of the reconstruction quality of different encodings. Each configuration was trained for 11,000 steps using our fast NeRF implementation, varying only the input encoding and the neural network size. The number of trainable parameters (neural network weights + encoding parameters) and the training time are listed above for each configuration. Our encoding (d), with a similar total number of trainable parameters as the frequency encoding (c), trains over 8 times faster, due to the sparsity of parameter updates and the smaller neural network. Increasing the number of parameters (e) further improves approximation quality without significantly increasing training time.
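The encoding parameter counts quoted for (d) and (e) can be reproduced approximately from the construction: each of the L levels stores F features per entry, and a level needs at most T entries but never more than its grid has vertices. The defaults assumed below (L = 16, F = 2, N_min = 16, and a finest resolution N_max = 4096) are illustrative, so the printed values only land near the quoted figures; exact counts depend on the per-scene configuration and rounding in the released code.

import math

def encoding_params(T, L=16, F=2, N_min=16, N_max=4096, dim=3):
    # Per-level grid resolutions grow geometrically from N_min to N_max.
    b = math.exp((math.log(N_max) - math.log(N_min)) / (L - 1))
    total = 0
    for l in range(L):
        N_l = math.floor(N_min * b**l)
        total += F * min(T, (N_l + 1) ** dim)   # coarse levels need fewer than T entries
    return total

print(encoding_params(T=2**14))   # 496226, near the ~494k quoted for (d)
print(encoding_params(T=2**19))   # 12601538, i.e. ~12.6M as quoted for (e)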
Real-time training progress on eight synthetic NeRF datasets. Drums model ©bryanajones (CC BY 2.0), Lego model ©Håvard Dalen (CC BY-NC 2.0), Ship model ©gregzaal (CC BY-SA 2.0)
Fly-throughs of trained real-world NeRFs. Large, natural 360 scenes (left) as well as complex scenes with many disocclusions and specular surfaces (right) are well supported. Both models can be rendered in real time and were trained in under 5 minutes from casually captured data: the left one from an iPhone video and the right one from 34 photographs.
We also support training NeRF-like radiance fields from the noisy output of a volumetric path tracer. Rays are fed to the network in real time during training, and the network learns a denoised radiance field. Cloud model ©Walt Disney Animation Studios (CC BY-SA 3.0)

Signed Distance Function

Real-time training progress on various SDF datasets. Training data is generated on the fly from the ground-truth mesh using the NVIDIA OptiX ray tracing framework. Bearded Man model ©Oliver Laric (CC BY-NC-SA 3.0)
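For intuition about the on-the-fly data generation, the sketch below produces one batch of (point, signed distance) training pairs, with an analytic unit sphere standing in for the ground-truth mesh and the OptiX distance queries. The half-uniform, half-near-surface sampling mix is illustrative rather than the paper's exact scheme.

import numpy as np

rng = np.random.default_rng(0)

def ground_truth_sdf(p):                     # signed distance to the unit sphere
    return np.linalg.norm(p, axis=-1) - 1.0

def next_batch(n=4096):
    uniform = rng.uniform(-1.5, 1.5, (n // 2, 3))            # coarse coverage of the volume
    near = rng.normal(size=(n // 2, 3))
    near /= np.linalg.norm(near, axis=-1, keepdims=True)     # points on the surface
    near += rng.normal(0.0, 0.01, (n // 2, 3))               # jittered slightly off it
    points = np.concatenate([uniform, near])
    return points, ground_truth_sdf(points)                  # streamed to the trainer

points, distances = next_batch()             # one fresh batch per training step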

Neural Radiance Cache

Direct visualization of a neural radiance cache, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel's path and is trained online from rays generated by a real-time path tracer. On the left, we show results using the triangle wave encoding of [Müller et al. 2021]; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
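The overall shape of such an online training loop is sketched below: each frame, the path tracer emits training records at the first non-specular vertex of a subset of rays, and the cache takes a few gradient steps on them before the next frame is rendered. The tracer and the linear cache here are trivial stand-ins, not the paper's real-time renderer or network.

import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((7, 3))                       # toy cache: linear in (position, direction, 1)

def trace_training_records(n=1024):
    position = rng.uniform(-1.0, 1.0, (n, 3))    # first non-specular vertex of each ray
    direction = rng.normal(size=(n, 3))
    direction /= np.linalg.norm(direction, axis=-1, keepdims=True)
    radiance = np.abs(rng.normal(0.3, 0.1, (n, 3)))  # noisy path-traced radiance estimate
    inputs = np.concatenate([position, direction, np.ones((n, 1))], axis=1)
    return inputs, radiance

for frame in range(100):                         # render loop
    inputs, targets = trace_training_records()   # records produced while rendering this frame
    for _ in range(4):                           # a few SGD steps per frame
        pred = inputs @ weights
        weights -= 0.05 * inputs.T @ (pred - targets) / len(inputs)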

Paper

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller

Citation

@article{mueller2022instant,
    author = {Thomas M\"uller and Alex Evans and Christoph Schied and Alexander Keller},
    title = {Instant Neural Graphics Primitives with a Multiresolution Hash Encoding},
    journal = {ACM Trans. Graph.},
    issue_date = {July 2022},
    volume = {41},
    number = {4},
    month = jul,
    year = {2022},
    pages = {102:1--102:15},
    articleno = {102},
    numpages = {15},
    url = {https://doi.org/10.1145/3528223.3530127},
    doi = {10.1145/3528223.3530127},
    publisher = {ACM},
    address = {New York, NY, USA}
}

Acknowledgements

We would like to thank Anjul Patney, David Luebke, Jacob Munkberg, Jonathan Granskog, Jonathan Tremblay, Koki Nagano, Marco Salvi, Nikolaus Binder, James Lucas, and Towaki Takikawa for proof-reading, feedback, profound discussions, and early testing. We also thank Joey Litalien for providing us with the framework for this website.
Girl With a Pearl Earring renovation ©Koorosh Orooj (CC BY-SA 4.0)
Tokyo gigapixel image ©Trevor Dobson (CC BY-NC-ND 2.0)
Detailed Drum Set ©bryanajones (CC BY 2.0)
Lego 856 Bulldozer ©Håvard Dalen (CC BY-NC 2.0)
Suzanne's Revenge ship ©gregzaal (CC BY-SA 2.0)
Lucy model from the Stanford 3D scan repository
Factory robot dataset by Arman Toorians and Saurabh Jain
Disney Cloud model ©Walt Disney Animation Studios (CC BY-SA 3.0)
Bearded Man model ©Oliver Laric (CC BY-NC-SA 3.0)