Learning Neural Duplex Radiance Fields for Real-Time View Synthesis
CVPR 2023

Ziyu Wan1     Christian Richardt2     Aljaž Božič2     Chao Li2     Vijay Rengarajan2     Seonghyeon Nam2    
Xiaoyu Xiang2     Tuotuo Li2     Bo Zhu2     Rakesh Ranjan2     Jing Liao1    
City University of Hong Kong1           Meta Reality Labs2          

Abstract

Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations for each pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
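
To make the rendering path described above more concrete, below is a minimal, illustrative sketch (not the authors' released implementation) of how per-pixel features rasterized from the two duplex mesh layers could be decoded into colors by a small screen-space convolutional network conditioned on the viewing direction. All module names, channel sizes, and the toy inputs are hypothetical.

# Illustrative sketch only: per-pixel features gathered from two rasterized
# mesh layers are decoded by a small screen-space CNN conditioned on the
# view direction. Feature sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn

class DuplexScreenSpaceDecoder(nn.Module):
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        # Input: features from the two mesh layers plus a per-pixel view direction.
        in_ch = 2 * feat_dim + 3
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, feat_inner, feat_outer, view_dirs):
        # feat_inner, feat_outer: (B, feat_dim, H, W) features rasterized from
        # the inner and outer mesh layers at each pixel's ray intersection.
        # view_dirs: (B, 3, H, W) unit viewing directions.
        x = torch.cat([feat_inner, feat_outer, view_dirs], dim=1)
        return self.net(x)

# Toy usage with random tensors standing in for rasterized feature buffers.
decoder = DuplexScreenSpaceDecoder()
B, F, H, W = 1, 8, 64, 64
rgb = decoder(torch.rand(B, F, H, W), torch.rand(B, F, H, W), torch.rand(B, 3, H, W))
print(rgb.shape)  # torch.Size([1, 3, 64, 64])

Because the network operates on whole image buffers rather than per-ray MLP queries, a decoder of this kind maps naturally onto the rasterization-based graphics pipeline that the paper targets.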


Overview

Video

Real-Time Interactive Viewer Demos

Loading the scene may take several seconds.

Other Work on Real-Time NeRF

Progress on accelerating NeRF rendering is moving very fast! Here are some other great methods published at CVPR 2023:

More advanced papers have since been released; please also check out their excellent work.

Acknowledgements

We thank the anonymous reviewers for their constructive comments. We also appreciate helpful discussions with Feng Liu, Chakravarty R. Alla Chaitanya, Simon Green, Daniel Maskit, Aayush Bansal, Zhiqin Chen, Vasu Agrawal, Hao Tang, Michael Zollhoefer, Huan Wang, Ayush Saraf and Zhaoyang Lv. This work was partially supported by a GRF grant (Project No. CityU 11216122) from the Research Grants Council (RGC) of Hong Kong.

The website template was borrowed from Michaël Gharbi.

Citation