Abstract
Neural Radiance Fields (NeRF) has achieved unprecedented view synthesis quality using coordinate-based neural scene representations. However, NeRF's view dependency can only handle simple reflections like highlights but cannot deal with complex reflections such as those from glass and mirrors. In these scenarios, NeRF models the virtual image as real geometries, which leads to inaccurate depth estimation and produces blurry renderings when the multi-view consistency is violated, as the reflected objects may only be seen under some of the viewpoints. To overcome these issues, we introduce NeRFReN, which is built upon NeRF to model scenes with reflections. Specifically, we propose to split a scene into transmitted and reflected components, and model the two components with separate neural radiance fields. Considering that this decomposition is highly under-constrained, we exploit geometric priors and apply carefully-designed training strategies to achieve reasonable decomposition results. Experiments on various self-captured scenes show that our method achieves high-quality novel view synthesis and physically sound depth estimation results while enabling scene editing applications.
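As a reading aid for the decomposition described in the abstract, the sketch below shows how a per-ray reflection fraction could blend the two separately volume-rendered components into a final pixel color. The function name, tensor shapes, and the additive blend `rgb_t + beta * rgb_r` are illustrative assumptions based on the abstract, not code released with the paper.

```python
import torch

def composite_with_reflection(rgb_transmitted: torch.Tensor,
                              rgb_reflected: torch.Tensor,
                              reflection_fraction: torch.Tensor) -> torch.Tensor:
    """Blend per-ray colors from the transmitted and reflected radiance fields.

    rgb_transmitted, rgb_reflected: [N_rays, 3] volume-rendered colors.
    reflection_fraction: [N_rays, 1] weight in [0, 1] for the reflected component.
    """
    return rgb_transmitted + reflection_fraction * rgb_reflected

# Toy usage with random ray batches (illustrative only).
rgb_t = torch.rand(1024, 3)   # color from the transmitted field
rgb_r = torch.rand(1024, 3)   # color from the reflected field
beta = torch.rand(1024, 1)    # per-ray reflection fraction
rgb = composite_with_reflection(rgb_t, rgb_r, beta)   # [1024, 3] composited color
```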
Original language | English |
---|---|
Title of host publication | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Publisher | IEEE |
Pages | 18388-18397 |
Number of pages | 10 |
ISBN (Electronic) | 9781665469463 |
ISBN (Print) | 9781665469470 |
DOIs | |
Publication status | Published - 27 Sept 2022 |
Event | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 - New Orleans, United States; Duration: 19 Jun 2022 → 24 Jun 2022 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
Volume | 2022-June |
ISSN (Print) | 1063-6919 |
ISSN (Electronic) | 2575-7075 |
Conference
Conference | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022 |
---|---|
Country/Territory | United States |
City | New Orleans |
Period | 19/06/22 → 24/06/22 |
Bibliographical note
Funding Information: Acknowledgements. This work was supported by the National Natural Science Foundation of China (Project Number 62132012), Research Grant of Beijing Higher Institution Engineering Research Center, and Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.
Publisher Copyright:
© 2022 IEEE.
Keywords
- 3D from multi-view and sensors
- Image and video synthesis and generation
- Physics-based vision and shape-from-X
- Scene analysis and understanding
- Vision + graphics
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition