We present a framework for novel-view synthesis of human tourist photos. Given a tourist photo from a known scene, we reconstruct the photo in 3D by modeling the human and the background independently. We render a deep buffer of the reconstruction from a novel viewpoint and use a deep network to translate the buffer into a photo-realistic rendering of the novel view. We additionally present a method to relight the renderings, allowing both human and background to be relit to match either the provided input image or any other. The key contributions of our paper are: 1) a framework for novel-view synthesis of human tourist photos, 2) an appearance-transfer method for relighting humans to match synthesized backgrounds, and 3) a method for estimating lighting properties from a single human photo. We demonstrate the proposed framework on photos of various tourists from two different scenes.
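A minimal sketch of the pipeline the abstract describes: separate the human from the background, render a deep buffer (per-pixel appearance plus geometry channels) from a new viewpoint, then translate that buffer into an RGB image. All function names, the buffer layout, and the stand-in for the neural translation network are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

H, W = 4, 4  # tiny image for illustration

def reconstruct(photo):
    """Split the photo into human and background layers via a mask (stub)."""
    mask = np.zeros((H, W), dtype=bool)
    mask[1:3, 1:3] = True  # pretend the human occupies the image center
    human = np.where(mask[..., None], photo, 0.0)
    background = np.where(mask[..., None], 0.0, photo)
    return human, background, mask

def render_deep_buffer(human, background, mask):
    """Compose a per-pixel deep buffer: RGB appearance + a depth channel."""
    appearance = human + background      # layers are disjoint, so the sum recomposes them
    depth = np.where(mask, 1.0, 5.0)     # human nearer than background (stub values)
    return np.concatenate([appearance, depth[..., None]], axis=-1)  # (H, W, 4)

def translate(buffer):
    """Stand-in for the neural translation network: keep the RGB channels
    and clamp them to a valid range."""
    return np.clip(buffer[..., :3], 0.0, 1.0)

photo = np.full((H, W, 3), 0.5)
human, background, mask = reconstruct(photo)
buffer = render_deep_buffer(human, background, mask)
rendering = translate(buffer)
print(buffer.shape)      # (4, 4, 4)
print(rendering.shape)   # (4, 4, 3)
```

In the paper the translation step is a learned network; the stub above only mirrors the data flow (deep buffer in, RGB image out), not the learned mapping.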
| Name | Proceedings. IEEE Workshop on Applications of Computer Vision |
| Conference | Winter Conference on Applications of Computer Vision |
| Abbreviated title | WACV 2022 |
| Period | 4/01/22 → 8/01/22 |
- Computational Photography
- Image and Video Synthesis
- Vision for Graphics