NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling
ACM Multimedia 2022


We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs. Our method is built upon Neural Radiance Fields (NeRF) \cite{mildenhall2020nerf}, which predicts per-point density and color with a multi-layer perceptron. While NeRF can produce images at arbitrary scales, it struggles at resolutions beyond those of the observed images. Our key insight is that NeRF benefits from 3D consistency: an observed pixel absorbs information from nearby views. We first exploit this with a super-sampling strategy that shoots multiple rays at each image pixel, enforcing multi-view constraints at the sub-pixel level. We then show that NeRF-SR can further boost the performance of super-sampling with a refinement network that leverages the estimated depth at hand to hallucinate details from related patches in an HR reference image. Experimental results demonstrate that NeRF-SR generates high-quality novel views at HR on both synthetic and real-world datasets.
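To make the super-sampling strategy concrete, the sketch below shows one way the sub-pixel rays and the LR supervision could be set up. The function names and the specific camera convention (rays looking down -z, as in the original NeRF code) are assumptions for illustration, not the paper's exact implementation: each LR pixel is split into `scale`×`scale` sub-pixels, one ray is shot through each, and the rendered colors are averaged back down so they can be compared against the observed LR pixel.

```python
import numpy as np

def supersample_ray_dirs(H, W, focal, scale):
    """Generate (scale*H) x (scale*W) camera-space ray directions by
    shooting scale**2 rays through each low-resolution pixel at
    sub-pixel offsets (hypothetical helper; follows the common NeRF
    convention of cameras looking down -z)."""
    Hs, Ws = H * scale, W * scale
    # Sub-pixel sample centers expressed in LR pixel units.
    i, j = np.meshgrid(
        (np.arange(Ws) + 0.5) / scale,  # x coordinates
        (np.arange(Hs) + 0.5) / scale,  # y coordinates
        indexing="xy",
    )
    dirs = np.stack(
        [(i - W * 0.5) / focal, -(j - H * 0.5) / focal, -np.ones_like(i)],
        axis=-1,
    )
    return dirs  # (Hs, Ws, 3)

def downsample_rendered(rgb_hr, scale):
    """Average each scale x scale block of super-sampled renderings so
    the result can be supervised by the observed LR pixel."""
    Hs, Ws, C = rgb_hr.shape
    H, W = Hs // scale, Ws // scale
    return rgb_hr.reshape(H, scale, W, scale, C).mean(axis=(1, 3))
```

The averaging step is what ties the sub-pixel rays to the LR ground truth: the loss is computed between the block-averaged rendering and the LR image, while the individual sub-pixel renderings form the HR output.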




NeRF-SR finds sub-pixel-level correspondences through super-sampling, which means missing details in the input can be recovered from other views that lie in neighboring regions of 3D space. Vanilla NeRF and bicubic upsampling produce blurry results. NeRF-SR relies purely on the input images of the scene and does not require any external priors.
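The refinement stage relies on the depth estimated by NeRF to locate related patches in the HR reference image. A minimal sketch of that correspondence step, assuming a standard pinhole model with +z-forward cameras and hypothetical helper names (the paper's refinement network itself is not reproduced here): each rendered pixel is back-projected with its estimated depth and reprojected into the reference view to find the patch location to draw details from.

```python
import numpy as np

def reproject_to_reference(depth, K, c2w, ref_w2c, ref_K):
    """For each pixel of a rendered view, back-project using the
    NeRF-estimated depth, then project into the HR reference view.
    Returns (H, W, 2) pixel coordinates in the reference image
    (hypothetical helper; a sketch of the depth-guided lookup)."""
    H, W = depth.shape
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    pix = np.stack([i + 0.5, j + 0.5, np.ones_like(i, float)], axis=-1)
    # Back-project to camera space, then lift to world coordinates.
    cam = (np.linalg.inv(K) @ pix[..., None])[..., 0] * depth[..., None]
    world = (c2w[:3, :3] @ cam[..., None])[..., 0] + c2w[:3, 3]
    # Transform into the reference camera and apply its intrinsics.
    ref_cam = (ref_w2c[:3, :3] @ world[..., None])[..., 0] + ref_w2c[:3, 3]
    uv = (ref_K @ ref_cam[..., None])[..., 0]
    return uv[..., :2] / uv[..., 2:3]
```

Patches cropped around the returned coordinates would then be fed, together with the blurry rendering, to the refinement network that hallucinates the missing high-frequency details.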

Results on the Blender and LLFF datasets.



The website template was borrowed from Michaël Gharbi and Ben Mildenhall.