

Poster

Physically Plausible Color Correction for Neural Radiance Fields

Qi Zhang · Ying Feng · Hongdong Li

#340
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Neural Radiance Fields (NeRF) have become the representation of choice for many 3D computer vision and graphics applications, e.g., novel view synthesis and 3D reconstruction. Multi-camera systems are commonly used as the image capture setup for NeRF-based multi-view tasks such as dynamic scene acquisition or realistic avatar animation. However, a critical issue that is often overlooked in this setup is the evident difference in color responses among the cameras, which adversely affects NeRF reconstruction performance. These color discrepancies among the input images stem from two sources: 1) intrinsic properties of the scene, such as reflections and shading, and 2) external differences in camera settings and lighting conditions. In this paper, we address this problem by proposing a novel color correction module, embedded in NeRF, that simulates the physical color processing in cameras, enabling unified-color NeRF reconstruction. Besides the view-independent color correction module that handles the external differences, we predict a view-dependent function that minimizes the color residual (e.g., specularities and shading) to eliminate the impact of the scene's inherent attributes. We further describe how the method can be extended with a reference image as guidance to achieve aesthetically plausible color consistency and color translation on novel views. Experiments validate that our method is superior to baseline methods in both quantitative and qualitative evaluations of color correction and color consistency.
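To make the described architecture concrete, below is a minimal sketch of how a per-camera, view-independent color correction (modeled here as a simple camera pipeline of white-balance gains, a 3x3 color matrix, and a gamma curve) could be combined with a small MLP that predicts a view-dependent color residual on top of a NeRF color head. This parameterization and all module/parameter names are assumptions for illustration, not the authors' implementation, whose details are not given in the abstract.

```python
# Hypothetical sketch (not the paper's code): per-camera view-independent color
# correction mimicking a physical camera pipeline, plus a view-dependent residual MLP.
import torch
import torch.nn as nn


class CameraColorCorrection(nn.Module):
    """View-independent, per-camera correction: white balance -> color matrix -> gamma."""

    def __init__(self, num_cameras: int):
        super().__init__()
        # Assumed minimal parameterization: gains init to 1, matrix to identity, gamma to 1.
        self.wb_gains = nn.Parameter(torch.ones(num_cameras, 3))
        self.color_matrix = nn.Parameter(torch.eye(3).repeat(num_cameras, 1, 1))
        self.log_gamma = nn.Parameter(torch.zeros(num_cameras))

    def forward(self, rgb: torch.Tensor, cam_idx: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3) scene color in [0, 1]; cam_idx: (N,) camera index per sample
        rgb = rgb * self.wb_gains[cam_idx]                                   # white balance
        rgb = torch.einsum('nij,nj->ni', self.color_matrix[cam_idx], rgb)    # color mixing
        gamma = torch.exp(self.log_gamma[cam_idx]).unsqueeze(-1)
        return rgb.clamp(min=1e-6) ** gamma                                  # tone/gamma curve


class ViewDependentResidual(nn.Module):
    """Small MLP predicting a view-dependent color residual (e.g., specular, shading)."""

    def __init__(self, feat_dim: int = 64, dir_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + dir_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([feat, view_dir], dim=-1))


if __name__ == "__main__":
    # Combine a base view-independent color from the radiance field with the residual,
    # then apply the per-camera correction so all cameras share one unified color space.
    N, num_cams = 1024, 8
    base_rgb = torch.rand(N, 3)        # view-independent color from the NeRF backbone
    feat = torch.randn(N, 64)          # per-point feature from the NeRF backbone
    view_dir = torch.randn(N, 3)
    cam_idx = torch.randint(0, num_cams, (N,))

    residual = ViewDependentResidual()(feat, view_dir)
    corrected = CameraColorCorrection(num_cams)(torch.sigmoid(base_rgb + residual), cam_idx)
    print(corrected.shape)  # (1024, 3)
```

In this sketch the per-camera parameters absorb external differences (camera settings, lighting), while the residual MLP absorbs view-dependent effects, so the underlying NeRF color stays consistent across views; how the paper actually factors and supervises these terms is described in the full text.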
