Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have greatly advanced novel view synthesis, enabling photo-realistic rendering. However, these methods rest on the foundational assumption of a static scene (e.g., consistent lighting conditions and persistent object positions), which is often violated in real-world scenarios. In this study, we introduce MemE, an unsupervised plug-and-play module that achieves high-quality novel view synthesis from noisy inputs. MemE leverages an inherent property of parameter optimization, known as the memorization effect, to filter distractors, and can be easily combined with NeRF or 3DGS. Furthermore, MemE is applicable in environments both with and without distractors, significantly enhancing the adaptability of NeRF and 3DGS across diverse input scenarios. Extensive experiments show that our methods (i.e., MemE-NeRF and MemE-3DGS) achieve state-of-the-art performance on both real and synthetic noisy scenes.
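To illustrate the intuition behind memorization-effect-based distractor filtering, the sketch below shows one generic way such filtering could be realized: early in optimization, a radiance-field model tends to fit the static, multi-view-consistent content first, so pixels that still exhibit large residuals are likely distractors and can be down-weighted. This is only a minimal illustration of the general idea, not the paper's MemE module; the function name `masked_photometric_loss` and the `mask_quantile` parameter are hypothetical.

```python
import torch

def masked_photometric_loss(pred_rgb: torch.Tensor,
                            target_rgb: torch.Tensor,
                            mask_quantile: float = 0.9) -> torch.Tensor:
    """Down-weight the highest-residual pixels, treating them as distractors.

    Assumption (not from the paper): clean, static pixels are memorized
    earlier in training, so high per-pixel residuals flag distractors.
    """
    # Per-pixel squared error, averaged over the RGB channels.
    residual = (pred_rgb - target_rgb).pow(2).mean(dim=-1)
    # Residual threshold at the chosen quantile (detached: the mask itself
    # should not receive gradients).
    threshold = torch.quantile(residual.detach(), mask_quantile)
    # Keep pixels the model already fits well; mask out likely distractors.
    keep = (residual.detach() <= threshold).float()
    return (keep * residual).sum() / keep.sum().clamp(min=1.0)
```

In a plug-and-play setting, a loss of this form would simply replace the standard photometric loss of a NeRF or 3DGS training loop, leaving the rest of the pipeline unchanged.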