

Poster

Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable Repainting

Junwu Zhang · Zhenyu Tang · Yatian Pang · Xinhua Cheng · Peng Jin · Yida Wei · Xing Zhou · Munan Ning · Li Yuan

Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Recent image-to-3D methods achieve impressive results with plausible 3D geometry, owing to advances in diffusion models and optimization techniques. However, existing image-to-3D methods suffer from texture deficiencies in novel views, including multi-view inconsistency and quality degradation. To alleviate multi-view bias and enhance image quality in novel-view textures, we present Repaint123, a fast image-to-3D approach for creating high-quality 3D content with detailed textures. Repaint123 proposes a progressive repainting strategy that simultaneously enhances the consistency and quality of textures across different views, generating invisible regions according to visible textures, with the visibility map computed by depth alignment across views. Furthermore, multiple control techniques, including reference-driven information injection and coarse-based depth guidance, are introduced to alleviate the texture bias accumulated during the repainting process, improving consistency and quality. Extensive experiments demonstrate the superior ability of our method to create 3D content with consistent and detailed textures in 2 minutes.
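The abstract mentions a visibility map computed by depth alignment across views: a novel-view pixel counts as visible from the reference view if its back-projected 3D point, reprojected into the reference camera, lands at a depth consistent with the reference depth map. The sketch below is a hypothetical illustration of that idea with NumPy pinhole-camera math; the function name, tolerance, and nearest-pixel sampling are assumptions, not the paper's implementation.

```python
import numpy as np

def visibility_map(depth_novel, K, T_novel2ref, depth_ref, tol=0.05):
    """Hypothetical sketch: mark novel-view pixels whose surface points
    are depth-consistent with (i.e., also seen in) the reference view."""
    H, W = depth_novel.shape
    # Back-project every novel-view pixel to a 3D point in the novel camera frame.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix
    pts = rays * depth_novel.reshape(1, -1)
    # Transform the points into the reference camera frame.
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_ref = (T_novel2ref @ pts_h)[:3]
    # Project into the reference image and sample its depth (nearest pixel).
    proj = K @ pts_ref
    z = proj[2]
    x = np.round(proj[0] / np.clip(z, 1e-6, None)).astype(int)
    y = np.round(proj[1] / np.clip(z, 1e-6, None)).astype(int)
    inside = (x >= 0) & (x < W) & (y >= 0) & (y < H) & (z > 0)
    vis = np.zeros(H * W, dtype=bool)
    idx = np.where(inside)[0]
    # Visible where the reprojected depth agrees with the reference depth map.
    vis[idx] = np.abs(depth_ref[y[idx], x[idx]] - z[idx]) < tol
    return vis.reshape(H, W)
```

With identical cameras and matching depth maps every pixel is marked visible; a pixel whose reference depth disagrees (e.g., it is occluded in the reference view) is marked invisible, so the repainting stage can synthesize it from scratch rather than copy inconsistent texture.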
