Poster

DiffBIR: Toward Blind Image Restoration with Generative Diffusion Prior

Xinqi Lin · Jingwen He · Ziyan Chen · Zhaoyang Lyu · Bo Dai · Fanghua Yu · Yu Qiao · Wanli Ouyang · Chao Dong

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We present DiffBIR, a two-stage restoration pipeline that handles blind image restoration tasks in a unified framework. In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results. In the second stage, we propose IRControlNet, which leverages the generative ability of latent diffusion models to generate realistic details. Specifically, IRControlNet is trained on specially produced condition images free of distracting noisy content, which stabilizes generation. Moreover, we design a region-adaptive restoration guidance that modifies the denoising process at inference time without model re-training, allowing users to balance realness and fidelity through a tunable guidance scale. Extensive experiments demonstrate DiffBIR's superiority over state-of-the-art approaches for blind image super-resolution, blind face restoration, and blind image denoising on both synthetic and real-world datasets.
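The region-adaptive guidance described above can be illustrated with a minimal sketch: during inference, the diffusion model's denoised estimate is nudged toward the stage-one restored reference, with a user-tunable scale and a per-region weight. The function name, the variance-based region weighting, and all parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def region_adaptive_guidance(x0_pred, reference, scale, window=8):
    # Hypothetical sketch (not the paper's exact method): pull the
    # denoised estimate x0_pred toward the stage-one restored `reference`,
    # more strongly in low-variance (smooth) regions where fidelity
    # matters most, and more weakly in textured regions so generated
    # detail is preserved.
    h, w = reference.shape
    weights = np.ones_like(reference)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = reference[i:i + window, j:j + window]
            # Smooth patches (low variance) get near-full guidance weight;
            # textured patches keep more of the generated detail.
            weights[i:i + window, j:j + window] = 1.0 / (1.0 + patch.var())
    # The gradient of 0.5 * ||x0_pred - reference||^2 is
    # (x0_pred - reference); step against it, modulated per region and
    # by the user-tunable guidance `scale` (scale = 0 means no guidance).
    return x0_pred - scale * weights * (x0_pred - reference)
```

With `scale = 0` the generative output is untouched (maximum realness); increasing `scale` pulls the result toward the stage-one reference (maximum fidelity), matching the realness/fidelity trade-off the abstract describes.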
