
Poster

Source Prompt Disentangled Inversion for Boosting Image Editability with Diffusion Models

Ruibin Li · Ruihuang Li · Song Guo · Lei Zhang

Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Text-driven diffusion models have significantly advanced image editing performance by using text prompts as inputs. One crucial step in text-driven image editing is to invert the original image into a latent noise code conditioned on the source prompt. While previous methods have achieved promising results by refactoring the image synthesis process, the inverted latent noise code remains tightly coupled with the source prompt, which limits how well the image can be edited with target text prompts. To address this issue, we propose a novel method called Source Prompt Disentangled Inversion (SPDInv), which reduces the influence of the source prompt and thereby enhances text-driven image editing with diffusion models. To make the inverted noise code as independent of the given source prompt as possible, we show that the iterative inversion process should satisfy a fixed-point constraint. Consequently, we recast inversion as a search problem for the fixed-point solution and utilize pre-trained diffusion models to facilitate the search. Experimental results show that the proposed SPDInv method effectively mitigates the conflicts between the target editing prompt and the source prompt, leading to a significant reduction in editing artifacts. Furthermore, beyond text-driven image editing, SPDInv allows customized image generation methods to be easily adapted to localized editing tasks with promising performance.
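To make the fixed-point view concrete, below is a minimal, illustrative sketch (not the authors' released code) of one DDIM inversion step solved by fixed-point iteration: the noise prediction should be evaluated at the unknown target latent, so the step is iterated until it converges on itself. The epsilon predictor `eps_model`, the prompt embedding, and the cumulative-alpha schedule are placeholders assumed for illustration; SPDInv's additional search-based disentanglement from the source prompt is not reproduced here.

```python
import torch

@torch.no_grad()
def invert_step_fixed_point(z_prev, t_prev, t, eps_model, prompt_emb,
                            alphas_cumprod, n_fp_iters=5):
    """One inversion step z_{t_prev} -> z_t, treated as a fixed-point problem.

    Exact DDIM inversion needs the noise prediction eps(z_t, t), but z_t is the
    unknown being solved for. We therefore iterate
        z_t^(k+1) = F(z_t^(k)),
    where F applies the DDIM inversion update with eps evaluated at the current
    estimate z_t^(k). All components here are hypothetical stand-ins.
    """
    a_prev = alphas_cumprod[t_prev]            # cumulative alpha at the less-noisy step
    a_t = alphas_cumprod[t]                    # cumulative alpha at the target (noisier) step
    z_t = z_prev.clone()                       # initialize the search at the previous latent
    for _ in range(n_fp_iters):
        eps = eps_model(z_t, t, prompt_emb)    # noise prediction at the current estimate
        # Predict the clean latent implied by z_prev and this eps, then re-noise to level t.
        x0 = (z_prev - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        z_t = a_t.sqrt() * x0 + (1 - a_t).sqrt() * eps
    return z_t
```

In this reading, SPDInv replaces the naive single-pass inversion update with a search for the latent that satisfies this self-consistency constraint, which is what loosens the coupling between the inverted noise code and the source prompt.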
