

Poster

FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models

Wei WU · Qingnan Fan · Shuai Qin · Hong Gu · Ruoyu Zhao · Antoni Chan

Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Precise image editing with text-to-image models has attracted increasing interest due to their remarkable generative capabilities and user-friendly nature. However, such attempts face the pivotal challenge of misalignment between the intended precise editing target regions and the broader area impacted by the guidance in practice. Although excellent methods leveraging attention mechanisms have been developed to refine the editing guidance, these approaches require modifications to complex network architectures and are limited to specific editing tasks. In this work, we re-examine the diffusion process and the misalignment problem from a frequency perspective, revealing that, due to the power law of natural images and the decaying noise schedule, the denoising network primarily recovers low-frequency image components during the earlier timesteps, which introduces excessive low-frequency signals into the editing guidance. Leveraging this insight, we introduce a novel fine-tuning-free approach that employs progressive Frequency truncation to refine the guidance of Diffusion models for universal editing tasks (FreeDiff). Our method achieves results comparable to state-of-the-art methods across a variety of editing tasks and on a diverse set of images, highlighting its potential as a versatile tool in image editing applications.
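To make the core idea concrete, below is a minimal sketch (not the authors' released code) of what a timestep-dependent frequency truncation of the classifier-free guidance term could look like in PyTorch. The function names (highpass_truncate, radius_schedule), the square low-frequency mask, and the linear cutoff schedule are all illustrative assumptions; the abstract specifies only that low-frequency components of the guidance are progressively truncated, not the exact filter shape or schedule.

```python
import torch
import torch.fft as fft


def highpass_truncate(guidance: torch.Tensor, radius: int) -> torch.Tensor:
    """Zero out the spatial-frequency components of `guidance` inside a
    square of half-width `radius` around the DC term (a crude high-pass
    filter). `guidance` is (B, C, H, W), e.g. eps_cond - eps_uncond."""
    if radius <= 0:
        return guidance
    spec = fft.fftshift(fft.fft2(guidance), dim=(-2, -1))
    _, _, H, W = guidance.shape
    cy, cx = H // 2, W // 2
    mask = torch.ones(H, W, device=guidance.device)
    mask[cy - radius : cy + radius + 1, cx - radius : cx + radius + 1] = 0.0
    spec = spec * mask
    return fft.ifft2(fft.ifftshift(spec, dim=(-2, -1))).real


def radius_schedule(t: int, t_max: int, r_max: int = 16) -> int:
    # Assumed linear schedule: aggressive truncation at early (noisy)
    # timesteps, where the network mainly recovers low frequencies,
    # and no truncation near the end. The paper's schedule may differ.
    return int(r_max * t / t_max)


if __name__ == "__main__":
    # Stand-in noise predictions from a conditional/unconditional UNet pass.
    eps_uncond = torch.randn(1, 4, 64, 64)
    eps_cond = torch.randn(1, 4, 64, 64)
    t, t_max, scale = 800, 1000, 7.5
    delta = highpass_truncate(eps_cond - eps_uncond, radius_schedule(t, t_max))
    # Filtered classifier-free guidance update at timestep t.
    eps = eps_uncond + scale * delta
```

Because the filter acts only on the guidance difference rather than on the network itself, this kind of step can in principle be dropped into an existing sampling loop without retraining, which is what makes the approach fine-tuning-free.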
