

Poster

Tuning-Free Image Customization with Image and Text Guidance

Pengzhi Li · Qiang Nie · Ying Chen · Xi Jiang · Kai Wu · Yuhuan Lin · Yong Liu · Jinlong Peng · Chengjie Wang · Feng Zheng

[ Project Page ]
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Despite significant advancements in image customization due to diffusion models, current methods still have several limitations: 1) unintended changes in non-target areas when regenerating the entire image; 2) guidance solely by a reference image or text descriptions; and 3) time-consuming fine-tuning, which limits their practical application. In response, we introduce a tuning-free framework for simultaneous text-image-guided image customization, enabling precise editing of specific image regions within seconds. Our approach preserves the semantic features of the reference image subject while allowing modification of detailed attributes based on text descriptions. To achieve this, we propose an innovative attention blending strategy that blends self-attention features in the UNet decoder during the denoising process. To our knowledge, this is the first tuning-free method that concurrently utilizes text and image guidance for specific region image customization. Our approach outperforms previous methods in both subjective and quantitative evaluations, providing an efficient solution for various practical applications, such as image synthesis, design, and creative photography.
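The abstract's core mechanism is blending self-attention features in the UNet decoder so that the reference subject's features are injected only inside the target region, while text-guided features are kept elsewhere. The paper does not give code here, so the following is a minimal, hypothetical sketch of such a masked feature blend (the function name, shapes, and mask convention are assumptions, not the authors' implementation):

```python
import numpy as np

def blend_self_attention(feat_edit, feat_ref, region_mask):
    """Masked blend of decoder self-attention features (illustrative sketch).

    feat_edit:   (H, W, C) features from the text-guided denoising branch
    feat_ref:    (H, W, C) features carrying the reference-image subject
    region_mask: (H, W) binary mask, 1 inside the region to customize
    """
    m = region_mask[..., None].astype(feat_edit.dtype)
    # inject reference features inside the mask, keep edit features outside,
    # which preserves non-target areas of the image
    return m * feat_ref + (1.0 - m) * feat_edit

# toy example: 4x4 spatial map with 2 feature channels
feat_edit = np.zeros((4, 4, 2))
feat_ref = np.ones((4, 4, 2))
region_mask = np.zeros((4, 4))
region_mask[1:3, 1:3] = 1  # customize only the central 2x2 patch
blended = blend_self_attention(feat_edit, feat_ref, region_mask)
```

Because the blend is a single masked interpolation applied during denoising, it requires no gradient updates, which is consistent with the tuning-free, seconds-scale editing the abstract claims.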
