Previous techniques for face reenactment and swapping predominantly rely on GAN frameworks, but recent research has shifted toward large diffusion models owing to their superior generation capabilities. However, training these models incurs significant computational cost, and the results have not yet reached satisfactory performance levels. To address this, we introduce Face-Adapter, an efficient and effective adapter designed for high-precision, high-fidelity face editing with pretrained diffusion models. It comprises: 1) a Spatial Condition Generator that provides precise landmarks and background; 2) a plug-and-play Identity Encoder that transfers face embeddings to the text space via a transformer decoder; and 3) an Attribute Controller that integrates the spatial condition with detailed attributes. Face-Adapter achieves comparable or even superior motion control precision, identity retention, and generation quality relative to fully fine-tuned models on face reenactment and swapping tasks. Additionally, Face-Adapter integrates seamlessly with popular pretrained diffusion models such as StableDiffusion. Full code will be released.
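To make the Identity Encoder component concrete, here is a minimal PyTorch sketch, not the authors' implementation: learnable query tokens cross-attend to a face embedding through a transformer decoder and are emitted in the text-conditioning space of the frozen diffusion model. The dimensions (a 512-d face embedding, 77 query tokens, a 768-d text space) are assumptions matching common ArcFace/StableDiffusion conventions, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class IdentityEncoder(nn.Module):
    """Sketch of a plug-and-play identity encoder: learnable queries
    cross-attend to a face embedding via a transformer decoder,
    producing tokens in the diffusion model's text space.
    Dimensions below are illustrative assumptions."""

    def __init__(self, face_dim=512, text_dim=768, num_queries=77,
                 num_layers=4, num_heads=8):
        super().__init__()
        # One learnable query per output text-space token.
        self.queries = nn.Parameter(torch.randn(num_queries, text_dim))
        # Project the face-recognition embedding to the decoder width.
        self.face_proj = nn.Linear(face_dim, text_dim)
        layer = nn.TransformerDecoderLayer(
            d_model=text_dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, face_emb):
        # face_emb: (B, face_dim) identity vector, e.g. from a face
        # recognition network such as ArcFace (an assumption here).
        memory = self.face_proj(face_emb).unsqueeze(1)  # (B, 1, text_dim)
        queries = self.queries.unsqueeze(0).expand(face_emb.size(0), -1, -1)
        # (B, num_queries, text_dim): fed to the frozen UNet's
        # cross-attention in place of (or alongside) text tokens.
        return self.decoder(queries, memory)

# Example: map a batch of two identity vectors to text-space tokens.
tokens = IdentityEncoder()(torch.randn(2, 512))  # shape (2, 77, 768)
```

Because only the adapter is trained while the diffusion backbone stays frozen, this kind of encoder is what lets the module plug into different pretrained models such as StableDiffusion.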