

Poster

Diffusion Soup: Model Merging for Text-to-Image Diffusion Models

Benjamin J Biggs · Arjun Seshadri · Yang Zou · Achin Jain · Aditya Golatkar · Yusheng Xie · Alessandro Achille · Ashwin Swaminathan · Stefano Soatto

Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract: We present Diffusion Soup, a compartmentalization method for Text-to-Image Generation that averages the weights of diffusion models trained on sharded data. By construction, our approach enables training-free continual learning and unlearning with no additional memory or inference costs, since models corresponding to data shards can be added or removed by re-averaging. We show that Diffusion Soup samples from a point in weight space that approximates the geometric mean of the distributions of the constituent datasets, which reduces model memorization, offers copyright protection guarantees, and enables zero-shot style mixing. Empirically, Diffusion Soup outperforms a paragon model trained on the union of all data shards, achieving a 30\% improvement in Image Reward (IR; .34 $\to$ .44) on domain-sharded data and a 59\% improvement in IR (.37 $\to$ .59) on aesthetic data. In both cases, souping also yields higher TIFA scores (85.5 $\to$ 86.5 and 85.6 $\to$ 86.8, respectively). We demonstrate robust unlearning: removing any individual domain shard lowers IR by only 1\% (.45 $\to$ .44). We also validate our theoretical insights on copyright protection on real data. Finally, we showcase Diffusion Soup's ability to blend the distinct styles of models finetuned on different shards, resulting in zero-shot generation of hybrid styles.
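Concretely, souping is a uniform average in weight space: given shard models with weights $\theta_1, \dots, \theta_n$, the soup is $\bar\theta = \frac{1}{n}\sum_{i=1}^n \theta_i$, and the geometric-mean claim refers to a density proportional to $\prod_{i=1}^n p_i(x)^{1/n}$, where $p_i$ is the distribution of the $i$-th constituent model. Below is a minimal sketch of the averaging and re-averaging idea, assuming same-architecture models exposed as PyTorch-style state dicts; the names (soup, add_shard, remove_shard) are illustrative and not taken from the paper's code.

import torch

def soup(state_dicts):
    """Uniformly average state dicts with identical keys and shapes."""
    n = len(state_dicts)
    return {k: sum(sd[k] for sd in state_dicts) / n for k in state_dicts[0]}

def add_shard(avg, n, new_sd):
    """Continual learning: fold one more shard model into an average of n."""
    return {k: (avg[k] * n + new_sd[k]) / (n + 1) for k in avg}, n + 1

def remove_shard(avg, n, old_sd):
    """Unlearning: subtract a shard model's contribution from an average of n."""
    return {k: (avg[k] * n - old_sd[k]) / (n - 1) for k in avg}, n - 1

# Toy demo with two "models" of one tensor each.
a = {"w": torch.tensor([1.0, 3.0])}
b = {"w": torch.tensor([3.0, 5.0])}
avg = soup([a, b])                                    # {"w": tensor([2., 4.])}
avg, n = add_shard(avg, 2, {"w": torch.tensor([5.0, 7.0])})
avg, n = remove_shard(avg, n, b)                      # average of a and the new shard

Neither update requires retraining, which is what makes the continual learning and unlearning described in the abstract training-free.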
