

Poster

MONTAGE: Monitoring Training for Attribution of Generative Diffusion Models

Jonathan Brokman · Omer Hofman · Roman Vainshtein · Amit Giloni · Toshiya Shimizu · Inderjeet Singh · Oren Rachmil · Alon Zolfi · Asaf Shabtai · Yuki Unno · Hisashi Kojima

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Diffusion models have revolutionized image generation, but they face intellectual-property challenges: a generated image may be influenced by one or more copyrighted images in the training data. Pinpointing the influential training images, a task known as data attribution, is therefore crucial for clarifying content origins. We introduce MONTAGE, a pioneering data attribution method. Unlike existing approaches, which overlook the internal workings of the training process, MONTAGE integrates a novel technique for monitoring generations throughout training via internal model representations. It is tailored to customized diffusion models, where access to the training process is a practical assumption. This approach, coupled with a new loss function, improves both the accuracy and the granularity of the attributions. MONTAGE's advantage is evaluated at two granularity levels, semantic concept (including mixed-concept images) and individual image, with promising results. These findings underline MONTAGE's role in addressing copyright concerns in AI-generated digital art and media while enriching our understanding of the generative process.
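To make the core idea concrete, below is a minimal, hypothetical sketch of attribution via internal representations. It is not the MONTAGE algorithm (the abstract does not specify the architecture, monitored layers, or loss); it only illustrates the general mechanism the abstract describes: hook an intermediate layer of a (toy, stand-in) denoiser, record per-example features for the customization training set, and attribute a generated image to its nearest training examples by representation similarity. All names (TinyDenoiser, the monitored mid layer, the cosine-similarity ranking) are illustrative assumptions.

```python
# Hypothetical sketch: attribution via monitored internal representations.
# NOT the paper's method; a toy stand-in to illustrate the mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Toy stand-in for a diffusion model's denoising network."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.mid = nn.Conv2d(16, 16, 3, padding=1)   # the layer we monitor
        self.dec = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        return self.dec(F.relu(self.mid(self.enc(x))))

model = TinyDenoiser()
captured = {}

def hook(_module, _inputs, output):
    # Spatially pooled activation -> one feature vector per image.
    captured["feat"] = output.mean(dim=(2, 3)).detach()

model.mid.register_forward_hook(hook)

# "Customization set": 8 random images standing in for training data.
train_images = torch.randn(8, 3, 32, 32)

# During (mock) training passes, record each example's representation.
# In a real training-monitoring setup these would be logged across steps.
bank = []
for img in train_images:
    model(img.unsqueeze(0))
    bank.append(captured["feat"].squeeze(0))
bank = torch.stack(bank)                     # (num_train, feat_dim)

# Attribution: run a generated image through the same monitored layer
# and rank training images by cosine similarity of representations.
generated = train_images[3] + 0.05 * torch.randn(3, 32, 32)
model(generated.unsqueeze(0))
scores = F.cosine_similarity(captured["feat"], bank)
print("most influential training index:", scores.argmax().item())
```

The sketch collects features in a single pass; the abstract's approach instead monitors generations throughout training, which is where the extra attribution signal would come from.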
