

Poster

MultiDelete for Multimodal Machine Unlearning

Jiali Cheng · Hadi Amiri

# 83
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Machine Unlearning removes specific knowledge about training data samples and their corresponding effects from an already trained model. It has significant practical benefits, such as purging private, inaccurate, or outdated information from trained models without the need for complete re-training. Unlearning within a multimodal setting presents unique challenges due to the intrinsic dependencies between different data modalities and the expensive cost of training on large multimodal datasets and architectures. This paper presents the first machine unlearning approach for multimodal data and models, titled MultiDelete, which is designed to decouple associations between unimodal data points during unlearning without losing the overall representation strength of the trained model. MultiDelete advocates for three key properties for effective multimodal unlearning: (a) modality decoupling, which effectively decouples the association between individual unimodal data points marked for deletion, rendering them as unrelated data points; (b) multimodal knowledge retention, which retains the multimodal representation capability of the model post-unlearning; and (c) unimodal knowledge retention, which retains the unimodal representation capability of the model post-unlearning. MultiDelete is efficient to train and is not constrained by a strongly convex loss--a common restriction among many existing baselines. Experiments on two multimodal architectures and four datasets, including image-text and graph-text datasets, show that MultiDelete gains an average improvement of 17.6 points over the best-performing baseline in unlearning multimodal training samples, can maintain the multimodal and unimodal knowledge of the original model post-unlearning, can provide better protection to unlearned data, and is robust against adversarial attacks.
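The three properties in the abstract can be read as three loss terms: push apart the cross-modal embeddings of pairs marked for deletion, while anchoring retained pairs' cross-modal similarity and each modality's embeddings to the original model. The following is a minimal NumPy sketch of such a combined objective; the function name, loss forms, and weights are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def multimodal_unlearning_loss(img, txt, img_orig, txt_orig, forget_mask,
                               alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical three-term unlearning objective (illustrative only).

    img, txt           -- embeddings from the model being unlearned (N x d)
    img_orig, txt_orig -- embeddings from the frozen original model (N x d)
    forget_mask        -- boolean array marking pairs to unlearn
    """
    forget_idx = np.where(forget_mask)[0]
    retain_idx = np.where(~forget_mask)[0]

    # (a) Modality decoupling: drive the cross-modal similarity of
    #     forget pairs toward zero, making them look unrelated.
    decouple = (np.mean([cosine(img[i], txt[i]) ** 2 for i in forget_idx])
                if len(forget_idx) else 0.0)

    # (b) Multimodal knowledge retention: keep retained pairs'
    #     cross-modal similarity close to the original model's.
    mm_retain = (np.mean([(cosine(img[i], txt[i])
                           - cosine(img_orig[i], txt_orig[i])) ** 2
                          for i in retain_idx])
                 if len(retain_idx) else 0.0)

    # (c) Unimodal knowledge retention: keep each modality's
    #     embeddings close to the original model's.
    uni_retain = (np.mean((img - img_orig) ** 2)
                  + np.mean((txt - txt_orig) ** 2))

    return alpha * decouple + beta * mm_retain + gamma * uni_retain
```

With this sketch, the loss is zero when the unlearned model matches the original and the forget pairs are already orthogonal, and grows as forget pairs remain associated or retained knowledge drifts.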
