Poster

Finding Visual Task Vectors

Alberto Hojel · Yutong Bai · Trevor Darrell · Amir Globerson · Amir Bar

Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Visual Prompting is a technique for teaching models to perform a visual task via in-context examples, without any additional training. In this work, we analyze the activations of MAE-VQGAN, a recent Visual Prompting model, and find Task Vectors: activations that encode task-specific information. Equipped with this insight, we demonstrate that it is possible to identify the task vectors and use them to guide the network towards performing different tasks without providing any input-output examples. We propose a two-step approach to identifying task vectors: first, we rank the model activations by a relevance score; then, we apply a simple greedy search algorithm to select the task vectors from the top-scoring activations. Surprisingly, patching the resulting task vectors makes it possible to control the desired task output and achieve performance competitive with the original model across multiple tasks, while reducing the need for input-output examples.
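The two-step procedure in the abstract (rank activations by relevance, then greedily keep those whose patching improves the task) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the candidate names, the `relevance` scores, and the `evaluate` callback (which would run the patched model and return a task loss) are all hypothetical stand-ins.

```python
def greedy_select(candidates, evaluate, top_k=5):
    """Rank candidate activations by relevance, then greedily keep the
    ones whose patching lowers the task loss returned by `evaluate`."""
    # Step 1: rank activations by relevance score (higher = more task-specific)
    # and restrict the search to the top-scoring ones.
    ranked = sorted(candidates, key=lambda c: c["relevance"], reverse=True)[:top_k]

    # Step 2: greedy search — add a candidate only if patching it helps.
    selected, best_loss = [], evaluate([])
    for cand in ranked:
        trial = selected + [cand["name"]]
        loss = evaluate(trial)
        if loss < best_loss:
            selected, best_loss = trial, loss
    return selected, best_loss


# Toy stand-in for the patched-model evaluation: here, patching "head_3"
# and "head_7" reduces a synthetic task loss, while other patches hurt it.
useful = {"head_3": 0.4, "head_7": 0.3}

def toy_evaluate(patched):
    return 1.0 - sum(useful.get(name, -0.05) for name in patched)

candidates = [
    {"name": "head_3", "relevance": 0.9},
    {"name": "head_1", "relevance": 0.8},
    {"name": "head_7", "relevance": 0.7},
]
selected, loss = greedy_select(candidates, toy_evaluate)
```

In the toy run, "head_1" is ranked highly but rejected because patching it raises the loss, mirroring how the greedy search filters the relevance-ranked candidates.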
