Poster
CLIP-Guided Generative Networks for Transferable Targeted Adversarial Attacks
Hao Fang · Jiawei Kong · Bin Chen · Tao Dai · Hao Wu · Shu-Tao Xia
# 36
Strong Double Blind
Transferable targeted adversarial attacks aim to mislead models into outputting adversary-specified predictions in black-box scenarios, raising significant security concerns. Recent studies have introduced single-target generative attacks that train a generator for each target class to generate highly transferable perturbations, resulting in substantial computational overhead when handling multiple classes. Multi-target attacks address this by training only one class-conditional generator for multiple classes. However, the generator simply uses class labels as conditions, failing to leverage the rich semantic information of the target class. To this end, we design a CLIP-guided Generative Network with Cross-attention modules (CGNC) to enhance multi-target attacks by incorporating textual knowledge of CLIP into the generator. Extensive experiments demonstrate that CGNC yields significant improvements over previous multi-target generative attacks, e.g., a 21.46% improvement in success rate when transferring from ResNet-152 to DenseNet-121. Moreover, we propose a masked fine-tuning mechanism to further strengthen our method in attacking a single class, which surpasses existing single-target methods.
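The abstract describes conditioning the perturbation generator on CLIP text embeddings through cross-attention. A minimal sketch of such a cross-attention fusion step is shown below; the shapes, projection matrices, and residual fusion are illustrative assumptions, not the paper's exact CGNC architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_feats, text_feats, d_k=64, seed=0):
    """Single-head cross-attention: generator image features (queries)
    attend to CLIP text-embedding tokens (keys/values).
    All dimensions and random projections are illustrative."""
    rng = np.random.default_rng(seed)
    d_img, d_txt = img_feats.shape[-1], text_feats.shape[-1]
    W_q = rng.standard_normal((d_img, d_k)) / np.sqrt(d_img)
    W_k = rng.standard_normal((d_txt, d_k)) / np.sqrt(d_txt)
    W_v = rng.standard_normal((d_txt, d_img)) / np.sqrt(d_txt)
    Q = img_feats @ W_q                      # (n_pixels, d_k)
    K = text_feats @ W_k                     # (n_tokens, d_k)
    V = text_feats @ W_v                     # (n_tokens, d_img)
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_pixels, n_tokens)
    return img_feats + attn @ V              # residual fusion of text info

# Toy shapes: 16 spatial positions x 128-dim features,
# 8 text tokens x 512-dim CLIP embeddings (hypothetical sizes).
fused = cross_attention(np.zeros((16, 128)), np.ones((8, 512)))
print(fused.shape)  # (16, 128)
```

In the actual method, such a module would sit inside the class-conditional generator so that target-class text semantics, rather than a bare class label, steer perturbation synthesis.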