

Poster #17

Optimization-based Uncertainty Attribution Via Learning Informative Perturbations

Hanjing Wang · Bashirul Azam Biswas · Qiang Ji

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Uncertainty Attribution (UA) aims to explain the sources of uncertainty in deep learning models by identifying the key contributors to predictive uncertainty. To improve the faithfulness of existing UA methods, we formulate UA as an optimization problem that learns a binary mask on the input, identifying the regions that contribute most to output uncertainty. The learned mask then enables uncertainty reduction by learning informative perturbations of the masked regions. Our method enhances UA interpretability while maintaining high efficiency by integrating three key improvements: Segment Anything Model (SAM)-guided mask parameterization for efficient and interpretable mask learning; learnable perturbations that surpass traditional techniques by adaptively targeting and refining the problematic regions of each input, without manual tuning of perturbation parameters; and a novel application of Gumbel-sigmoid reparameterization for efficiently learning Bernoulli-distributed binary masks under continuous optimization. Experiments on detecting problematic regions and on faithfulness tests demonstrate our method's superiority over state-of-the-art UA methods.
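The pipeline the abstract describes lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of the three ingredients: one Bernoulli logit per SAM segment (rather than per pixel), a Gumbel-sigmoid relaxation so the binary mask can be learned by gradient descent, and a jointly learned perturbation applied to the masked regions. The uncertainty objective here is plain predictive entropy, and every name and hyperparameter (`model`, `segments`, `tau`, `sparsity`) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch of the mask-learning idea from the abstract, assuming a
# PyTorch classifier whose predictive entropy serves as the uncertainty
# measure. The objective, optimizer, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def gumbel_sigmoid(logits, tau=0.5):
    """Differentiable sample from a relaxed Bernoulli via Gumbel-sigmoid."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    logistic_noise = torch.log(u) - torch.log1p(-u)  # Logistic(0, 1) noise
    return torch.sigmoid((logits + logistic_noise) / tau)

def predictive_entropy(logits):
    """Entropy of the softmax predictive distribution (uncertainty proxy)."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()

def learn_mask(model, x, segments, num_steps=200, tau=0.5, sparsity=0.1):
    """Learn a binary mask over SAM segments plus an informative perturbation.

    x:        input image, shape (1, C, H, W)
    segments: (K, H, W) boolean masks from SAM, one per segment
    """
    # One logit per SAM segment instead of per pixel: far fewer parameters,
    # and the resulting masks align with object boundaries.
    seg_logits = torch.zeros(segments.shape[0], requires_grad=True)
    # Learnable perturbation replaces hand-tuned blurring/noising baselines.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([seg_logits, delta], lr=0.05)

    for _ in range(num_steps):
        # Relaxed Bernoulli sample per segment, scattered to pixel space.
        s = gumbel_sigmoid(seg_logits, tau)                    # (K,)
        mask = (s[:, None, None] * segments.float()).sum(0)    # (H, W)
        mask = mask.clamp(0, 1)                                # segments may overlap
        # mask = 1 marks problematic regions; perturbing them should
        # reduce uncertainty, while the sparsity term keeps the mask small.
        x_pert = x * (1 - mask) + (x + delta) * mask
        loss = predictive_entropy(model(x_pert)) + sparsity * mask.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Hard binary decision per segment for the final attribution map.
    return (torch.sigmoid(seg_logits) > 0.5), delta.detach()
```

Minimizing post-perturbation entropy plus a sparsity penalty selects the smallest set of segments whose perturbation suppresses the model's uncertainty, which is the attribution criterion the abstract describes; the Gumbel-sigmoid relaxation is what makes the discrete segment selection amenable to continuous optimization.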
