

Poster

Layer-Wise Relevance Propagation with Conservation Property for ResNet

Seitaro Otsuki · Tsumugi Iida · Félix Doublet · Tsubasa Hirakawa · Takayoshi Yamashita · Hironobu Fujiyoshi · Komei Sugiura

# 35
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Transparent formulation of explanation methods is essential for elucidating the predictions of neural network models, which are commonly of a black-box nature. Layer-wise Relevance Propagation (LRP) stands out as a well-established method that transparently traces the flow of a model's prediction backward through its architecture by backpropagating relevance scores. However, LRP has not fully considered the existence of skip connections, and its application to the widely used ResNet architecture has not been thoroughly explored. In this study, we extend LRP to ResNet models by introducing relevance splitting at the point where the outputs of a skip connection and a residual block converge. Moreover, our formulation ensures that the conservation property is maintained throughout the process, thereby preserving the integrity of the generated explanations. To evaluate the effectiveness of our approach, we conducted experiments on the ImageNet and CUB datasets. Our method demonstrated superior performance over baseline methods on standard evaluation metrics such as the Insertion-Deletion score, while maintaining the conservation property.
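The abstract does not spell out the exact splitting rule, but a natural reading is a proportional division of relevance at the addition node where the two branches of a residual block merge. The following is a minimal PyTorch sketch under that assumption: the function name `split_relevance`, the stabilizer `eps`, and the tensor shapes are illustrative choices, not the paper's implementation.

```python
import torch

def split_relevance(a_skip, a_res, relevance, eps=1e-9):
    """Split incoming relevance at the point where a skip-connection
    output (a_skip) and a residual-block output (a_res) are summed.

    Relevance is divided in proportion to each branch's contribution
    to the sum, so r_skip + r_res equals the incoming relevance (up to
    the stabilizer eps) and conservation holds across the addition.
    """
    total = a_skip + a_res
    # Sign-aware stabilizer keeps the division well-defined near zero.
    sign = torch.where(total >= 0, torch.ones_like(total), -torch.ones_like(total))
    ratio = relevance / (total + eps * sign)
    return a_skip * ratio, a_res * ratio

# Example: relevance entering the addition equals relevance leaving it.
a_skip = torch.randn(2, 64, 8, 8)
a_res = torch.randn(2, 64, 8, 8)
relevance = torch.rand(2, 64, 8, 8)
r_skip, r_res = split_relevance(a_skip, a_res, relevance)
print((r_skip + r_res - relevance).abs().max())  # ~0: relevance is conserved
```

Because the two shares sum back to the incoming relevance, the total relevance is unchanged by the split, which is the conservation property the abstract emphasizes.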
