

Poster

Graph Neural Network Causal Explanation via Neural Causal Models

Arman Behnam · Binghui Wang

# 95
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Graph neural network (GNN) explainers for graph classification aim to identify the important subgraph that is responsible for the prediction on a given graph. To date, most existing GNN explainers are association-based, and the few that are causality-inspired remain association-based in essence. Association-based explainers are known to be prone to spurious correlations. We propose CXGNN, a GNN causal explainer via causal inference. Our explainer is built on the observation that a graph often contains a causal subgraph. Specifically, CXGNN consists of three main steps: 1) Building the causal structure and the corresponding structural causal model (SCM) for a graph, which enables cause-effect calculation among nodes. 2) Since directly calculating cause-effects in real-world graphs is computationally challenging, we draw on the recently proposed neural causal model (NCM), a special type of SCM that is trainable, and design customized NCMs for GNNs. By training these GNN NCMs, the cause-effects can be easily calculated. 3) Uncovering the subgraph that causally explains the GNN predictions via the well-trained GNN NCMs. Evaluation results on multiple synthetic and real-world graphs validate that CXGNN significantly outperforms existing GNN explainers in exactly finding the ground-truth explanations.
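The abstract only sketches the pipeline, but the core idea of a trainable SCM can be illustrated concretely. Below is a minimal, hypothetical sketch (not the authors' CXGNN implementation): for a candidate causal structure over a handful of nodes, one small neural mechanism is fit per node from its parents, and the resulting fit is used as an illustrative proxy score for how well that structure explains the observed node values. All class names, the scoring rule, and the toy data are assumptions for illustration.

```python
# Hypothetical sketch of fitting a small neural causal model (one mechanism per
# node, parents + noise -> node) over a candidate causal structure. This is an
# illustration of the general NCM idea, not the CXGNN algorithm itself.
import torch
import torch.nn as nn


class NodeMechanism(nn.Module):
    """f_v: maps the values of v's parents (plus exogenous noise) to v's value."""

    def __init__(self, num_parents: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_parents + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, parent_vals: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([parent_vals, noise], dim=-1))


def fit_ncm(adj: torch.Tensor, node_vals: torch.Tensor, epochs: int = 200, lr: float = 1e-2):
    """Fit one mechanism per node given a candidate structure.

    adj[i, j] = 1 means node i is a parent of node j; node_vals is (num_samples, n).
    Returns the mechanisms and the final reconstruction loss, used here (as an
    assumption) as a proxy for how well the candidate structure explains the data.
    """
    n = adj.shape[0]
    mechanisms = [NodeMechanism(int(adj[:, j].sum())) for j in range(n)]
    params = [p for m in mechanisms for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        loss = torch.zeros(())
        for j in range(n):
            parents = adj[:, j].bool()
            noise = torch.randn(node_vals.shape[0], 1)
            pred = mechanisms[j](node_vals[:, parents], noise)
            loss = loss + ((pred.squeeze(-1) - node_vals[:, j]) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mechanisms, loss.item()


# Toy usage: a lower final loss suggests the candidate structure better explains
# the observed node values (purely illustrative data and criterion).
adj = torch.tensor([[0.0, 1.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])  # 3-node DAG
vals = torch.randn(64, 3)
_, score = fit_ncm(adj, vals)
print(f"reconstruction loss for candidate structure: {score:.4f}")
```

In spirit, such a per-node trainable mechanism is what makes cause-effect quantities computable by optimization rather than by exhaustive enumeration; how CXGNN customizes this for GNNs and selects the explanatory subgraph is detailed in the paper itself.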
