Paper Index of XAI for GNNs in ICDE

Year 2024

1. SES: Bridging the Gap Between Explainability and Prediction of Graph Neural Networks. Zhenhua Huang, Kunhao Li, Shaojie Wang, Zhaohong Jia, Wentao Zhu, Sharad Mehrotra. [paper]

  1. Abstract

    Despite the Graph Neural Networks' (GNNs) proficiency in analyzing graph data, achieving high-accuracy and interpretable predictions remains challenging. Existing GNN interpreters typically provide post-hoc explanations disjointed from GNNs' predictions, resulting in misrepresentations. Self-explainable GNNs offer built-in explanations during the training process. However, they cannot exploit the explanatory outcomes to augment prediction performance, and they fail to provide high-quality explanations of node features and require additional processes to generate explainable subgraphs, which is costly. To address the aforementioned limitations, we propose a self-explained and self-supervised graph neural network (SES) to bridge the gap between explainability and prediction. SES comprises two processes: explainable training and enhanced predictive learning. During explainable training, SES employs a global mask generator co-trained with a graph encoder and directly produces crucial structure and feature masks, reducing time consumption and providing node feature and subgraph explanations. In the enhanced predictive learning phase, mask-based positive-negative pairs are constructed utilizing the explanations to compute a triplet loss and enhance the node representations by contrastive learning. Extensive experiments demonstrate the superiority of SES on multiple datasets and tasks. SES outperforms baselines on real-world node classification datasets by notable margins of up to 2.59% and achieves state-of-the-art (SOTA) performance in explanation tasks on synthetic datasets with improvements of up to 3.0%. Moreover, SES delivers more coherent explanations on real-world datasets, has a fourfold increase in Fidelity+ score for explanation quality, and demonstrates faster training and explanation generation times. To our knowledge, SES is a pioneering GNN to achieve SOTA performance on both explanation and prediction tasks.
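
  2. Illustrative sketch (not from the paper)

    A minimal, hypothetical PyTorch sketch of the mask-then-contrast idea summarized in the abstract: a global feature mask and structure mask are learned jointly with a simple graph encoder, the masked view acts as the positive sample, and the inverse-masked view acts as the negative sample in a triplet loss. The class names, the single-layer encoder, and all dimensions are assumptions for illustration, not SES's actual architecture.

```python
# Hypothetical sketch only; SES's real architecture and training procedure are in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedGraphEncoder(nn.Module):
    """Toy graph encoder with learnable global feature and structure masks."""

    def __init__(self, num_nodes, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.feat_mask = nn.Parameter(torch.zeros(in_dim))                 # feature-mask logits
        self.edge_mask = nn.Parameter(torch.zeros(num_nodes, num_nodes))   # structure-mask logits

    def encode(self, x, adj, feat_gate, edge_gate):
        # One GCN-style propagation step over a (softly) masked graph.
        h = self.lin(x * feat_gate)
        return torch.relu((adj * edge_gate) @ h)

    def forward(self, x, adj):
        f, e = torch.sigmoid(self.feat_mask), torch.sigmoid(self.edge_mask)
        anchor = self.encode(x, adj, torch.ones_like(f), torch.ones_like(e))   # unmasked view
        positive = self.encode(x, adj, f, e)           # "explained" view keeps the important parts
        negative = self.encode(x, adj, 1 - f, 1 - e)   # complement keeps the unimportant parts
        return anchor, positive, negative


# Usage sketch with random data: 6 nodes, 8 input features.
x, adj = torch.randn(6, 8), torch.eye(6)
model = MaskedGraphEncoder(num_nodes=6, in_dim=8, hid_dim=16)
anchor, pos, neg = model(x, adj)
loss = F.triplet_margin_loss(anchor, pos, neg)   # pulls each node toward its explained view
loss.backward()
```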

Year 2023

1. A Bayesian Graph Neural Network for EEG Classification - A Win-Win on Performance and Interpretability. Jing Wang, Xiaojun Ning, Wangjun Shi, Youfang Lin. [paper]

  1. Abstract

    With the deepening of neuroscience research, data mining of brain signals is becoming an emerging topic. Among various brain signals, electroencephalography (EEG) has attracted more and more attention due to its advantages of non-invasiveness, portability, and low cost. EEG modeling and analysis play a vital role in human healthcare. Although many machine learning algorithms have been successfully applied to data mining of EEG signals, few of them achieve a win-win in classification performance and interpretability. In this paper, we propose a Bayesian graph neural network named BayesEEGNet. Considering an electrical impulse between two nodes in the brain as a Poisson process, the countless electrical impulses generated by the brain in a period are represented as an infinite number of connection probability graphs. After coupling and transforming these probability graphs, we interpret the brain’s electrical activity state as the brain’s perceptual state. Benefiting from the joint optimization of Bayesian modules and deep neural networks, our model shows superior classification performance in sleep stage classification and emotion recognition tasks. Meanwhile, our model is able to learn interpretable functional connectivity relationships between EEG channels without any prior knowledge.
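
  2. Illustrative sketch (not from the paper)

    A hypothetical, simplified illustration of the Poisson-process view described in the abstract: if impulses between channels i and j follow a Poisson process with rate lambda_ij, the probability of at least one impulse within a window of length T is 1 - exp(-lambda_ij * T); the resulting probabilistic adjacency can be read off as functional connectivity and drive a small graph classifier. The learnable rates, the classifier head, and all dimensions are assumptions; the paper's Bayesian inference machinery is omitted.

```python
# Hypothetical illustration only; this is not the authors' BayesEEGNet implementation.
import torch
import torch.nn as nn


class PoissonConnectivityClassifier(nn.Module):
    """Learnable Poisson rates per channel pair -> connection probabilities -> graph classifier."""

    def __init__(self, n_channels, feat_dim, n_classes, window=1.0):
        super().__init__()
        self.log_rate = nn.Parameter(torch.zeros(n_channels, n_channels))  # log lambda_ij
        self.window = window
        self.lin = nn.Linear(feat_dim, 64)
        self.cls = nn.Linear(64, n_classes)

    def connection_probs(self):
        rate = torch.exp(self.log_rate)               # lambda_ij > 0
        p = 1.0 - torch.exp(-rate * self.window)      # P(at least one impulse in the window)
        return 0.5 * (p + p.t())                      # symmetric functional connectivity

    def forward(self, x):                             # x: (n_channels, feat_dim) EEG features
        adj = self.connection_probs()
        h = torch.relu(adj @ self.lin(x))             # one message-passing step over the soft graph
        return self.cls(h.mean(dim=0))                # graph-level class logits


# Usage sketch: 19 EEG channels with 128-dimensional features, 5 sleep stages.
model = PoissonConnectivityClassifier(n_channels=19, feat_dim=128, n_classes=5)
logits = model(torch.randn(19, 128))
connectivity = model.connection_probs()               # interpretable channel-to-channel graph
```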