Interpreting node embedding with text-labeled graphs

Giuseppe Serra, Zhao Xu, Mathias Niepert, Carolin Lawrence, Peter Tiňo, Xin Yao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graph neural networks have recently received increasing attention. These methods often map nodes into a latent space and learn vector representations of the nodes for a variety of downstream tasks. To gain trust and to promote collaboration between AI and humans, it would be better if those representations were interpretable to humans. However, most explainable AI methods focus on a supervised learning setting and aim to answer the following question: “Why does the model predict y for an input x?”. For an unsupervised learning setting such as node embedding, interpretation can be more complicated, since the embedding vectors are usually not understandable to humans. On the other hand, in many real-world applications the nodes and edges of a graph are associated with texts. A question naturally arises: could we integrate this human-understandable textual data into graph learning to facilitate interpretable node embedding? In this paper we present interpretable graph neural networks (iGNN), a model that learns textual explanations for node representations by modeling the extra information contained in the associated textual data. To validate the proposed method, we investigate the interpretability of the learned embedding vectors, measured via functional interpretability. Experimental results on multiple text-labeled graphs show the effectiveness of the iGNN model at learning textual explanations of node embeddings while performing well on downstream tasks.
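
For illustration only (this is not the iGNN model described in the abstract, and all names below are hypothetical): a minimal Python sketch of one way node embeddings can mix graph structure with bag-of-words text features, using a single GCN-style propagation step with a random projection standing in for learned weights.

# Hypothetical sketch: combine graph structure and text features
# into node embeddings with one GCN-style propagation step.
import numpy as np

def gcn_embed(adj, text_features, dim=16, seed=0):
    """adj: (n, n) adjacency matrix; text_features: (n, v) bag-of-words counts."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                       # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalisation
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=(text_features.shape[1], dim))  # random projection in place of learned weights
    return np.tanh(a_norm @ text_features @ w)    # (n, dim) node embeddings

# toy usage: 3 nodes, 4-word vocabulary
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
text = np.array([[2, 0, 1, 0], [0, 1, 1, 0], [0, 0, 0, 3]], dtype=float)
print(gcn_embed(adj, text).shape)  # (3, 16)
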
Original language: English
Title of host publication: 2021 International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE
Pages: 1-8
Number of pages: 8
ISBN (Electronic): 9781665439008, 9780738133669
ISBN (Print): 9781665445979 (PoD)
Publication status: Published - 20 Sept 2021
Event: 2021 International Joint Conference on Neural Networks (IJCNN) - Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021

Publication series

Name: International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE
ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407

Conference

Conference: 2021 International Joint Conference on Neural Networks (IJCNN)
Period: 18/07/21 - 22/07/21

Bibliographical note

Funding Information:
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 766186.

Publisher Copyright:
© 2021 IEEE.

Keywords

  • Training
  • Statistical analysis
  • Supervised learning
  • Collaboration
  • Predictive models
  • Linear programming
  • Graph neural networks
  • Node embedding
  • interpretability
  • text mining

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
