Abstract
Along with the great success of deep neural networks, there is growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers from the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
Original language | English |
---|---|
Pages (from-to) | 726-742 |
Number of pages | 17 |
Journal | IEEE Transactions on Emerging Topics in Computational Intelligence |
Volume | 5 |
Issue number | 5 |
Early online date | 24 Aug 2021 |
DOIs | |
Publication status | Published - Oct 2021 |
Bibliographical note
Funding Information: Manuscript received March 3, 2021; revised June 7, 2021; accepted July 9, 2021. Date of publication August 24, 2021; date of current version September 23, 2021. This work was supported in part by the Guangdong Provincial Key Laboratory under Grant 2020B121201001, in part by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant 2017ZT07X386, in part by the Stable Support Plan Program of Shenzhen Natural Science Fund under Grant 20200925154942002, in part by the Science and Technology Commission of Shanghai Municipality under Grant 19511120602, in part by the National Leading Youth Talent Support Program of China, and in part by the MOE University Scientific-Technological Innovation Plan Program. The work of Peter Tino was supported by the European Commission Horizon 2020 Innovative Training Network SUNDIAL (Survey Network for Deep Imaging Analysis and Learning) under Project ID 721463, and also by the Alan Turing Institute under ATI Fellowship 1056900 (Machine Learning in the Space of Inferential Models). (Corresponding author: Ke Tang.) Yu Zhang is with the Guangdong Key Laboratory of Brain-Inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China, with the Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen 518055, China, and also with the School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, U.K. (e-mail: zhangy3@mail.sustech.edu.cn).
Funding Information:
The authors would like to thank MoD/Dstl and EPSRC for providing the grant to support the U.K. academics' involvement in the Department of Defense funded MURI project through EPSRC under Grant EP/N019415/1.
Publisher Copyright:
© 2017 IEEE.
Keywords
- interpretability
- Machine learning
- neural networks
- survey
ASJC Scopus subject areas
- Computer Science Applications
- Control and Optimization
- Computational Mathematics
- Artificial Intelligence
Fingerprint
Dive into the research topics of 'A Survey on Neural Network Interpretability'. Together they form a unique fingerprint.
Projects
- 2 Finished
H2020_ITN_SUNDIAL_Partner
European Commission, European Commission - Management Costs
1/04/17 → 30/09/21
Project: Research