Explainable anatomical shape analysis through deep hierarchical generative models

Research output: Contribution to journal › Article › peer-review

Standard

Explainable anatomical shape analysis through deep hierarchical generative models. / Biffi, Carlo; Cerrolaza, Juan J; Tarroni, Giacomo; Bai, Wenjia; de Marvao, Antonio; Oktay, Ozan; Ledig, Christian; Le Folgoc, Loic; Kamnitsas, Konstantinos; Doumou, Georgia; Duan, Jinming; Prasad, Sanjay K; Cook, Stuart A; O'Regan, Declan P; Rueckert, Daniel.

In: IEEE Transactions on Medical Imaging, Vol. 39, No. 6, 06.2020, p. 2088-2099.

Harvard

Biffi, C, Cerrolaza, JJ, Tarroni, G, Bai, W, de Marvao, A, Oktay, O, Ledig, C, Le Folgoc, L, Kamnitsas, K, Doumou, G, Duan, J, Prasad, SK, Cook, SA, O'Regan, DP & Rueckert, D 2020, 'Explainable anatomical shape analysis through deep hierarchical generative models', IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 2088-2099. https://doi.org/10.1109/TMI.2020.2964499

APA

Biffi, C., Cerrolaza, J. J., Tarroni, G., Bai, W., de Marvao, A., Oktay, O., Ledig, C., Le Folgoc, L., Kamnitsas, K., Doumou, G., Duan, J., Prasad, S. K., Cook, S. A., O'Regan, D. P., & Rueckert, D. (2020). Explainable anatomical shape analysis through deep hierarchical generative models. IEEE Transactions on Medical Imaging, 39(6), 2088-2099. https://doi.org/10.1109/TMI.2020.2964499

Vancouver

Biffi C, Cerrolaza JJ, Tarroni G, Bai W, de Marvao A, Oktay O, Ledig C, Le Folgoc L, Kamnitsas K, Doumou G, Duan J, Prasad SK, Cook SA, O'Regan DP, Rueckert D. Explainable anatomical shape analysis through deep hierarchical generative models. IEEE Transactions on Medical Imaging. 2020 Jun;39(6):2088-2099. https://doi.org/10.1109/TMI.2020.2964499

Author

Biffi, Carlo ; Cerrolaza, Juan J ; Tarroni, Giacomo ; Bai, Wenjia ; de Marvao, Antonio ; Oktay, Ozan ; Ledig, Christian ; Le Folgoc, Loic ; Kamnitsas, Konstantinos ; Doumou, Georgia ; Duan, Jinming ; Prasad, Sanjay K ; Cook, Stuart A ; O'Regan, Declan P ; Rueckert, Daniel. / Explainable anatomical shape analysis through deep hierarchical generative models. In: IEEE Transactions on Medical Imaging. 2020 ; Vol. 39, No. 6. pp. 2088-2099.

Bibtex

@article{2c154fea176545fab5d1da92a059874c,
title = "Explainable anatomical shape analysis through deep hierarchical generative models",
abstract = "Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.",
keywords = "MRI, Shape analysis, explainable deep learning, generative modeling",
author = "Carlo Biffi and Cerrolaza, {Juan J} and Giacomo Tarroni and Wenjia Bai and {de Marvao}, Antonio and Ozan Oktay and Christian Ledig and {Le Folgoc}, Loic and Konstantinos Kamnitsas and Georgia Doumou and Jinming Duan and Prasad, {Sanjay K} and Cook, {Stuart A} and O'Regan, {Declan P} and Daniel Rueckert",
year = "2020",
month = jun,
doi = "10.1109/TMI.2020.2964499",
language = "English",
volume = "39",
pages = "2088--2099",
journal = "IEEE Transactions on Medical Imaging",
issn = "0278-0062",
publisher = "Institute of Electrical and Electronics Engineers (IEEE)",
number = "6",

}

RIS

TY - JOUR

T1 - Explainable anatomical shape analysis through deep hierarchical generative models

AU - Biffi, Carlo

AU - Cerrolaza, Juan J

AU - Tarroni, Giacomo

AU - Bai, Wenjia

AU - de Marvao, Antonio

AU - Oktay, Ozan

AU - Ledig, Christian

AU - Le Folgoc, Loic

AU - Kamnitsas, Konstantinos

AU - Doumou, Georgia

AU - Duan, Jinming

AU - Prasad, Sanjay K

AU - Cook, Stuart A

AU - O'Regan, Declan P

AU - Rueckert, Daniel

PY - 2020/6

Y1 - 2020/6

N2 - Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.

AB - Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three-dimensions of both global and regional anatomical features which better discriminate between the conditions under exam. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.

KW - MRI

KW - Shape analysis

KW - explainable deep learning

KW - generative modeling

UR - http://www.scopus.com/inward/record.url?scp=85085903942&partnerID=8YFLogxK

U2 - 10.1109/TMI.2020.2964499

DO - 10.1109/TMI.2020.2964499

M3 - Article

C2 - 31944949

VL - 39

SP - 2088

EP - 2099

JO - IEEE Transactions on Medical Imaging

JF - IEEE Transactions on Medical Imaging

SN - 0278-0062

IS - 6

ER -
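
Illustrative sketch

The abstract describes a hierarchy of conditional latent variables whose two-dimensional top level is optimised jointly for reconstruction and for discriminating clinical conditions, so that the classification space can be plotted directly and decoded back into segmentations. The Python sketch below is a minimal, hypothetical reading of that idea and is not the authors' code: the class and function names (HierarchicalVAEClassifier, generate, loss_fn), the fully connected layers, the latent dimensions and the loss weights are illustrative assumptions, and the published model differs in architecture and training details.

# Hypothetical sketch of a two-level conditional VAE whose 2-D top latent is
# trained jointly with a classifier, so the decision space can be visualised
# and decoded back into segmentations. Not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalVAEClassifier(nn.Module):
    def __init__(self, in_dim=32 * 32 * 32, h_dim=256, z1_dim=32, z2_dim=2, n_classes=2):
        super().__init__()
        # Bottom-up inference path: segmentation -> z1 -> z2 (2-D, plottable)
        self.enc1 = nn.Sequential(nn.Linear(in_dim, h_dim), nn.ReLU())
        self.mu1, self.logvar1 = nn.Linear(h_dim, z1_dim), nn.Linear(h_dim, z1_dim)
        self.enc2 = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU())
        self.mu2, self.logvar2 = nn.Linear(h_dim, z2_dim), nn.Linear(h_dim, z2_dim)
        # Top-down generative path: z2 conditions z1, z1 decodes the segmentation
        self.prior1 = nn.Sequential(nn.Linear(z2_dim, h_dim), nn.ReLU())
        self.prior1_mu = nn.Linear(h_dim, z1_dim)
        self.prior1_logvar = nn.Linear(h_dim, z1_dim)
        self.dec = nn.Sequential(nn.Linear(z1_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, in_dim))
        # Classifier on the 2-D latent keeps the decision space directly visualisable
        self.clf = nn.Linear(z2_dim, n_classes)

    @staticmethod
    def reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h1 = self.enc1(x.flatten(1))
        mu1, logvar1 = self.mu1(h1), self.logvar1(h1)
        z1 = self.reparam(mu1, logvar1)
        h2 = self.enc2(z1)
        mu2, logvar2 = self.mu2(h2), self.logvar2(h2)
        z2 = self.reparam(mu2, logvar2)
        hp = self.prior1(z2)                       # conditional prior p(z1 | z2)
        p_mu1, p_logvar1 = self.prior1_mu(hp), self.prior1_logvar(hp)
        recon_logits = self.dec(z1)
        class_logits = self.clf(mu2)               # classify from the 2-D posterior mean
        return recon_logits, class_logits, (mu1, logvar1, p_mu1, p_logvar1), (mu2, logvar2)

    @torch.no_grad()
    def generate(self, z2):
        # Decode a point of the 2-D latent space into a segmentation, which is
        # what lets the discriminative directions be shown anatomically.
        hp = self.prior1(z2)
        return torch.sigmoid(self.dec(self.prior1_mu(hp)))

def loss_fn(x, y, outputs, beta=1.0, gamma=1.0):
    recon_logits, class_logits, (mu1, lv1, p_mu1, p_lv1), (mu2, lv2) = outputs
    # Reconstruction of the (binary) segmentation
    recon = F.binary_cross_entropy_with_logits(recon_logits, x.flatten(1), reduction="sum")
    # KL(q(z1|x) || p(z1|z2)) between diagonal Gaussians
    kl1 = 0.5 * torch.sum(p_lv1 - lv1 + (lv1.exp() + (mu1 - p_mu1) ** 2) / p_lv1.exp() - 1)
    # KL(q(z2|x) || N(0, I))
    kl2 = -0.5 * torch.sum(1 + lv2 - mu2 ** 2 - lv2.exp())
    # Cross-entropy on the class labels makes the 2-D latent discriminative
    ce = F.cross_entropy(class_logits, y, reduction="sum")
    return recon + beta * (kl1 + kl2) + gamma * ce

Under these assumptions, plotting mu2 for a cohort gives the two-dimensional classification space described in the abstract, and decoding a grid of z2 values through generate shows, in the segmentation space, which global and regional shape features each region of that space corresponds to.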