Anatomy-Aware Self-supervised Fetal MRI Synthesis from Unpaired Ultrasound Images

Jianbo Jiao*, Ana I.L. Namburete, Aris T. Papageorghiou, J. Alison Noble

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for anomaly screening; for this, ultrasound (US) is employed. While expert sonographers are adept at reading US images, MR images are much easier for non-experts to interpret. Hence, in this paper we seek to produce images with MRI-like appearance directly from clinical US images. Our own clinical motivation is to find a way to communicate US findings to patients or clinical professionals unfamiliar with US, but in medical image analysis such a capability is potentially useful more broadly, for instance for US-MRI registration or fusion. Our model is self-supervised and end-to-end trainable. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first use an extractor to determine shared latent features, which are then used for data synthesis. Since paired data were unavailable for our study (and are rare in practice), we propose to enforce similarity between the distributions, rather than employing pixel-wise constraints, through adversarial learning in both the image domain and the latent space. Furthermore, we propose an adversarial structural constraint to regularise the anatomical structures between the two modalities during synthesis, and a cross-modal attention scheme to leverage non-local spatial correlations. The feasibility of the approach in producing realistic-looking MR images is demonstrated quantitatively and through a qualitative evaluation against real fetal MR images.
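The cross-modal attention scheme mentioned in the abstract can be sketched as a non-local operation in which each ultrasound feature location attends over all MRI latent feature locations. The sketch below is illustrative only: the function names, feature shapes, and the scaled dot-product formulation are assumptions for exposition, not the authors' published implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(us_feat, mri_feat):
    """Illustrative non-local cross-modal attention (hypothetical sketch).

    us_feat:  (N, C) ultrasound feature vectors (queries), one per spatial location
    mri_feat: (M, C) MRI latent feature vectors (keys/values)
    Returns:  (N, C) US features refined by attending over all MRI locations.
    """
    C = us_feat.shape[1]
    # Similarity between every US location and every MRI location (N, M)
    scores = us_feat @ mri_feat.T / np.sqrt(C)
    # Each US location's attention distribution over MRI locations
    weights = softmax(scores, axis=-1)
    # Weighted sum of MRI features, folded back in via a residual connection
    attended = weights @ mri_feat
    return us_feat + attended

# Toy example: a flattened 4x4 US grid attending over a flattened 4x4 MRI grid
rng = np.random.default_rng(0)
us = rng.standard_normal((16, 32))
mri = rng.standard_normal((16, 32))
out = cross_modal_attention(us, mri)
print(out.shape)  # (16, 32)
```

Because every query attends over every key location, the operation captures long-range (non-local) spatial correlations between the two modalities, at O(N·M) cost in the number of spatial positions.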

Original language: English
Title of host publication: Machine Learning in Medical Imaging - 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Proceedings
Editors: Heung-Il Suk, Mingxia Liu, Chunfeng Lian, Pingkun Yan
Publisher: Springer Vieweg
Pages: 178-186
Number of pages: 9
ISBN (Print): 9783030326913
DOIs
Publication status: Published - 2019
Event: 10th International Workshop on Machine Learning in Medical Imaging, MLMI 2019, held in conjunction with the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019 - Shenzhen, China
Duration: 13 Oct 2019 - 13 Oct 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11861 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 10th International Workshop on Machine Learning in Medical Imaging, MLMI 2019, held in conjunction with the 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Country/Territory: China
City: Shenzhen
Period: 13/10/19 - 13/10/19

Bibliographical note

Funding Information:
The authors thank the volunteers for assessing images and NVIDIA Corporation for a GPU donation, and acknowledge the ERC (ERC-ADG-2015 694581), the EPSRC (EP/M013774/1, EP/R013853/1), the Royal Academy of Engineering Research Fellowship programme and the NIHR Biomedical Research Centre funding scheme.

Publisher Copyright:
© 2019, Springer Nature Switzerland AG.

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
