Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis

Jianbo Jiao*, Ana I. L. Namburete, Aris T. Papageorghiou, J. Alison Noble

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)


Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening, for which ultrasound (US) is employed. Although expert sonographers are adept at reading US images, MR images, which closely resemble anatomical images, are much easier for non-experts to interpret. Thus, in this article, we propose to generate MR-like images directly from clinical US images. Such a capability is also potentially useful in medical image analysis, for instance for automatic US-MRI registration and fusion. The proposed model is end-to-end trainable and self-supervised, without any external annotations. Specifically, based on the assumption that the US and MRI data share a similar anatomical latent space, we first utilise a network to extract the shared latent features, which are then used for MRI synthesis. Since paired data is unavailable for our study (and rare in practice), pixel-level constraints are infeasible to apply. We instead propose to enforce the distributions to be statistically indistinguishable, by adversarial learning in both the image domain and the feature space. To regularise the anatomical structures between US and MRI during synthesis, we further propose an adversarial structural constraint. A new cross-modal attention technique is proposed to utilise non-local spatial information, by encouraging multi-modal knowledge fusion and propagation. We extend the approach to the case where 3D auxiliary information (e.g., 3D neighbours and a 3D location index) from volumetric data is also available, and show that this improves image synthesis. The proposed approach is evaluated quantitatively and qualitatively against real fetal MR images and other synthesis approaches, demonstrating the feasibility of synthesising realistic MR images.
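As a rough illustration of the cross-modal attention idea described in the abstract (this is not the authors' implementation; all function names, feature shapes, and the residual-fusion choice are assumptions), a non-local attention step that lets ultrasound-branch features attend to MRI-branch features can be sketched with NumPy:

```python
import numpy as np

def cross_modal_attention(us_feat, mri_feat):
    """Illustrative non-local cross-modal attention (a sketch, not the paper's code).

    us_feat:  (N, C) query features from the ultrasound branch
    mri_feat: (M, C) key/value features from the MRI-synthesis branch
    Returns:  (N, C) US features augmented with attended MRI context.
    """
    scale = np.sqrt(us_feat.shape[1])
    # Affinity between every US position and every MRI position.
    logits = us_feat @ mri_feat.T / scale            # (N, M)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax over MRI positions
    context = attn @ mri_feat                        # (N, C) fused MRI context per US position
    # Residual fusion: propagate cross-modal knowledge back into the US features.
    return us_feat + context

rng = np.random.default_rng(0)
us = rng.standard_normal((16, 8))    # 16 US spatial positions, 8 channels
mri = rng.standard_normal((20, 8))   # 20 MRI spatial positions, 8 channels
out = cross_modal_attention(us, mri)
print(out.shape)  # (16, 8)
```

Because every US position attends to all MRI positions, the operation is non-local in the sense used above: spatial context from the whole MRI feature map can inform each synthesised location.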

Original language: English
Article number: 9174648
Pages (from-to): 4413-4424
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Issue number: 12
Early online date: 24 Aug 2020
Publication status: Published - Dec 2020

Bibliographical note

Funding Information:
Manuscript received June 8, 2020; revised August 11, 2020; accepted August 12, 2020. Date of publication August 24, 2020; date of current version November 30, 2020. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) through the Seebibyte Project under Grant EP/M013774/1 and the Computer Assisted Low Cost Point of Care Ultrasound (CALOPUS) Project under Grant EP/R013853/1; in part by the European Research Council (ERC) through the Perception Ultrasound by Learning Sonographic Experience (PULSE) Project under Grant ERC-ADG-2015 694581; and in part by the National Institute for Health Research (NIHR) Biomedical Research Centre Funding Scheme. (Corresponding author: Jianbo Jiao.) Jianbo Jiao, Ana I. L. Namburete, and J. Alison Noble are with the Department of Engineering Science, University of Oxford, Oxford OX1 2JD, U.K.

The authors would like to thank Andrew Zisserman for many helpful discussions, the volunteers for assessing images, and NVIDIA Corporation for the Titan V GPU donation. Ana Namburete is grateful for support from the UK Royal Academy of Engineering under its Engineering for Development Research Fellowships scheme.

Publisher Copyright:
© 1982-2012 IEEE.

Keywords


  • Brain/diagnostic imaging
  • Fetus/diagnostic imaging
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging
  • Neuroimaging
  • Ultrasonography

ASJC Scopus subject areas

  • Software
  • Radiological and Ultrasound Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering


