Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound

Zeyu Fu*, Jianbo Jiao, Robail Yasrab, Lior Drukker, Aris T. Papageorghiou, J. Alison Noble

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment positive/negative pair sampling within a contrastive learning framework. The proposed approach is demonstrated on automated fetal ultrasound imaging tasks: anatomically similar positive pairs, whether drawn from the same or different ultrasound scans, are pulled together, thereby improving the learned representations. We empirically investigate the effect of including anatomy information at coarse- and fine-grained granularity in contrastive learning, and find that learning with fine-grained anatomy information, which preserves intra-class differences, is more effective than its coarse-grained counterpart. We also analyze the impact of the anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs yields better-quality representations. Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach learns representations that transfer well to three clinical downstream tasks and achieves superior performance compared to ImageNet-supervised pre-training and state-of-the-art contrastive learning methods. In particular, AWCL outperforms the ImageNet-supervised method by 13.8% and a state-of-the-art contrastive-based method by 7.1% on a cross-domain segmentation task.

Original language: English
Title of host publication: ECCV 2022: Computer Vision – ECCV 2022 Workshops
Subtitle of host publication: Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part III
Editors: Leonid Karlinsky, Tomer Michaeli, Ko Nishino
Place of Publication: Cham
Publisher: Springer
Pages: 422-436
Number of pages: 15
Volume: 2022
Edition: 1
ISBN (Electronic): 9783031250668
ISBN (Print): 9783031250651
DOIs
Publication status: Published - 18 Feb 2023
Event: 17th European Conference on Computer Vision, ECCV 2022 - Tel Aviv, Israel
Duration: 23 Oct 2022 – 27 Oct 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13803 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th European Conference on Computer Vision, ECCV 2022
Country/Territory: Israel
City: Tel Aviv
Period: 23/10/22 – 27/10/22

Bibliographical note

Funding Information:
Acknowledgement. The authors would like to thank Lok Hin Lee, Richard Droste, Yuan Gao and Harshita Sharma for their help with data preparation. This work is supported by the EPSRC Programme Grants Visual AI (EP/T028572/1) and Seebibyte (EP/M013774/1), the ERC Project PULSE (ERC-ADG-2015 694581), the NIH grant U01AA014809, and the NIHR Oxford Biomedical Research Centre. The NVIDIA Corporation is thanked for a GPU donation.

Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • Contrastive learning
  • Representation learning
  • Ultrasound

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
