Show from Tell: Audio-Visual Modelling in Clinical Settings

Jianbo Jiao*, Mohammad Alsharid, Lior Drukker, Aris T. Papageorghiou, Andrew Zisserman, J. Alison Noble*

*Corresponding author for this work

Research output: Working paper / Preprint

Abstract

Auditory and visual signals usually occur together and correlate with each other, not only in natural environments but also in clinical settings. However, audio-visual modelling in the latter case can be more challenging, owing to the different sources of the audio/video signals and the noise (at both the signal level and the semantic level) in the auditory signal -- usually speech. In this paper, we consider audio-visual modelling in a clinical setting, providing a solution for learning medical representations that benefit various clinical tasks without human expert annotation. A simple yet effective multi-modal self-supervised learning framework is proposed for this purpose. The proposed approach can localise anatomical regions of interest during ultrasound imaging, with only speech audio as a reference. Experimental evaluation on a large-scale clinical multi-modal ultrasound video dataset shows that the proposed self-supervised method learns good transferable anatomical representations that boost the performance of automated downstream clinical tasks, even outperforming fully supervised solutions.
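
The abstract does not specify the training objective; as a rough illustration only, the sketch below shows one common way such self-supervised audio-visual correspondence learning is set up: a symmetric InfoNCE (contrastive) loss that aligns embeddings of ultrasound video clips with embeddings of the co-occurring speech audio. The placeholder encoders, projection dimension, and temperature are assumptions for illustration, not the authors' method.

# Illustrative sketch (not the authors' implementation): symmetric InfoNCE
# alignment between ultrasound-video clip embeddings and speech-audio
# embeddings from the same scanning session. Encoders are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioVisualAligner(nn.Module):
    def __init__(self, video_dim=512, audio_dim=512, proj_dim=128, temperature=0.07):
        super().__init__()
        # Placeholder projections: in practice a video network and a speech/audio
        # network would produce the input features; linear heads stand in here.
        self.video_proj = nn.Linear(video_dim, proj_dim)
        self.audio_proj = nn.Linear(audio_dim, proj_dim)
        self.temperature = temperature

    def forward(self, video_feats, audio_feats):
        # L2-normalise both modalities so dot products are cosine similarities.
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        logits = v @ a.t() / self.temperature          # (B, B) similarity matrix
        targets = torch.arange(v.size(0), device=v.device)
        # Matching video/audio pairs lie on the diagonal; all others are negatives.
        loss_v2a = F.cross_entropy(logits, targets)
        loss_a2v = F.cross_entropy(logits.t(), targets)
        return 0.5 * (loss_v2a + loss_a2v)

# Minimal usage with random features standing in for encoder outputs.
if __name__ == "__main__":
    model = AudioVisualAligner()
    video = torch.randn(8, 512)   # one embedding per ultrasound video clip
    audio = torch.randn(8, 512)   # one embedding per co-occurring speech segment
    loss = model(video, audio)
    print(loss.item())
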
Original language: English
Publisher: arXiv
Pages: 1-12
Number of pages: 12
DOIs
Publication status: Published - 25 Oct 2023

Keywords

  • cs.CV
