Representation Disentanglement for Multi-task Learning with application to Fetal Ultrasound

Qingjie Meng*, Nick Pawlowski, Daniel Rueckert, Bernhard Kainz

*Corresponding author for this work

Research output: Working paper/Preprint


Abstract

One of the biggest challenges for deep learning algorithms in medical image analysis is the indiscriminate mixing of image properties, e.g. artifacts and anatomy. These entangled image properties lead to a semantically redundant feature encoding for the relevant task and thus to poor generalization of deep learning algorithms. In this paper we propose a novel representation disentanglement method to extract semantically meaningful and generalizable features for different tasks within a multi-task learning framework. Deep neural networks are utilized to ensure that the encoded features are maximally informative with respect to relevant tasks, while an adversarial regularization encourages these features to be disentangled and minimally informative about irrelevant tasks. We aim to use the disentangled representations to broaden the applicability of deep neural networks. We demonstrate the advantages of the proposed method on synthetic data as well as fetal ultrasound images. Our experiments illustrate that our method is capable of learning disentangled internal representations. It outperforms baseline methods in multiple tasks, especially on images with new properties, e.g. previously unseen artifacts in fetal ultrasound.
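The objective described in the abstract — each feature subset should be maximally informative about its own task and minimally informative about the other — can be sketched as a combined loss. This is a conceptual illustration only, not the authors' implementation: the function and parameter names (`disentanglement_loss`, `W_task_a`, `W_adv_b`, `lam`) are hypothetical, linear classifiers stand in for the deep networks, and a real system would train the encoder and the adversarial classifiers by alternating optimization (or a gradient-reversal layer).

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # mean negative log-likelihood of the true labels
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def disentanglement_loss(z_a, z_b, W_task_a, W_task_b, W_adv_a, W_adv_b,
                         y_a, y_b, lam=1.0):
    """Conceptual combined objective (hypothetical names/shapes):
    z_a, z_b  -- feature subsets for tasks A and B, shape (batch, dim)
    W_task_*  -- linear heads predicting each task from its own features
    W_adv_*   -- adversarial heads predicting a task from the *other* features
    lam       -- weight of the adversarial regularization
    """
    # task terms: each feature subset should predict its own task well
    task = cross_entropy(z_a @ W_task_a, y_a) + cross_entropy(z_b @ W_task_b, y_b)
    # adversarial terms: the encoder is rewarded when the adversary
    # FAILS to read the irrelevant task from each subset, hence the minus sign
    adv = cross_entropy(z_a @ W_adv_b, y_b) + cross_entropy(z_b @ W_adv_a, y_a)
    return task - lam * adv
```

Minimizing this with respect to the encoder (while the adversarial heads are trained to minimize their own cross-entropy) pushes the two feature subsets toward disentanglement; setting `lam=0` recovers plain multi-task learning.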
Original language: English
Publisher: arXiv
Pages: 1-10
Number of pages: 10
DOIs
Publication status: Published - 21 Aug 2019

Keywords

  • cs.LG
  • eess.IV
  • stat.ML

