Interpretable modelling and visualization of biomedical data

S. Ghosh*, E.S. Baranowski, M. Biehl, W. Arlt, P. Tiňo, K. Bunte

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Applications of interpretable machine learning (ML) techniques to medical datasets facilitate early and fast diagnoses and provide deeper insight into the data. Furthermore, the transparency of these models increases trust among application domain experts. Medical datasets commonly suffer from heterogeneous measurements, imbalanced classes with limited sample sizes, and missing data, all of which hinder the straightforward application of ML techniques. In this paper we present a family of prototype-based (PB) interpretable models capable of handling these issues. Moreover, we propose a strategy for harnessing the power of ensembles while maintaining the intrinsic interpretability of the PB models, by averaging over the model parameter manifolds. All models were evaluated on a synthetic, publicly available dataset, in addition to detailed analyses of two real-world medical datasets (one publicly available). The models and strategies we introduce address the challenges of real-world medical data while remaining computationally inexpensive and transparent. Moreover, they exhibit similar or superior performance compared to alternative techniques.
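The abstract does not specify which prototype-based models the paper uses, but the general idea can be illustrated with a classic member of this family, learning vector quantization (LVQ1): each class is represented by one or more prototypes in the data space, prototypes are attracted toward samples of their own class and repelled from others, and a new sample is classified by its nearest prototype. The sketch below is a minimal, hypothetical illustration of that principle, not the paper's actual method; all function and variable names are assumptions.

```python
import random

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """Minimal LVQ1 sketch: move the winning prototype toward a sample
    of its own class and away from samples of other classes."""
    rng = random.Random(seed)
    order = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(order)
        for i in order:
            x, label = X[i], y[i]
            # winner = closest prototype (squared Euclidean distance)
            k = min(range(len(prototypes)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(x, prototypes[j])))
            sign = 1.0 if proto_labels[k] == label else -1.0
            prototypes[k] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[k], x)]
    return prototypes

def predict(x, prototypes, proto_labels):
    """Classify by the label of the nearest prototype."""
    k = min(range(len(prototypes)),
            key=lambda j: sum((a - b) ** 2
                              for a, b in zip(x, prototypes[j])))
    return proto_labels[k]

# toy two-class data, one prototype per class (hypothetical example)
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]
protos = lvq1_train(X, y, [[0.1, 0.1], [0.8, 0.9]], [0, 1])
print(predict([0.05, 0.05], protos, [0, 1]))  # → 0
print(predict([0.95, 1.00], protos, [0, 1]))  # → 1
```

The interpretability stems from the prototypes themselves: each is a point in the original measurement space and can be inspected by a domain expert as a representative of its class.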
Original language: English
Article number: 129405
Number of pages: 19
Journal: Neurocomputing
Volume: 626
Early online date: 30 Jan 2025
DOIs
Publication status: Published - 14 Apr 2025
