Abstract
Applications of interpretable machine learning (ML) techniques to medical datasets facilitate early and fast diagnoses, along with deeper insight into the data. Furthermore, the transparency of these models increases trust among application domain experts. Medical datasets face common issues such as heterogeneous measurements, imbalanced classes with limited sample size, and missing data, which hinder the straightforward application of ML techniques. In this paper, we present a family of prototype-based (PB) interpretable models which are capable of handling these issues. Moreover, we propose a strategy for harnessing the power of ensembles while maintaining the intrinsic interpretability of the PB models, by averaging over the model parameter manifolds. All the models were evaluated on a synthetic (publicly available) dataset, in addition to detailed analyses of two real-world medical datasets (one publicly available). The models and strategies we introduce address the challenges of real-world medical data, while remaining computationally inexpensive and transparent. Moreover, they exhibit similar or superior performance compared to alternative techniques.
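To illustrate the ensemble strategy the abstract describes — combining prototype-based models by averaging their parameters rather than their predictions — the sketch below trains several simple LVQ1-style nearest-prototype classifiers on bootstrap resamples and averages the prototype vectors class-wise. This is a minimal, hypothetical illustration of the general idea, not the paper's actual algorithm or datasets; all names and settings here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_lvq1(X, y, lr=0.05, epochs=30):
    """One prototype per class, updated with the basic LVQ1 rule:
    pull the winning prototype toward correctly classified points,
    push it away from misclassified ones."""
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            j = int(np.argmin(d))                 # winning prototype
            sign = 1.0 if classes[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return classes, protos

def predict(classes, protos, X):
    # Assign each point to the class of its nearest prototype.
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Two synthetic Gaussian classes (stand-in for a medical dataset).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train an ensemble on bootstrap resamples, then average the
# prototypes class-wise: the result is again a single, directly
# interpretable prototype-based model.
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(train_lvq1(X[idx], y[idx]))

classes = ensemble[0][0]
avg_protos = np.mean([p for _, p in ensemble], axis=0)

acc = float((predict(classes, avg_protos, X) == y).mean())
print(f"averaged-prototype accuracy: {acc:.2f}")
```

Because the averaged parameters are still just one prototype per class in the original feature space, the combined model remains as inspectable as any single ensemble member — which is the point of averaging parameters instead of predictions. (Averaging on a curved parameter manifold, as the paper proposes, would replace this plain Euclidean mean with a manifold-aware one.)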
| Original language | English |
|---|---|
| Article number | 129405 |
| Number of pages | 19 |
| Journal | Neurocomputing |
| Volume | 626 |
| Early online date | 30 Jan 2025 |
| DOIs | |
| Publication status | Published - 14 Apr 2025 |
Fingerprint
Dive into the research topics of 'Interpretable modelling and visualization of biomedical data'. Together they form a unique fingerprint.

Projects
- 1 Finished
- H2020_MSCA-IFEF_LESODYMAS
  Tino, P. (Principal Investigator) & Arlt, W. (Co-Investigator)
  13/07/15 → 12/07/17
  Project: EU