TY - GEN
T1 - Compressive Mahalanobis Metric Learning Adapts to Intrinsic Dimension
AU - Palias, Efstratios
AU - Kabán, Ata
PY - 2024/9/9
Y1 - 2024/9/9
N2 - Metric learning aims to find a suitable distance metric over the input space in order to improve the performance of distance-based learning algorithms. In high-dimensional settings, it can also serve as a form of dimensionality reduction by imposing a low-rank restriction on the learnt metric. In this paper, we consider the problem of learning a Mahalanobis metric and, instead of training a low-rank metric on high-dimensional data, we use a randomly compressed version of the data to train a full-rank metric in the reduced feature space. We give theoretical guarantees on the error of Mahalanobis metric learning that depend on the stable dimension of the data support, but not on the ambient dimension. Our bounds make no assumptions beyond i.i.d. data sampling from a bounded support, and they automatically tighten when benign geometrical structures are present. An important ingredient is an extension of Gordon’s theorem, which may be of independent interest. We also corroborate our findings with numerical experiments.
KW - Mahalanobis metric learning
KW - generalisation analysis
KW - random projection
KW - intrinsic dimension
UR - https://ieeexplore.ieee.org/xpl/conhome/1000500/all-proceedings
UR - https://2024.ieeewcci.org/
U2 - 10.1109/IJCNN60899.2024.10649958
DO - 10.1109/IJCNN60899.2024.10649958
M3 - Conference contribution
SN - 9798350359329 (PoD)
T3 - Proceedings of International Joint Conference on Neural Networks
BT - 2024 International Joint Conference on Neural Networks (IJCNN)
PB - IEEE
T2 - 2024 IEEE World Congress on Computational Intelligence
Y2 - 30 June 2024 through 5 July 2024
ER -