Abstract
Prototype-based classifiers are effective algorithms for modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended with kernels to improve their discriminative power, prototype-based classifiers are typically still used with Euclidean distance measures, and existing kernelized variants are too complex to apply to larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) that employs sparsity and approximation techniques to reduce the learning complexity. We provide generalization error bounds and experimental results on real-world data, showing that the extended approach is comparable to SVM on several public data sets.
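To make the underlying learning scheme concrete, the following is a minimal sketch of plain (unkernelized) Generalized Learning Vector Quantization, the base algorithm that the abstract's KGLVQ extends with a kernel and sparse approximation. All function and variable names here (`glvq_train`, `glvq_predict`, `lr`, `epochs`) are illustrative choices, not from the paper, and the sketch uses the identity cost on the relative distance mu rather than any particular sigmoid scaling.

```python
def _sqdist(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def glvq_train(X, y, prototypes, proto_labels, lr=0.1, epochs=50):
    """Train GLVQ prototypes in place (hypothetical minimal variant).

    For each sample, the nearest same-class prototype (distance d_plus)
    is attracted and the nearest other-class prototype (distance d_minus)
    is repelled, weighted by the gradient factors of the relative-distance
    cost mu = (d_plus - d_minus) / (d_plus + d_minus).
    """
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Winning same-class and other-class prototypes.
            d_plus, i_plus = min(
                (_sqdist(x, w), i) for i, w in enumerate(prototypes)
                if proto_labels[i] == label)
            d_minus, i_minus = min(
                (_sqdist(x, w), i) for i, w in enumerate(prototypes)
                if proto_labels[i] != label)
            denom = (d_plus + d_minus) ** 2 or 1.0  # guard against 0/0
            g_plus = d_minus / denom   # d(mu)/d(d_plus) magnitude
            g_minus = d_plus / denom   # d(mu)/d(d_minus) magnitude
            w_p, w_m = prototypes[i_plus], prototypes[i_minus]
            for j in range(len(x)):
                w_p[j] += lr * g_plus * (x[j] - w_p[j])   # attract
                w_m[j] -= lr * g_minus * (x[j] - w_m[j])  # repel
    return prototypes

def glvq_predict(x, prototypes, proto_labels):
    # Classify by the label of the nearest prototype.
    _, i = min((_sqdist(x, w), i) for i, w in enumerate(prototypes))
    return proto_labels[i]
```

The kernelized variant discussed in the paper replaces the Euclidean distance with a distance computed in a kernel-induced feature space, which is what makes sparsity and approximation necessary for larger data sets.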
Original language | English
---|---
Pages (from-to) | 443-457
Number of pages | 15
Journal | International Journal of Neural Systems
Volume | 21
Issue number | 6
DOIs |
Publication status | Published - 1 Dec 2011
Keywords
- vector quantization
- interpretable models
- prototype learning
- classification
- kernel