Oversampling the minority class in the feature space

Research output: Contribution to journal › Article › peer-review

Authors

  • Maria Perez-Ortiz
  • Pedro Antonio Gutiérrez
  • Peter Tino
  • César Hervás-Martínez

External organisations

  • University of Córdoba

Abstract

The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combinations of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to the input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie within the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS), a Euclidean space isomorphic to the feature space, for oversampling purposes. The proposed method is framed in the context of support vector machines, for which imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation, to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
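The core idea above can be sketched briefly: embed the training patterns into the empirical feature space via an eigendecomposition of the kernel matrix, then generate synthetic minority patterns as convex combinations of existing minority patterns in that space. The following is a minimal NumPy sketch, not the authors' implementation; the function names, the RBF kernel, and its bandwidth are illustrative assumptions.

```python
import numpy as np

def efs_embedding(K):
    """Map patterns into the empirical feature space (EFS) via the
    eigendecomposition of the symmetric PSD kernel matrix K (n x n).
    Rows of the returned matrix Phi are EFS coordinates: K == Phi @ Phi.T."""
    vals, vecs = np.linalg.eigh(K)
    vals = np.clip(vals, 0.0, None)   # guard against tiny negative eigenvalues
    return vecs * np.sqrt(vals)       # Phi, shape (n, n)

def oversample_minority(Phi, minority_idx, n_new, rng=None):
    """Generate n_new synthetic minority patterns as convex combinations
    of random pairs of minority patterns, taken in the EFS."""
    rng = np.random.default_rng(rng)
    pairs = rng.choice(minority_idx, size=(n_new, 2))
    lam = rng.random((n_new, 1))      # convex-combination coefficients in [0, 1)
    return lam * Phi[pairs[:, 0]] + (1 - lam) * Phi[pairs[:, 1]]

# Toy usage: 20 majority and 4 minority patterns, RBF kernel (bandwidth assumed)
X = np.vstack([np.random.default_rng(0).normal(0, 1, (20, 2)),
               np.random.default_rng(1).normal(3, 1, (4, 2))])
sq_dists = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)           # RBF kernel matrix
Phi = efs_embedding(K)
synth = oversample_minority(Phi, np.arange(20, 24), n_new=16, rng=42)
```

Because the EFS preserves the kernel-induced geometry (inner products of EFS coordinates reproduce the kernel matrix), convex combinations taken there correspond to convex combinations in the feature space, which is what motivates oversampling in the EFS rather than in the input space.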

Details

Original language: English
Pages (from-to): 1947-1961
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 27
Issue number: 9
Early online date: 25 Aug 2015
Publication status: Published - Sep 2016

Keywords

  • Algorithm design and analysis
  • Kernel
  • Support vector machines
  • Training
  • Symmetric matrices
  • Learning systems
  • Eigenvalues and eigenfunctions