Structure from Randomness in Halfspace Learning with the Zero-One Loss

Ata Kaban, Robert J. Durrant

Research output: Contribution to journal › Article › peer-review


Abstract

We prove risk bounds for halfspace learning when the data dimensionality is allowed to exceed the sample size, using a notion of compressibility by random projection. In particular, we give upper bounds for the empirical risk minimizer learned efficiently from randomly projected data, as well as uniform upper bounds in the full high-dimensional space. Our main findings are the following: i) In both settings, the bounds discover and exploit benign geometric structure, which turns out to depend on the cosine similarities between the classifier and points of the input space, and they provide a new interpretation of margin-distribution-type arguments. ii) Our bounds also allow us to draw new connections between several existing successful classification algorithms, and we demonstrate that our theory is predictive of empirically observed performance in numerical simulations and experiments. iii) Taken together, these results suggest that the study of compressive learning can improve our understanding of which benign structural traits, if possessed by the data generator, make it easier to learn an effective classifier from a sample.
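The setting described in the abstract can be sketched numerically. The following toy simulation is an illustrative assumption throughout, not the paper's actual procedure: the dimensions, the Gaussian projection, and the min-norm least-squares fit (standing in for the zero-one-loss empirical risk minimizer) are all hypothetical choices. It projects high-dimensional halfspace-labelled data to k dimensions, learns a classifier in the compressed space, and computes the cosine similarities between the true separator and the input points, the geometric quantity the bounds are said to depend on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-learning setup (parameters are illustrative, not from the paper):
# dimensionality d exceeds the sample size n; labels come from a true halfspace.
n, d, k = 50, 2000, 200
w_star = rng.standard_normal(d)          # unknown target separator
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)                  # labels in {-1, +1}

# Gaussian random projection R: d -> k, scaled so norms are preserved in expectation.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Xp = X @ R

# Learn a halfspace efficiently in the compressed space. A min-norm least-squares
# fit is used here as a simple stand-in for the empirical risk minimizer.
w = np.linalg.lstsq(Xp, y, rcond=None)[0]
train_acc = float(np.mean(np.sign(Xp @ w) == y))

# Cosine similarity between the true classifier and each input point:
# points with larger |cos| are geometrically "easier" for the bounds.
cos = (X @ w_star) / (np.linalg.norm(X, axis=1) * np.linalg.norm(w_star))
print(train_acc, float(np.mean(np.abs(cos))))
```

Since n < k here, the projected sample is linearly separable almost surely and the compressed-space fit attains zero empirical error, while the mean absolute cosine similarity stays small for isotropic Gaussian data, consistent with the intuition that benign structure, when present, is what makes learning easier.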
Original language: English
Number of pages: 32
Journal: Journal of Artificial Intelligence Research
Publication status: Accepted/In press - 15 Sep 2020

