Compressive Learning of Multi-layer Perceptrons: An Error Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We consider the class of 2-layer feed-forward neural networks with sigmoidal activations – one of the oldest blackbox learning machines – and ask the question: Under what conditions can it provably learn from a random linear projection of the data? Due to the rapid increase in the dimensionality of modern data sets, and the development of novel data acquisition techniques in the area of compressed sensing, an answer to this question is of both practical and theoretical relevance. Part of this question has previously been addressed in the literature: a high-probability bound has been given on the absolute difference between the outputs of the network on the sample before and after random projection – provided that the target dimension is at least Ω(M²(log MN)), where M is the size of the hidden layer and N is the number of training points. By contrast, in this paper we show that a target dimension independent of both N and M suffices to ensure good generalisation when learning the network on randomly projected data. We do not require a sparse representation of the data; instead, our target dimension bound depends on the regularity of the problem expressed as norms of the weights. These are uncovered in our analysis through the use of random projection, which plays a regularisation role on the input-layer weights.
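To make the setting concrete, the following is a minimal sketch (not the authors' method or experiments) of compressive learning as described above: the data are passed through a random Gaussian projection R, and a 2-layer network with sigmoidal hidden units is then trained on the k-dimensional projected points rather than the original d-dimensional ones. The dimensions d, k, M, N, the synthetic data, and the plain gradient-descent training loop are all illustrative assumptions.

```python
# Illustrative sketch of learning a 2-layer sigmoidal MLP from randomly
# projected data. All sizes and data below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

d, k, M, N = 1000, 50, 20, 500                      # ambient dim, target dim, hidden units, sample size
X = rng.standard_normal((N, d))                     # training inputs (no sparsity assumed)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # toy binary labels

# Random linear projection with i.i.d. N(0, 1/k) entries, so norms are roughly preserved.
R = rng.standard_normal((k, d)) / np.sqrt(k)
X_proj = X @ R.T                                    # the learner only sees R x, not x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-layer feed-forward network with sigmoidal hidden activations,
# trained by batch gradient descent on squared error over the projected data.
W1 = rng.standard_normal((M, k)) * 0.1              # input-layer weights (act on projected inputs)
w2 = rng.standard_normal(M) * 0.1                   # output-layer weights

lr = 0.1
for _ in range(2000):
    H = sigmoid(X_proj @ W1.T)                      # hidden activations, shape (N, M)
    out = H @ w2                                    # network outputs
    err = out - y
    grad_w2 = H.T @ err / N
    grad_W1 = ((err[:, None] * w2) * H * (1 - H)).T @ X_proj / N
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

print("training MSE on projected data:", np.mean((sigmoid(X_proj @ W1.T) @ w2 - y) ** 2))
```

The point of the paper's analysis is that the target dimension k needed for such a learner to generalise well can be chosen independently of both M and N, depending instead on norms of the network weights.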

Details

Original language: English
Title of host publication: Proceedings of 2019 International Joint Conference on Neural Networks (IJCNN)
Publication status: Accepted/In press - 7 Mar 2019
Event: International Joint Conference on Neural Networks (IJCNN 2019) - Budapest, Hungary
Duration: 14 Jul 2019 - 19 Jul 2019

Conference

Conference: International Joint Conference on Neural Networks (IJCNN 2019)
Country: Hungary
City: Budapest
Period: 14/07/19 - 19/07/19

Keywords

  • Error analysis
  • Random projection
  • Multi-layer perceptron