Abstract
Recursive least square (RLS) is an efficient approach to neural network training. However, the classical RLS algorithm has no explicit decay term in its energy function, which leads to unsatisfactory generalization ability in the trained networks. In this paper, we propose a generalized RLS (GRLS) model that includes a general decay term in the energy function for the training of feedforward neural networks. In particular, four different weight decay functions are discussed: the quadratic weight decay, the constant weight decay, and the newly proposed multimodal and quartic weight decay. With the GRLS approach, not only is the generalization ability of the trained networks significantly improved, but more unnecessary weights are pruned, yielding a compact network. Furthermore, the computational complexity of GRLS remains the same as that of the standard RLS algorithm. The advantages and tradeoffs of the different decay functions are analyzed and then demonstrated with examples. Simulation results show that our approach meets both design goals: improving the generalization ability of the trained network while obtaining a compact network.
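The abstract does not reproduce the GRLS update equations. The sketch below is only a minimal illustration of the underlying idea, combining a standard RLS weight update with a quadratic weight-decay (shrinkage) step for a single linear output unit. The function name `rls_with_quadratic_decay` and the parameters `lam`, `delta`, and `alpha` are hypothetical and not the paper's notation, and the actual GRLS update rules derived in the paper differ from this simplification.

```python
import numpy as np

def rls_with_quadratic_decay(X, d, lam=0.99, delta=100.0, alpha=1e-3):
    """Standard RLS for a single linear unit, with an illustrative
    quadratic weight-decay (shrinkage) step appended to each update.

    X     : (n_samples, n_features) input vectors
    d     : (n_samples,) desired outputs
    lam   : forgetting factor
    delta : initial scale of the inverse-correlation matrix
    alpha : decay strength (hypothetical; not the paper's notation)
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = delta * np.eye(n_features)

    for x, target in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        e = target - w @ x               # a priori error
        w = w + k * e                    # RLS weight update
        P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
        w = w - alpha * (P @ w)          # quadratic-decay shrinkage (illustrative only)

    return w

# Toy usage: recover y = 2*x1 - x2 from noisy samples
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
d = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=200)
print(rls_with_quadratic_decay(X, d))
```

In this toy setting the recovered weights stay close to the true values while being slightly shrunk toward zero; the paper's quadratic, constant, multimodal, and quartic decay functions generalize this shrinkage step in different ways.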
Original language | English |
---|---|
Pages (from-to) | 19-34 |
Number of pages | 16 |
Journal | IEEE Transactions on Neural Networks |
Volume | 17 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1 Jan 2006 |
Keywords
- recursive least square (RLS) algorithm
- neural network
- extended Kalman filtering (EKF)
- weight decay