TY - JOUR
T1 - Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons
AU - Tino, Peter
AU - Mills, A.
PY - 2006/3/1
Y1 - 2006/3/1
N2 - We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information-processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from a trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in an RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
UR - http://www.scopus.com/inward/record.url?scp=33645701621&partnerID=8YFLogxK
DO - 10.1162/neco.2006.18.3.591
M3 - Article
C2 - 16483409
SN - 1530-888X
VL - 18
SP - 591
EP - 613
JO - Neural Computation
JF - Neural Computation
IS - 3
ER -