Abstract
We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. Using geometrical arguments in the space of transfer function derivatives, we partition the network state space into distinct regions corresponding to stability types of the fixed points. Unlike previous studies, we assume no special form of connectivity between the neurons, and all free parameters are allowed to vary. We also prove that when both neurons have excitatory self-connections and the mutual interactions are of the same sign (i.e., the neurons either mutually excite or mutually inhibit each other), new attractive fixed points are created through a saddle-node bifurcation. Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of the neuron activations as the absolute values of the connection weights grow.
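To give a concrete feel for the class of systems analysed, the following Python sketch iterates an assumed two-neuron discrete-time map x_{t+1} = σ(W x_t + b) with a logistic sigmoid, locates a fixed point by direct iteration, and tests its attractivity via the spectral radius of the Jacobian. The update rule, the weight matrix `W`, and the biases `b` are illustrative assumptions chosen to match the regime the abstract describes (excitatory self-connections, mutually excitatory cross-connections), not the paper's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# Illustrative parameters (assumptions, not taken from the paper):
# excitatory self-connections (positive diagonal) and mutually
# excitatory cross-connections (positive off-diagonal).
W = np.array([[6.0, 1.0],
              [1.0, 6.0]])
b = np.array([-3.0, -3.0])

def step(x):
    """One update of the assumed two-neuron map x_{t+1} = sigmoid(W x_t + b)."""
    return sigmoid(W @ x + b)

# Locate a fixed point by direct iteration from an initial state.
x = np.array([0.9, 0.9])
for _ in range(1000):
    x = step(x)

# Stability test: the fixed point is attractive when the spectral radius
# of the Jacobian J = diag(sigmoid'(W x + b)) @ W is below 1.
J = np.diag(sigmoid_prime(W @ x + b)) @ W
rho = max(abs(np.linalg.eigvals(J)))
print(f"fixed point ~ {x}, spectral radius {rho:.3f} "
      f"({'attractive' if rho < 1 else 'not attractive'})")
```

With these assumed weights, the iteration settles near a saturated state around (0.98, 0.98) with spectral radius well below 1, consistent with the abstract's observation that attractive fixed points approach the saturation values as the connection weights grow in magnitude.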
| Field | Value |
| --- | --- |
| Original language | English |
| Pages (from-to) | 1379-1414 |
| Number of pages | 36 |
| Journal | Neural Computation |
| Volume | 13 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 1 Jun 2001 |