Input-to-state representation in linear reservoirs dynamics

Pietro Verzelli, Cesare Alippi, Lorenzo Livi, Peter Tino

Research output: Contribution to journal › Article › peer-review


Abstract

Reservoir computing is a popular approach to designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical study by a large community of researchers with backgrounds ranging from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. This matrix encodes salient characteristics of the network dynamics; in particular, its rank provides an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and to explain why a cyclic topology achieves the favorable results reported by practitioners.
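The abstract's central object can be illustrated concretely. For a linear reservoir with state update x(t+1) = W x(t) + w_in u(t), the controllability matrix stacks the columns [w_in, W w_in, W² w_in, …], and its rank counts how many linearly independent directions past inputs can reach, an input-independent proxy for memory capacity. The sketch below is illustrative only (the symbols `W`, `w_in`, and the specific matrices are assumptions, not taken from the paper); it contrasts a cyclic (shift) reservoir, which is generically full rank, with a degenerate scaled-identity reservoir, whose controllability matrix has rank 1.

```python
import numpy as np

def controllability_matrix(W, w_in):
    """Build [w_in, W w_in, W^2 w_in, ...] column-wise (N columns for an N-unit reservoir)."""
    N = W.shape[0]
    cols = [w_in]
    for _ in range(N - 1):
        cols.append(W @ cols[-1])
    return np.column_stack(cols)

N = 8
rng = np.random.default_rng(0)
w_in = rng.standard_normal(N)  # generic (random) input weights

# Cyclic reservoir: a scaled cyclic-shift (permutation) matrix.
W_cyclic = 0.9 * np.roll(np.eye(N), 1, axis=0)
C_cyclic = controllability_matrix(W_cyclic, w_in)
print("cyclic rank:", np.linalg.matrix_rank(C_cyclic))  # generically full rank N

# Degenerate reservoir: scaled identity; every column of C is proportional to w_in.
W_ident = 0.9 * np.eye(N)
C_ident = controllability_matrix(W_ident, w_in)
print("identity rank:", np.linalg.matrix_rank(C_ident))  # rank 1
```

Under this reading, the cyclic topology's good empirical memory comes from its controllability matrix retaining full rank for generic input weights, whereas structures like the scaled identity collapse all reachable history onto a single direction.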

Original language: English
Journal: IEEE Transactions on Neural Networks and Learning Systems
Early online date: 2 Mar 2021
DOIs
Publication status: E-pub ahead of print - 2 Mar 2021

Keywords

  • Dynamical systems
  • echo state networks (ESNs)
  • recurrent neural networks (RNNs)
  • reservoir computing
