
Section: New Software and Platforms


Keywords: Recurrent network - Artificial intelligence - Reservoir Computing - Multi-label classification - Timeseries Prediction - Time Series - Machine learning - Classification

Functional Description: This toolbox provides a class of Echo State Networks that can be used with Python and its scientific libraries such as NumPy, SciPy and Matplotlib. It includes useful expertise for training recurrent neural networks of the ESN architecture kind.

An ESN is a particular kind of recurrent neural network (RNN), with or without leaky neurons. The input stream is projected into a random recurrent layer, and only a linear output layer (called the "read-out") is modified by learning (which can also be done in an online fashion).
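The state update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the toolbox's actual API: the weight ranges, network size and leak rate are arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

n_inputs, n_reservoir = 1, 100
leak_rate = 0.3  # leaky-integrator neurons; set to 1.0 for non-leaky units

# Random, untrained input and recurrent weights (illustrative initialisation)
W_in = rng.uniform(-1, 1, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))

def update_state(x, u):
    """One leaky-integrator ESN step: blend the previous state
    with the new nonlinear activation of input + recurrence."""
    pre_activation = W_in @ u + W @ x
    return (1 - leak_rate) * x + leak_rate * np.tanh(pre_activation)

# Drive the reservoir with a toy input stream
x = np.zeros(n_reservoir)
for t in range(10):
    u = np.array([np.sin(t / 5)])
    x = update_state(x, u)
```

Only the read-out that maps `x` to the desired output is trained; `W_in` and `W` stay fixed after initialisation.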

Compared to other RNNs, the input layer and the recurrent layer (called the "reservoir") do not need to be trained. In most other RNNs, the recurrent weights are adapted iteratively by gradient descent algorithms such as Backpropagation-Through-Time, which is not biologically plausible, so that the network can hold a representation of the input sequence. In contrast, the random weights of the ESN's reservoir are not trained but adapted to possess the "Echo State Property" (ESP), or at least suitable dynamics (e.g. at the "edge of chaos") to generalize; the reservoir then provides a non-linear transformation of the input that can be learned by a linear classifier.

This adaptation is done by scaling the recurrent weights based on their maximum absolute eigenvalue (also called the spectral radius), a hyperparameter that is specific to the task. The resulting reservoir states can be mapped to the output layer by a computationally cheap linear regression, as no gradient descent is necessary. The weights of the input layer can be scaled by the input scaling hyperparameter, which also depends on the nature of the inputs.
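The two steps above, scaling the reservoir to a chosen spectral radius and fitting the read-out with a single linear solve, can be sketched as follows. This is an illustrative ridge-regression variant on a toy next-step prediction task; the hyperparameter values and the sine-wave task are assumptions for the example, not defaults of the toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200
spectral_radius = 0.9  # task-dependent hyperparameter
input_scaling = 1.0    # likewise depends on the nature of the inputs
leak_rate = 0.3

W_in = input_scaling * rng.uniform(-1, 1, (n_reservoir, n_inputs))
W = rng.uniform(-1, 1, (n_reservoir, n_reservoir))
# Rescale so the maximum absolute eigenvalue equals the target spectral radius
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

def harvest_states(inputs):
    """Run the reservoir over the input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = (1 - leak_rate) * x + leak_rate * np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next value of a sine wave
series = np.sin(np.arange(300) / 10).reshape(-1, 1)
X = harvest_states(series[:-1])   # reservoir states
Y = series[1:]                    # one-step-ahead targets

# Read-out trained by ridge regression: one linear solve, no gradient descent
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y).T
predictions = X @ W_out.T
```

The expensive part is a single matrix solve over the collected states, which is what makes ESN training cheap compared with backpropagation through time.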