Echo State Networks (ESNs) are easy-to-train recurrent neural networks, a variant of Reservoir Computing.
The benefit of ESNs is that they are not tuned using back-propagation, and as such they offer many interesting opportunities. Geoffrey Hinton introduces Echo State Networks in this video, which is worth watching.
The ByteSumo implementation is a fork of the original pyESN code that implements the Nature Scientific Reports paper: Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere.
(Many thanks to Pietro Verzelli for discussions and help in explaining things to me).
This ESN implementation has a new activation function that helps to stabilise the ESN on the "Edge of Chaos" across the spectral radius range. It's a beta implementation, and it seems to work well. If you spot ways to improve my code, or create interesting example notebooks using this code, please open a pull request and I'll update the repo.
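For intuition, here is a minimal sketch of the self-normalising idea from the paper: the reservoir's pre-activation is rescaled so the state always lies on a hyper-sphere of fixed radius, which keeps the state norm from exploding or vanishing as the spectral radius grows. The function and variable names below are illustrative, not the repo's actual code.

```python
import numpy as np

def hypersphere_activation(pre_activation, sphere_radius=1.0):
    """Sketch of a self-normalising activation: project the reservoir
    state onto a hyper-sphere of fixed radius, so its norm stays
    constant regardless of the spectral radius."""
    norm = np.linalg.norm(pre_activation)
    if norm == 0.0:
        return pre_activation                     # avoid division by zero
    return sphere_radius * pre_activation / norm

# One illustrative reservoir update step (W, W_in, x, u are assumed names):
#   x_next = hypersphere_activation(W @ x + W_in @ u, sphere_radius)
```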
Examples of training ESNs with HyperSphere activations:
Predicting Mackey Glass - testing ESNs having HyperSphere Activations
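If you want to reproduce the benchmark data outside the notebook, the sketch below generates a Mackey-Glass series by simple Euler integration of the standard delay differential equation. The parameter values and the `mackey_glass` helper name are conventional choices, not code taken from the repo.

```python
import numpy as np

def mackey_glass(n_samples=10000, tau=17, beta=0.2, gamma=0.1, n=10,
                 dt=1.0, x0=1.2, discard=500):
    """Generate a Mackey-Glass series by Euler integration of
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    history_len = int(round(tau / dt)) + 1
    x = np.full(history_len, x0)                  # constant initial history
    series = []
    for _ in range(n_samples + discard):
        x_tau = x[0]                              # delayed value x(t - tau)
        x_new = x[-1] + dt * (beta * x_tau / (1 + x_tau**n) - gamma * x[-1])
        x = np.append(x[1:], x_new)
        series.append(x_new)
    return np.array(series[discard:])             # drop the initial transient
```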
The new hypersphere activation functions make practical hyperparameter searching for Echo State Networks feasible, and the example code linked below shows how to do it using DEAP. In practice, genetic search works very well.
Genetic Tuning of an ESN on the HyperSphere, via DEAP
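The notebook linked above has the full workflow; as a rough outline of the approach, the sketch below sets up a DEAP toolbox that evolves a few ESN hyperparameters to minimise test MSE. The search ranges and the `train_and_score_esn` helper are hypothetical placeholders, not part of the repo.

```python
import random
from deap import base, creator, tools

def evaluate(individual):
    """Train an ESN with the candidate hyperparameters and return its
    test MSE. train_and_score_esn is an assumed helper, not repo code."""
    spectral_radius, sparsity, sphere_radius = individual
    mse = train_and_score_esn(spectral_radius=spectral_radius,
                              sparsity=sparsity,
                              sphere_radius=sphere_radius)
    return (mse,)                                 # DEAP expects a tuple

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("spectral_radius", random.uniform, 0.5, 2.0)
toolbox.register("sparsity", random.uniform, 0.1, 0.95)
toolbox.register("sphere_radius", random.uniform, 1.0, 50.0)
toolbox.register("individual", tools.initCycle, creator.Individual,
                 (toolbox.spectral_radius, toolbox.sparsity,
                  toolbox.sphere_radius), n=1)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=0.2, indpb=0.3)
toolbox.register("select", tools.selTournament, tournsize=3)

# A standard evolutionary loop (e.g. deap.algorithms.eaSimple) would then
# evolve the population towards low-MSE hyperparameter settings.
```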
One of our recent DEAP runs produced tuned settings that achieve a very low MSE, much better than the result shown in the screenshot. Try them out as an example:
MSE: 0.000290172577304054
# Tuned Parameters:
n_reservoir = 1000
projection = 1
noise = 0
rectifier = 1
steepness = 2
sparsity = 0.7686812449454254
sphere_radius = 35.86520316391459
teacher_forcing = True
random_state = 174
spectral_radius = 1.3472585851237922
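To use these tuned settings, you would pass them to the fork's ESN class. The sketch below assumes a pyESN-style constructor that accepts the fork's extra keyword arguments (sphere_radius, projection, rectifier, steepness); check the repo's pyESN.py for the exact signature before running.

```python
import numpy as np
from pyESN import ESN   # assumed module/class name from the pyESN fork

esn = ESN(
    n_inputs=1,
    n_outputs=1,
    n_reservoir=1000,
    spectral_radius=1.3472585851237922,
    sparsity=0.7686812449454254,
    noise=0,
    sphere_radius=35.86520316391459,   # fork-specific parameter (assumed kwarg)
    projection=1,                      # fork-specific parameter (assumed kwarg)
    rectifier=1,                       # fork-specific parameter (assumed kwarg)
    steepness=2,                       # fork-specific parameter (assumed kwarg)
    teacher_forcing=True,
    random_state=174,
)

# Typical pyESN workflow: fit on the start of the series, then predict
# its continuation (data, train_len and future_len are placeholders).
# pred_train = esn.fit(np.ones(train_len), data[:train_len])
# prediction = esn.predict(np.ones(future_len))
```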