Abstract
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights. These methods, in which only the last layer of weights and a few hyperparameters are optimized, have been successfully applied in a wide range of static and dynamic learning problems. Despite the popularity of this approach in empirical tasks, important theoretical questions regarding the relation between the unknown function, the weight distribution, and the approximation rate have remained open. In this work it is proved that, as long as the unknown function, functional, or dynamical system is sufficiently regular, it is possible to draw the internal weights of the random (recurrent) neural network from a generic distribution (not depending on the unknown object) and quantify the error in terms of the number of neurons and the hyperparameters. In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well and thus provides the first mathematical explanation for their empirically observed success at learning dynamical systems.
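The setting described in the abstract can be illustrated with a minimal sketch of an echo state network: the internal weights are drawn at random from a generic distribution and never trained, and only the linear readout is fitted. This is an illustrative toy example, not the paper's construction; the reservoir size, scaling, regularization, and the target functional (here, a hypothetical fading-memory task y_t = z_t z_{t-1}) are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters (hypothetical choices for illustration).
n_neurons = 200
spectral_scale = 0.9   # shrink the reservoir matrix to promote the echo state property
ridge = 1e-6           # Tikhonov regularization for the readout

# Randomly generated internal weights: drawn from a generic distribution,
# independent of the unknown target system, and never trained.
A = rng.normal(size=(n_neurons, n_neurons))
A *= spectral_scale / np.max(np.abs(np.linalg.eigvals(A)))
C = rng.normal(size=(n_neurons, 1))
zeta = rng.normal(size=(n_neurons, 1))

def reservoir_states(inputs):
    """Run the state recursion x_{t+1} = tanh(A x_t + C z_t + zeta) over an input sequence."""
    x = np.zeros((n_neurons, 1))
    states = []
    for z in inputs:
        x = np.tanh(A @ x + C * z + zeta)
        states.append(x.ravel())
    return np.array(states)

# Target: a simple fading-memory functional of the input, y_t = z_t * z_{t-1}.
z = rng.uniform(-1.0, 1.0, size=1000)
y = np.concatenate([[0.0], z[1:] * z[:-1]])

X = reservoir_states(z)
washout = 100  # discard the initial transient
Xw, yw = X[washout:], y[washout:]

# Train only the last layer: a ridge-regression readout W.
W = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_neurons), Xw.T @ yw)

train_mse = np.mean((Xw @ W - yw) ** 2)
```

With the internal weights frozen, fitting reduces to a linear least-squares problem, which is why such systems are cheap to train; the paper's results quantify how well this scheme can approximate sufficiently regular functionals and dynamical systems in terms of the number of neurons and the hyperparameters.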
Funding Statement
Lukas G. and J.P.O. acknowledge partial financial support from the Research Commission of the Universität Sankt Gallen and the Swiss National Science Foundation (grant number 200021_175801/1). Lyudmila G. acknowledges partial financial support from the Graduate School of Decision Sciences of the Universität Konstanz. J.P.O. acknowledges partial financial support from the French ANR “BIPHOPROC” project (ANR-14-OHRI-0002-02).
Acknowledgments
We thank Josef Teichmann for fruitful discussions that helped improve the paper. The three authors are grateful for the hospitality and generosity of the FIM at ETH Zurich, where a significant portion of the results in this paper was obtained.
Citation
Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega. "Approximation bounds for random neural networks and reservoir systems." Ann. Appl. Probab. 33(1), 28–69, February 2023. https://doi.org/10.1214/22-AAP1806