Open Access
A random matrix approach to neural networks
Cosme Louart, Zhenyu Liao, Romain Couillet
Ann. Appl. Probab. 28(2): 1190-1248 (April 2018). DOI: 10.1214/17-AAP1328

Abstract

This article studies the Gram random matrix model $G=\frac{1}{T}\Sigma^{{\mathsf{T}}}\Sigma$, $\Sigma=\sigma(WX)$, classically found in the analysis of random feature maps and random neural networks, where $X=[x_{1},\ldots,x_{T}]\in\mathbb{R}^{p\times T}$ is a (data) matrix of bounded norm, $W\in\mathbb{R}^{n\times p}$ is a matrix of independent zero-mean unit-variance entries, and $\sigma:\mathbb{R}\to\mathbb{R}$ is a Lipschitz continuous (activation) function, with $\sigma(WX)$ understood entry-wise. By means of a key concentration-of-measure lemma arising from nonasymptotic random matrix arguments, we prove that, as $n,p,T$ grow large at the same rate, the resolvent $Q=(G+\gamma I_{T})^{-1}$, for $\gamma>0$, behaves similarly to the resolvents encountered in sample covariance matrix models, involving notably the moment $\Phi=\frac{T}{n}{\mathrm{E}}[G]$; this provides in passing a deterministic equivalent for the empirical spectral measure of $G$. Application-wise, this result enables the estimation of the asymptotic performance of single-layer random neural networks. This in turn provides practical insight into the mechanisms at play in random neural networks, entailing several unexpected consequences, as well as a fast practical means to tune the network hyperparameters.
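To make the objects in the abstract concrete, the following is a minimal numerical sketch (not code from the paper) of the Gram matrix $G=\frac{1}{T}\Sigma^{\mathsf{T}}\Sigma$ with $\Sigma=\sigma(WX)$ and its resolvent $Q=(G+\gamma I_{T})^{-1}$. The dimensions, the choice of ReLU as the Lipschitz activation, and the value of $\gamma$ are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions (assumed): data dimension p, number of neurons n,
# number of samples T, all of comparable size as in the n, p, T -> infinity regime.
rng = np.random.default_rng(0)
p, n, T = 64, 128, 96
gamma = 1.0  # regularization parameter gamma > 0 (assumed value)

# Data matrix X of bounded norm and weight matrix W with i.i.d.
# zero-mean unit-variance entries, as in the model of the abstract.
X = rng.standard_normal((p, T)) / np.sqrt(p)
W = rng.standard_normal((n, p))

# Entry-wise Lipschitz activation; ReLU is one admissible choice of sigma.
Sigma = np.maximum(W @ X, 0.0)

# Gram matrix G = (1/T) Sigma^T Sigma and resolvent Q = (G + gamma I_T)^{-1}.
G = (Sigma.T @ Sigma) / T
Q = np.linalg.inv(G + gamma * np.eye(T))
```

The eigenvalue distribution of `G` (its empirical spectral measure) is the object for which the paper derives a deterministic equivalent through the moment $\Phi=\frac{T}{n}\mathrm{E}[G]$, which one could estimate here by averaging `G` over many independent draws of `W`.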

Citation


Cosme Louart. Zhenyu Liao. Romain Couillet. "A random matrix approach to neural networks." Ann. Appl. Probab. 28 (2) 1190 - 1248, April 2018. https://doi.org/10.1214/17-AAP1328

Information

Received: 1 February 2017; Revised: 1 June 2017; Published: April 2018
First available in Project Euclid: 11 April 2018

zbMATH: 06897953
MathSciNet: MR3784498
Digital Object Identifier: 10.1214/17-AAP1328

Subjects:
Primary: 60B20
Secondary: 62M45

Keywords: neural networks, random feature maps, random matrix theory

Rights: Copyright © 2018 Institute of Mathematical Statistics
