Open Access
December 2020
Analysis of a two-layer neural network via displacement convexity
Adel Javanmard, Marco Mondelli, Andrea Montanari
Ann. Statist. 48(6): 3619-3642 (December 2020). DOI: 10.1214/20-AOS1945

Abstract

Fitting a function by using linear combinations of a large number $N$ of “simple” components is one of the most fruitful ideas in statistical learning. This idea lies at the core of a variety of methods, from two-layer neural networks to kernel regression, to boosting. In general, the resulting risk minimization problem is nonconvex and is solved by gradient descent or its variants. Unfortunately, little is known about global convergence properties of these approaches.
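Schematically (with notation chosen here for illustration, not taken verbatim from the paper), the estimator and the risk being minimized take the form
$$\hat{f}_N(x) = \frac{1}{N}\sum_{i=1}^{N} \sigma(x; w_i), \qquad R_N(w_1,\dots,w_N) = \mathbb{E}\Big[\big(f(x) - \hat{f}_N(x)\big)^2\Big],$$
where the component parameters $w_1,\dots,w_N$ are fitted by (stochastic) gradient descent on $R_N$, which is nonconvex in general.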

Here, we consider the problem of learning a concave function $f$ on a compact convex domain $\Omega \subset {\mathbb{R}}^{d}$, using linear combinations of “bump-like” components (neurons). The parameters to be fitted are the centers of $N$ bumps, and the resulting empirical risk minimization problem is highly nonconvex. We prove that, in the limit in which the number of neurons diverges, the evolution of gradient descent converges to a Wasserstein gradient flow in the space of probability distributions over $\Omega $. Further, when the bump width $\delta $ tends to $0$, this gradient flow has a limit which is a viscous porous medium equation. Remarkably, the cost function optimized by this gradient flow exhibits a special property known as displacement convexity, which implies exponential convergence rates for $N\to \infty $, $\delta \to 0$.
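As a concrete illustration of this setting, the following is a minimal numerical sketch: a one-dimensional domain $\Omega = [0,1]$, Gaussian-shaped bumps of width $\delta$, a quadrature approximation of the population risk, and plain gradient descent on the bump centers. All of these specific choices (bump shape, target, step size, discretization) are assumptions made here for illustration, not the paper's exact construction.

```python
import numpy as np

# Illustrative sketch (not the paper's exact setup): learn a concave target
# f(x) = x * (1 - x) on Omega = [0, 1] with an average of N Gaussian "bumps"
# of width delta; the only trainable parameters are the bump centers w_i.

rng = np.random.default_rng(0)
N, delta, lr, steps = 200, 0.05, 0.5, 2000

def target(x):                      # concave function to be learned
    return x * (1.0 - x)

def bump(x, w):                     # Gaussian bump of width delta, centered at each w_i
    return np.exp(-(x[:, None] - w[None, :]) ** 2 / (2 * delta ** 2))

def model(x, w):                    # hat f_N(x) = (1/N) * sum_i K_delta(x - w_i)
    return bump(x, w).mean(axis=1)

w = rng.uniform(0.0, 1.0, size=N)   # random initial centers in Omega
x = np.linspace(0.0, 1.0, 400)      # quadrature grid approximating the population risk
y = target(x)

for t in range(steps):
    resid = model(x, w) - y                          # pointwise residual
    # gradient of the squared-error risk with respect to each center w_i
    dK_dw = bump(x, w) * (x[:, None] - w[None, :]) / delta ** 2
    grad = (resid[:, None] * dK_dw).mean(axis=0) / N
    w -= lr * N * grad                               # step scaled so the flow stays O(1) in N

print("final risk:", np.mean((model(x, w) - y) ** 2))
```

In the paper's asymptotic regime, the empirical distribution of the centers $w_i$ evolves (as $N\to\infty$) according to a Wasserstein gradient flow, and exponential convergence of such a scheme is what displacement convexity guarantees in the limit $\delta \to 0$.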

Surprisingly, this asymptotic theory appears to capture well the behavior for moderate values of $\delta $, $N$. Explaining this phenomenon, and understanding the dependence on $\delta $, $N$ in a quantitative manner remains an outstanding challenge.

Citation

Adel Javanmard, Marco Mondelli, Andrea Montanari. "Analysis of a two-layer neural network via displacement convexity." Ann. Statist. 48(6): 3619-3642, December 2020. https://doi.org/10.1214/20-AOS1945

Information

Received: 1 January 2019; Revised: 1 December 2019; Published: December 2020
First available in Project Euclid: 11 December 2020

MathSciNet: MR4185822
Digital Object Identifier: 10.1214/20-AOS1945

Subjects:
Primary: 62F10, 62J02
Secondary: 62H12

Keywords: convergence rate, displacement convexity, function regression, neural networks, stochastic gradient descent, Wasserstein gradient flow

Rights: Copyright © 2020 Institute of Mathematical Statistics
