Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
Ann. Statist. 49(2): 1029-1054 (April 2021). DOI: 10.1214/20-AOS1990


We consider the problem of learning an unknown function f on the d-dimensional sphere with respect to the square loss, given i.i.d. samples {(y_i, x_i)}_{i≤n}, where x_i is a feature vector uniformly distributed on the sphere and y_i = f(x_i) + ε_i. We study two popular classes of models that can be regarded as linearizations of two-layers neural networks around a random initialization: the random features model of Rahimi–Recht (RF), and the neural tangent model of Jacot–Gabriel–Hongler (NT). Both of these models can also be regarded as randomized approximations of kernel ridge regression (with respect to different kernels), and enjoy universal approximation properties when the number of neurons N diverges, for a fixed dimension d.
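As an illustration of the RF model described above, the sketch below draws random first-layer weights, freezes them, and fits only the second-layer coefficients by ridge regression. The dimensions, the ReLU activation, the ridge parameter, and the target function are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, n = 20, 500, 2000          # dimension, number of neurons, sample size

def sphere(m, d):
    """Sample m points uniformly on the unit sphere S^{d-1}."""
    z = rng.normal(size=(m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

X = sphere(n, d)                            # features uniform on the sphere
f = lambda x: x[:, 0] * x[:, 1]             # hypothetical degree-2 target
y = f(X) + 0.1 * rng.normal(size=n)         # noisy responses y_i = f(x_i) + eps_i

# RF model: random (frozen) first-layer weights W; train second layer only.
W = sphere(N, d)
Phi = np.maximum(W @ X.T, 0.0).T            # ReLU random features, shape (n, N)
lam = 1e-3                                  # ridge regularization (assumed value)
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)

# Prediction error on fresh samples.
X_test = sphere(1000, d)
y_hat = np.maximum(W @ X_test.T, 0.0).T @ a
test_err = np.mean((y_hat - f(X_test)) ** 2)
```

The NT model differs in which linearized parameters are trained (the gradient of the network with respect to the first-layer weights at initialization), but the fitting step is the same ridge regression on a random feature map.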

We consider two specific regimes: the infinite-sample finite-width regime, in which n = ∞ while d and N are large but finite, and the infinite-width finite-sample regime, in which N = ∞ while d and n are large but finite. In the first regime, we prove that if d^{ℓ+δ} ≤ N ≤ d^{ℓ+1−δ} for a small δ > 0, then RF effectively fits a degree-ℓ polynomial in the raw features, and NT fits a degree-(ℓ+1) polynomial. In the second regime, both RF and NT reduce to kernel methods with rotationally invariant kernels. We prove that, if the sample size satisfies d^{ℓ+δ} ≤ n ≤ d^{ℓ+1−δ}, then kernel methods can fit at most a degree-ℓ polynomial in the raw features. This lower bound is achieved by kernel ridge regression, and near-optimal prediction error is achieved for vanishing ridge regularization.
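The infinite-width reduction can be sketched concretely: for ReLU features with Gaussian first-layer weights and unit-sphere inputs, the N → ∞ limit of the RF model is kernel ridge regression with the rotationally invariant arc-cosine kernel, which depends on x, x' only through their inner product. The dimensions, target function, and ridge parameter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 20, 1000

def sphere(m, d):
    """Sample m points uniformly on the unit sphere S^{d-1}."""
    z = rng.normal(size=(m, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def relu_kernel(X1, X2):
    """N -> infinity limit of ReLU random features (first-order arc-cosine
    kernel, Gaussian weights, unit-norm inputs, up to normalization).
    Rotationally invariant: a function of <x, x'> alone."""
    c = np.clip(X1 @ X2.T, -1.0, 1.0)        # cosines of pairwise angles
    theta = np.arccos(c)
    return (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)

X = sphere(n, d)
f = lambda x: x[:, 0] ** 2 - 1.0 / d         # hypothetical degree-2 target
y = f(X) + 0.1 * rng.normal(size=n)

# Kernel ridge regression with small (near-vanishing) regularization.
lam = 1e-4
K = relu_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)

X_test = sphere(500, d)
y_hat = relu_kernel(X_test, X) @ alpha
```

In the regime d^{ℓ+δ} ≤ n ≤ d^{ℓ+1−δ}, the result above says that such a predictor can capture at most the degree-ℓ polynomial component of f, regardless of the choice of rotationally invariant kernel.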


Citation

Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari. "Linearized two-layers neural networks in high dimension." Ann. Statist. 49 (2) 1029 - 1054, April 2021. https://doi.org/10.1214/20-AOS1990


Received: 1 July 2019; Revised: 1 June 2020; Published: April 2021
First available in Project Euclid: 2 April 2021

Digital Object Identifier: 10.1214/20-AOS1990

MSC subject classifications — Primary: 62G08; Secondary: 62J07

Keywords: approximation bounds, kernel ridge regression, neural tangent kernel, random features, two-layers neural networks

Rights: Copyright © 2021 Institute of Mathematical Statistics


