April 2024 The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression
Hamed Hassani, Adel Javanmard
Ann. Statist. 52(2): 441-465 (April 2024). DOI: 10.1214/24-AOS2353


Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparametrized models have been studied extensively in recent years, and the virtues of overparametrization have been established from both the statistical perspective, via the double-descent phenomenon, and the computational perspective, via the structural properties of the optimization landscape. Despite this success, these models are also known to be highly vulnerable to small adversarial perturbations of their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than their best attainable performance on benign inputs (standard generalization). It is thus imperative to understand how overparametrization fundamentally affects robustness.

In this paper, we provide a precise characterization of the role of overparametrization in robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension, and the number of parameters grow in proportion to one another, and derive an asymptotically exact formula for the robust generalization error of the adversarially trained model. Our theory reveals the nontrivial effect of overparametrization on robustness and indicates that high overparametrization can hurt robust generalization.
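To make the setup concrete, the following sketch trains a random features regression model adversarially in a proportional regime like the one described above. It is illustrative only: the specific dimensions, the ℓ2 perturbation budget `eps`, the ReLU activation, and the single-step (FGSM-style) inner maximization are assumptions made for the sketch, not the paper's exact formulation, which analyzes the exact min-max problem asymptotically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional regime: n samples, d input dims, N random features (illustrative sizes)
n, d, N = 200, 100, 400
eps = 0.05        # ell_2 adversarial perturbation budget (assumed)
lam = 1e-3        # ridge penalty
steps, lr = 200, 0.02

# Synthetic data from a noisy linear ground truth (assumed, for illustration)
beta = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = X @ beta + 0.1 * rng.normal(size=n)

# Random first-layer weights W are drawn once and kept fixed; only theta is trained
W = rng.normal(size=(N, d)) / np.sqrt(d)

def features(X):
    """ReLU random features phi(x) = relu(W x)."""
    return np.maximum(X @ W.T, 0.0)

# Adversarial training loop: alternate (i) a one-step linearized inner
# maximization over ell_2-bounded input perturbations, (ii) a gradient
# step on the ridge-regularized squared loss at the perturbed inputs.
theta = np.zeros(N)
for _ in range(steps):
    resid = features(X) @ theta - y                 # (n,) residuals
    # Gradient of the per-sample squared loss w.r.t. the input x_i:
    # resid_i * W^T (theta ⊙ 1[W x_i > 0])
    mask = (X @ W.T > 0).astype(float)              # (n, N) ReLU active set
    gx = resid[:, None] * ((mask * theta) @ W)      # (n, d) input gradients
    norms = np.linalg.norm(gx, axis=1, keepdims=True) + 1e-12
    X_adv = X + eps * gx / norms                    # ell_2-FGSM perturbation
    # Outer minimization: full-batch gradient step on the perturbed objective
    Phi = features(X_adv)
    grad = Phi.T @ (Phi @ theta - y) / n + lam * theta
    theta -= lr * grad

# Training error on clean (benign) inputs after adversarial training
print(np.mean((features(X) @ theta - y) ** 2))
```

Sweeping the ratio N/n (with d fixed in proportion) and evaluating the error on adversarially perturbed test points is the empirical counterpart of the overparametrization effect the asymptotic theory characterizes exactly.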

Funding Statement

The research of H. Hassani is supported by an NSF CAREER award, an AFOSR YIP award, the Intel Rising Star award, and the AI Institute for Learning-Enabled Optimization at Scale (TILOS).
A. Javanmard is partially supported by the Sloan Research Fellowship in mathematics, an Adobe Data Science Faculty Research Award, NSF CAREER Award DMS-1844481, and NSF Award DMS-2311024.


The authors thank Alexander Robey for interesting discussions and feedback on an early draft.




Received: 1 April 2023; Revised: 1 October 2023; Published: April 2024
First available in Project Euclid: 9 May 2024

Digital Object Identifier: 10.1214/24-AOS2353

Primary: 62E20, 62F12
Secondary: 62F35

Keywords: adversarial training, Gaussian equivalence property, precise high-dimensional asymptotics, random features models

Rights: Copyright © 2024 Institute of Mathematical Statistics


