Open Access
February 2021
Asymptotic optimality in stochastic optimization
John C. Duchi, Feng Ruan
Ann. Statist. 49(1): 21-48 (February 2021). DOI: 10.1214/19-AOS1831

Abstract

We study local complexity measures for stochastic convex optimization problems, providing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems. We give complementary optimality results, developing fully online methods that adaptively achieve optimal convergence guarantees. Our results provide function-specific lower bounds and convergence results that make precise a correspondence between statistical difficulty and the geometric notion of tilt-stability from optimization. As part of this development, we show how variants of Nesterov’s dual averaging—a stochastic gradient-based procedure—guarantee finite time identification of constraints in optimization problems, while stochastic gradient procedures fail. Additionally, we highlight a gap between problems with linear and nonlinear constraints: standard stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints, necessitating the development of asymptotically optimal Riemannian stochastic gradient methods.
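For context, one standard textbook form of the two procedures contrasted above is recorded below. This is a minimal sketch, not necessarily the exact variants analyzed in the paper; here $C$ denotes a closed convex constraint set, $g_k$ a stochastic subgradient observed at iteration $k$, and $\alpha_k > 0$ a stepsize sequence.

Projected stochastic gradient descent updates
\[ x_{k+1} = \Pi_C\bigl(x_k - \alpha_k g_k\bigr), \]
where $\Pi_C$ is Euclidean projection onto $C$, whereas dual averaging aggregates all past stochastic gradients before projecting,
\[ x_{k+1} = \operatorname*{argmin}_{x \in C} \Bigl\{ \Bigl\langle \textstyle\sum_{i \le k} g_i,\, x \Bigr\rangle + \tfrac{1}{2\alpha_k} \lVert x \rVert_2^2 \Bigr\}. \]
It is this reliance on the accumulated gradient sum, rather than on the most recent noisy gradient alone, that the abstract credits with finite time identification of the active constraints.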

Citation


John C. Duchi, Feng Ruan. "Asymptotic optimality in stochastic optimization." Ann. Statist. 49(1): 21-48, February 2021. https://doi.org/10.1214/19-AOS1831

Information

Received: 1 August 2017; Revised: 1 November 2018; Published: February 2021
First available in Project Euclid: 29 January 2021

Digital Object Identifier: 10.1214/19-AOS1831

Subjects:
Primary: 62F10, 62F12, 68Q25

Keywords: convex analysis, local asymptotic minimax theory, manifold identification, stochastic gradients

Rights: Copyright © 2021 Institute of Mathematical Statistics
