Open Access
Accelerated randomized stochastic optimization
Jürgen Dippon
Ann. Statist. 31(4): 1260-1281 (August 2003). DOI: 10.1214/aos/1059655913


We propose a general class of randomized gradient estimates to be employed in a recursive search for the minimum of an unknown multivariate regression function. Only two observations are used per iteration step. Special cases include random direction stochastic approximation (Kushner and Clark), simultaneous perturbation stochastic approximation (Spall) and a special kernel-based stochastic approximation method (Polyak and Tsybakov). If the unknown regression function is $p$-smooth ($p\ge 2$) at the point of minimum, these methods achieve the optimal rate of convergence $O(n^{-(p-1)/(2p)})$. For both the classical stochastic approximation scheme (Kiefer and Wolfowitz) and the averaging scheme (Ruppert and Polyak), the related asymptotic distributions are computed.
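To illustrate the two-observation idea in the special case of simultaneous perturbation stochastic approximation (Spall), here is a minimal sketch in Python. The function name, gain sequences and their exponents are illustrative choices, not taken from the paper; the paper's general class of randomized gradient estimates subsumes schemes of this form.

```python
import numpy as np

def spsa_minimize(f, x0, n_iter=2000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=None):
    """Recursive search for the minimum of f using a randomized
    gradient estimate built from only two observations per step
    (simultaneous perturbation stochastic approximation).

    Gains a_k = a/k**alpha and c_k = c/k**gamma are common
    illustrative choices, not the paper's tuning.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a / k**alpha                 # step-size sequence
        c_k = c / k**gamma                 # perturbation-size sequence
        # Random Rademacher direction: all coordinates perturbed at once
        delta = rng.choice([-1.0, 1.0], size=x.shape)
        # Two observations per iteration step
        y_plus = f(x + c_k * delta)
        y_minus = f(x - c_k * delta)
        # Randomized (simultaneous perturbation) gradient estimate
        ghat = (y_plus - y_minus) / (2.0 * c_k * delta)
        x = x - a_k * ghat
    return x
```

For example, applied to the quadratic $f(x) = \|x - (1,1)\|^2$, the iterates drift toward the minimizer $(1,1)$.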




Published: August 2003
First available in Project Euclid: 31 July 2003

zbMATH: 1105.62370
MathSciNet: MR2001650
Digital Object Identifier: 10.1214/aos/1059655913

Primary: 62L20

Keywords: asymptotic normality, gradient estimation, optimal rates of convergence, randomization, stochastic approximation, stochastic optimization

Rights: Copyright © 2003 Institute of Mathematical Statistics

