Abstract
The asymptotic behavior of the stochastic gradient algorithm using biased gradient estimates is analyzed. Relying on arguments based on dynamical systems theory (chain recurrence) and differential geometry (the Yomdin theorem and Łojasiewicz inequalities), upper bounds on the asymptotic bias of this algorithm are derived. The results hold under mild conditions and cover a broad class of algorithms used in machine learning, signal processing and statistics.
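As an illustration of the setting (a minimal sketch, not from the paper), the following code runs stochastic gradient descent on f(x) = x² with gradient estimates corrupted by an additive bias and zero-mean noise; the iterates settle near the biased stationary point -bias/2 rather than the true minimizer 0, so the asymptotic error scales with the bias. The function name and parameters are illustrative assumptions:

```python
import random

def sgd_biased(grad, x0, bias, n_steps=5000, seed=0):
    """Stochastic gradient search driven by biased gradient estimates:
    each estimate is grad(x) + bias + zero-mean noise."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_steps + 1):
        step = 1.0 / n  # diminishing step sizes (sum divergent)
        noise = rng.gauss(0.0, 0.1)
        x -= step * (grad(x) + bias + noise)
    return x

# f(x) = x^2 has its true minimizer at 0; with a persistent bias b the
# iterates hover near -b/2, a neighborhood whose size scales with b.
x_star = sgd_biased(lambda x: 2.0 * x, x0=1.0, bias=0.2)
```

This toy example only exhibits the phenomenon the paper quantifies: the limit set of the algorithm is displaced from the true stationary points by an amount controlled by the bias of the gradient estimates.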
Citation
Vladislav B. Tadić, Arnaud Doucet. "Asymptotic bias of stochastic gradient search." Ann. Appl. Probab. 27 (6) 3255-3304, December 2017. https://doi.org/10.1214/16-AAP1272