Abstract
In this paper, we propose a general framework for stochastic online convex optimization that yields fast-rate stochastic regret bounds. Specifically, we demonstrate that certain algorithms, including the Online Newton Step and a scale-free variant of Bernstein online aggregation, achieve the best-known rates in unbounded stochastic settings. To illustrate the usefulness of our approach, we apply it to calibrating parametric probabilistic forecasters of non-stationary sub-Gaussian time series. Importantly, our fast-rate stochastic regret bounds are valid at any time, providing a flexible and robust performance metric for sequential algorithms. Our proofs rely on combining self-bounded and Poissonian inequalities for martingales and sub-Gaussian random variables, respectively, under a stochastic exp-concavity assumption.
Acknowledgments
The author would like to thank the Institut für Statistik und Operations Research at Vienna University for its kind hospitality. He is also grateful to an anonymous Referee for helpful comments and suggestions on an earlier version of the paper.
Citation
Olivier Wintenberger. "Stochastic online convex optimization. Application to probabilistic time series forecasting." Electron. J. Statist. 18 (1) 429 - 464, 2024. https://doi.org/10.1214/23-EJS2208