Open Access
Stochastic online convex optimization. Application to probabilistic time series forecasting
Olivier Wintenberger
Electron. J. Statist. 18(1): 429-464 (2024). DOI: 10.1214/23-EJS2208

Abstract

In this paper, we propose a general framework for stochastic online convex optimization that yields fast-rate stochastic regret bounds. Specifically, we demonstrate that certain algorithms, including online Newton steps and a scale-free variant of Bernstein online aggregation, achieve the best-known rates in unbounded stochastic settings. To illustrate the usefulness of our approach, we apply it to calibrating parametric probabilistic forecasters of non-stationary sub-Gaussian time series. Importantly, our fast-rate stochastic regret bounds are valid at any time, providing a flexible and robust performance metric for sequential algorithms. Our proofs combine self-bounded and Poissonian inequalities for martingales and sub-Gaussian random variables, respectively, under a stochastic exp-concavity assumption.
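For orientation, the abstract's reference to online Newton steps can be illustrated with a minimal Python sketch of the classical Online Newton Step of Hazan, Agarwal and Kale (2007), written here without the usual projection step since the paper works in unbounded settings. The gradient interface, the parameters gamma and eps, and the toy least-squares stream below are illustrative assumptions, not the paper's own construction.

    import numpy as np

    def online_newton_step(gradient, theta0, gamma=1.0, eps=1.0, n_rounds=100):
        """Classical Online Newton Step (Hazan, Agarwal and Kale, 2007), sketch.

        gradient(t, theta) must return the gradient of the round-t convex
        loss at theta; this interface is a hypothetical choice made for the
        example. No projection is performed, matching an unbounded domain.
        """
        theta = np.asarray(theta0, dtype=float)
        A = eps * np.eye(theta.size)   # regularized Gram matrix of past gradients
        for t in range(n_rounds):
            g = gradient(t, theta)     # gradient feedback for round t
            A += np.outer(g, g)        # rank-one second-order update
            # Newton-style step preconditioned by the inverse Gram matrix
            theta = theta - (1.0 / gamma) * np.linalg.solve(A, g)
        return theta

    # Illustrative usage: sequential least squares on a synthetic stream.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
    grad = lambda t, th: 2.0 * (X[t] @ th - y[t]) * X[t]
    theta_hat = online_newton_step(grad, np.zeros(3), gamma=0.5)

Under exp-concavity of the losses, this rank-one preconditioning is what drives the fast (logarithmic-in-horizon) regret rates that the paper extends to the stochastic, unbounded setting.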

Acknowledgments

The author would like to thank the Institut für Statistik und Operations Research at Vienna University for its kind hospitality. He is also grateful to an anonymous referee for helpful comments and suggestions on an earlier version of the paper.

Citation


Olivier Wintenberger. "Stochastic online convex optimization. Application to probabilistic time series forecasting." Electron. J. Statist. 18(1): 429-464, 2024. https://doi.org/10.1214/23-EJS2208

Information

Received: 1 April 2023; Published: 2024
First available in Project Euclid: 7 February 2024

Digital Object Identifier: 10.1214/23-EJS2208

Subjects:
Primary: 62L99, 62M10

Keywords: probabilistic forecasting, sequential learning, time series prediction
