Open Access
August 2023
Relaxing the i.i.d. assumption: Adaptively minimax optimal regret via root-entropic regularization
Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy
Ann. Statist. 51(4): 1850-1876 (August 2023). DOI: 10.1214/23-AOS2315

Abstract

We consider prediction with expert advice when data are generated from distributions varying arbitrarily within an unknown constraint set. This semi-adversarial setting includes (at the extremes) the classical i.i.d. setting, when the unknown constraint set is restricted to be a singleton, and the unconstrained adversarial setting, when the constraint set is the set of all distributions. The Hedge algorithm—long known to be minimax (rate) optimal in the adversarial regime—was recently shown to be simultaneously minimax optimal for i.i.d. data. In this work, we propose to relax the i.i.d. assumption by seeking adaptivity at all levels of a natural ordering on constraint sets. We provide matching upper and lower bounds on the minimax regret at all levels, show that Hedge with deterministic learning rates is suboptimal outside of the extremes, and prove that one can adaptively obtain minimax regret at all levels. We achieve this optimal adaptivity using the follow-the-regularized-leader (FTRL) framework, with a novel adaptive regularization scheme that implicitly scales as the square root of the entropy of the current predictive distribution, rather than the entropy of the initial predictive distribution. Finally, we provide novel technical tools to study the statistical performance of FTRL along the semi-adversarial spectrum.

Funding Statement

BB was supported by an NSERC Canada Graduate Scholarship and the Vector Institute. JN was supported by an NSERC Vanier Canada Graduate Scholarship and the Vector Institute. DMR is supported in part by an NSERC Discovery Grant, an Ontario Early Researcher Award, Canada CIFAR AI Chair funding through the Vector Institute, and a stipend provided by the Charles Simonyi Endowment. This material is also based upon work supported by the United States Air Force under Contract No. FA850-19-C-0511.

Acknowledgments

BB and JN are equal-contribution authors; order was determined randomly.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force. DMR is also a faculty member of the Vector Institute. BB was a student affiliate at the Vector Institute while this work was undertaken. JN was a student affiliate at the Vector Institute when this work began, and is now a faculty affiliate there. This research was partially carried out while all three authors were visiting the Institute for Advanced Study in Princeton, New Jersey, for the Special Year on Optimization, Statistics and Theoretical Machine Learning. JN's and BB's travel to the Institute for Advanced Study was separately funded by NSERC Michael Smith Foreign Study Supplements. We thank Nicolò Campolongo, Peter D. Grünwald, Teodor Vanislavov Marinov, Francesco Orabona, Alex Stringer, Csaba Szepesvári, Yanbo Tang, and Julian Zimmert for their insightful comments on preliminary versions of this work.

Citation


Blair Bilodeau. Jeffrey Negrea. Daniel M. Roy. "Relaxing the i.i.d. assumption: Adaptively minimax optimal regret via root-entropic regularization." Ann. Statist. 51 (4) 1850 - 1876, August 2023. https://doi.org/10.1214/23-AOS2315

Information

Received: 1 July 2022; Revised: 1 July 2023; Published: August 2023
First available in Project Euclid: 19 October 2023

Digital Object Identifier: 10.1214/23-AOS2315

Subjects:
Primary: 62C20, 62L10
Secondary: 60G25, 62M20, 68Q32, 68T05

Keywords: adaptive minimax regret, aggregation, prediction with expert advice, robust prediction, sequential decision theory

Rights: Copyright © 2023 Institute of Mathematical Statistics
