Open Access
Gaussian process bandits with adaptive discretization
Shubhanshu Shekhar, Tara Javidi
Electron. J. Statist. 12(2): 3829-3874 (2018). DOI: 10.1214/18-EJS1497


In this paper, the problem of maximizing a black-box function $f:\mathcal{X}\to \mathbb{R}$ is studied in the Bayesian framework with a Gaussian Process prior. In particular, a new algorithm for this problem is proposed, and high-probability bounds on its simple and cumulative regret are established. The query point selection rule in most existing methods involves an exhaustive search over an increasingly fine sequence of uniform discretizations of $\mathcal{X}$. The proposed algorithm, in contrast, adaptively refines $\mathcal{X}$, which leads to a lower computational complexity, particularly when $\mathcal{X}$ is a subset of a high-dimensional Euclidean space. In addition to the computational gains, sufficient conditions are identified under which the regret bounds of the new algorithm improve upon the known results. Finally, an extension of the algorithm to the case of contextual bandits is proposed, and high-probability bounds on the contextual regret are presented.
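The contrast the abstract draws — exhaustive search over a uniform discretization versus adaptive refinement of promising regions — can be illustrated with a toy sketch. The following pure-Python example is not the paper's algorithm: the squared-exponential kernel, its length-scale, the confidence parameter, and the diameter bonus are all illustrative choices. It maintains a set of interval cells covering $\mathcal{X}=[0,1]$, scores each cell's center with a GP upper confidence bound plus a term proportional to the cell's diameter, queries the center of the best-scoring cell, and splits only that cell:

```python
import math

def rbf(x, y, ls=0.2):
    """Squared-exponential kernel (length-scale is an illustrative choice)."""
    return math.exp(-((x - y) ** 2) / (2 * ls * ls))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(x, X, y, noise=1e-4):
    """GP posterior mean and variance at x given observations (X, y)."""
    n = len(X)
    if n == 0:
        return 0.0, rbf(x, x)
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k = [rbf(x, xi) for xi in X]
    mean = sum(ki * ai for ki, ai in zip(k, solve(K, y)))
    var = rbf(x, x) - sum(ki * vi for ki, vi in zip(k, solve(K, k)))
    return mean, max(var, 1e-12)

def adaptive_gp_bandit(f, T=30, beta=2.0):
    """Maximize f on [0, 1], refining only the most promising cell each round."""
    X, y = [], []
    cells = [(0.0, 1.0)]  # interval cells covering the domain
    for _ in range(T):
        def score(cell):
            lo, hi = cell
            m, v = gp_posterior((lo + hi) / 2, X, y)
            # UCB at the cell center, plus a diameter bonus so coarse,
            # unexplored cells stay competitive with refined ones
            return m + beta * math.sqrt(v) + (hi - lo)
        lo, hi = max(cells, key=score)
        c = (lo + hi) / 2
        X.append(c)
        y.append(f(c))
        cells.remove((lo, hi))
        cells += [(lo, c), (c, hi)]  # split only the selected cell
    best = max(range(len(X)), key=lambda i: y[i])
    return X[best], y[best]
```

Because only the selected cell is split, the number of candidate points grows linearly in the number of queries rather than exponentially in the refinement depth — a one-dimensional caricature of the computational gain the abstract attributes to adaptive discretization in high-dimensional domains.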




Received: 1 January 2018; Published: 2018
First available in Project Euclid: 4 December 2018

zbMATH: 07003231
MathSciNet: MR3882941
Digital Object Identifier: 10.1214/18-EJS1497

Keywords: Bandits, Bayesian optimization, Gaussian processes
