Abstract
Minimax $L_{2}$ risks for high-dimensional nonparametric regression are derived under two sparsity assumptions: (1) the true regression surface is a sparse function that depends only on $d=O(\log n)$ important predictors among a list of $p$ predictors, with $\log p=o(n)$; (2) the true regression surface depends on $O(n)$ predictors but is an additive function in which each additive component is sparse, may contain two or more interacting predictors, and may have a smoothness level different from that of the other components. Under either modeling assumption, a practicable extension of the widely used Bayesian Gaussian process regression method is shown to adaptively attain the optimal minimax rate (up to $\log n$ terms) asymptotically as both $n,p\to\infty$ with $\log p=o(n)$.
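As a rough formalization of the two sparsity classes described above (the symbols $f_0$, $S$, $S_j$, and $\alpha_j$ are introduced here only for illustration and are not fixed by the abstract itself), the assumptions can be sketched as
$$\text{(1)}\quad f_0(x)=f(x_S)\ \text{for some } S\subset\{1,\dots,p\},\quad |S|=d=O(\log n),\quad \log p=o(n);$$
$$\text{(2)}\quad f_0(x)=\sum_{j} f_j(x_{S_j}),\quad \text{each } S_j \text{ small (possibly with } |S_j|\ge 2\text{)},\quad \Big|\textstyle\bigcup_j S_j\Big|=O(n),$$
where each component $f_j$ may have its own smoothness level $\alpha_j$.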
Citation
Yun Yang and Surya T. Tokdar. "Minimax-optimal nonparametric regression in high dimensions." Ann. Statist. 43(2): 652–674, April 2015. https://doi.org/10.1214/14-AOS1289