Open Access
June 2012
Deviation optimal learning using greedy $Q$-aggregation
Dong Dai, Philippe Rigollet, Tong Zhang
Ann. Statist. 40(3): 1878-1905 (June 2012). DOI: 10.1214/12-AOS1025

Abstract

Given a finite family of functions, the goal of model selection aggregation is to construct a procedure that mimics the function from this family that is the closest to an unknown regression function. More precisely, we consider a general regression model with fixed design and measure the distance between functions by the mean squared error at the design points. While procedures based on exponential weights are known to solve the problem of model selection aggregation in expectation, they are, surprisingly, sub-optimal in deviation. We propose a new formulation called $Q$-aggregation that addresses this limitation; namely, its solution leads to sharp oracle inequalities that are optimal in a minimax sense. Moreover, based on the new formulation, we design greedy $Q$-aggregation procedures that produce sparse aggregation models achieving the optimal rate. The convergence and performance of these greedy procedures are illustrated and compared with other standard methods on simulated examples.
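To make the idea concrete, below is a minimal, illustrative Python sketch (not taken from the paper) of a greedy, simplex-constrained minimization of a $Q$-type criterion of the flavor $(1-\nu)\|Y - f_\theta\|_n^2 + \nu \sum_j \theta_j \|Y - f_j\|_n^2$ with $f_\theta = \sum_j \theta_j f_j$. It assumes a fixed-design setup in which Y is the response vector and F is an n-by-M matrix whose columns are the dictionary functions evaluated at the design points. The function names, the choice of nu, the step-size schedule, and the omission of the paper's prior-dependent penalty are all assumptions made for illustration; the exact $Q$-aggregation functional and the GMA greedy updates are specified in the article.

```python
import numpy as np

def q_objective(theta, Y, F, nu=0.5):
    """Q-type criterion (hedged sketch): a convex combination of the fit of
    the aggregate f_theta = F @ theta and the theta-weighted average fit of
    the individual functions. The paper's functional also carries a
    prior-dependent penalty term, omitted here."""
    f_theta = F @ theta
    agg_term = np.mean((Y - f_theta) ** 2)              # ||Y - f_theta||_n^2
    per_model = np.mean((Y[:, None] - F) ** 2, axis=0)  # ||Y - f_j||_n^2 for each j
    return (1.0 - nu) * agg_term + nu * theta @ per_model

def greedy_q_aggregation(Y, F, nu=0.5, steps=20):
    """Greedy, simplex-constrained minimization (Frank-Wolfe flavor): start
    from the best single function and, at each step, mix the current iterate
    with the vertex that most decreases the Q-type criterion. The resulting
    weight vector has at most steps + 1 nonzero entries, hence is sparse.
    This is a generic sketch, not the paper's exact update rule."""
    n, M = F.shape
    per_model = np.mean((Y[:, None] - F) ** 2, axis=0)
    theta = np.zeros(M)
    theta[np.argmin(per_model)] = 1.0                   # best single model
    for k in range(2, steps + 2):
        alpha = 2.0 / k                                 # illustrative step size
        candidates = []
        for j in range(M):
            cand = (1.0 - alpha) * theta
            cand[j] += alpha
            candidates.append((q_objective(cand, Y, F, nu), j))
        _, best_j = min(candidates)
        theta *= (1.0 - alpha)
        theta[best_j] += alpha
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, M = 200, 50
    F = rng.normal(size=(n, M))                         # dictionary at the design points
    Y = F[:, 3] + 0.5 * rng.normal(size=n)              # one dictionary element plus noise
    theta_hat = greedy_q_aggregation(Y, F, nu=0.5, steps=10)
    print("support size:", np.count_nonzero(theta_hat))
    print("heaviest weight index:", int(np.argmax(theta_hat)))
```

The greedy structure is what yields sparse aggregation weights: each step adds at most one new dictionary element to the support, so after a small number of steps the aggregate is a short convex combination of the candidate functions.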

Citation

Dong Dai, Philippe Rigollet, Tong Zhang. "Deviation optimal learning using greedy $Q$-aggregation." Ann. Statist. 40(3): 1878-1905, June 2012. https://doi.org/10.1214/12-AOS1025

Information

Published: June 2012
First available in Project Euclid: 16 October 2012

zbMATH: 1257.62037
MathSciNet: MR3015047
Digital Object Identifier: 10.1214/12-AOS1025

Subjects:
Primary: 62G08
Secondary: 62G05, 62G20, 90C52

Keywords: deviation bounds, deviation suboptimality, exponential weights, greedy algorithm, lower bounds, model averaging, model selection, oracle inequalities, regression

Rights: Copyright © 2012 Institute of Mathematical Statistics
