The Annals of Statistics

On universal oracle inequalities related to high-dimensional linear models

Yuri Golubev

Full-text: Open access

Abstract

This paper deals with recovering an unknown vector θ from the noisy data Y = Aθ + σξ, where A is a known (m × n)-matrix and ξ is a white Gaussian noise. It is assumed that n is large and A may be severely ill-posed. Therefore, in order to estimate θ, a spectral regularization method is used, and our goal is to choose its regularization parameter with the help of the data Y. For spectral regularization methods related to the so-called ordered smoothers [see Kneip, Ann. Statist. 22 (1994) 835–866], we propose new penalties in the principle of empirical risk minimization. The heuristic idea behind these penalties is related to balancing excess risks. Based on this approach, we derive a sharp oracle inequality controlling the mean-square risks of data-driven spectral regularization methods.
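The setting of the abstract can be illustrated with a toy numerical sketch: a severely ill-posed matrix A, spectral cut-off estimators (a simple family of ordered smoothers), and a data-driven choice of the cut-off by penalized empirical risk minimization. This is only an illustration under simplified assumptions; `pen_factor` is a hypothetical stand-in for the paper's refined penalties, not the penalty actually proposed there.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: A has exponentially decaying singular values.
m, n = 60, 60
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.exp(-0.3 * np.arange(n))            # severe spectral decay
A = U[:, :n] * s @ V.T

theta = V @ (1.0 / (1.0 + np.arange(n)))   # "smooth" unknown vector
sigma = 0.01
Y = A @ theta + sigma * rng.standard_normal(m)

# Family of spectral cut-off estimators (an ordered smoother family):
# keep the k leading singular directions, invert A there, zero elsewhere.
z = (U[:, :n].T @ Y) / s                   # naively inverted coefficients

# Penalized empirical risk criterion (URE-type, up to a constant in k);
# pen_factor > 1 inflates the variance term, mimicking a heavier penalty.
pen_factor = 1.1
risks = []
for k in range(1, n + 1):
    var = sigma**2 * np.sum(1.0 / s[:k] ** 2)   # variance of inversion
    fit = -np.sum(z[:k] ** 2)                    # empirical-risk part
    risks.append(fit + 2 * pen_factor * var)

k_hat = 1 + int(np.argmin(risks))          # data-driven regularization parameter
theta_hat = V[:, :k_hat] @ z[:k_hat]       # resulting estimator of theta
```

The criterion is an unbiased-risk-type quantity: since E z_k² equals the true coefficient squared plus σ²/s_k², minimizing it trades the fit of the retained coefficients against the (penalized) noise amplification 1/s_k² of the inverted directions.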

Article information

Source
Ann. Statist., Volume 38, Number 5 (2010), 2751–2780.

Dates
First available in Project Euclid: 20 July 2010

Permanent link to this document
https://projecteuclid.org/euclid.aos/1279638539

Digital Object Identifier
doi:10.1214/10-AOS803

Mathematical Reviews number (MathSciNet)
MR2722455

Zentralblatt MATH identifier
1200.62074

Subjects
Primary: 62C10: Bayesian problems; characterization of Bayes procedures
Secondary: 62G05: Estimation

Keywords
Spectral regularization; excess risk; ordered smoother; empirical risk minimization; oracle inequality

Citation

Golubev, Yuri. On universal oracle inequalities related to high-dimensional linear models. Ann. Statist. 38 (2010), no. 5, 2751–2780. doi:10.1214/10-AOS803. https://projecteuclid.org/euclid.aos/1279638539



References

  • Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Proc. 2nd Intern. Symp. Inf. Theory (B. N. Petrov and F. Csáki, eds.) 267–281. Akadémiai Kiadó, Budapest.
  • Bauer, F. and Hohage, T. (2005). A Lepskij-type stopping rule for regularized Newton methods. Inverse Problems 21 1975–1991.
  • Bissantz, N., Hohage, T., Munk, A. and Ruymgaart, F. (2007). Convergence rates of general regularization methods for statistical inverse problems and applications. SIAM J. Numer. Anal. 45 2610–2636.
  • Cavalier, L. and Golubev, Y. (2006). Risk hull method and regularization by projections of ill-posed inverse problems. Ann. Statist. 34 1653–1677.
  • Engl, H. W., Hanke, M. and Neubauer, A. (1996). Regularization of Inverse Problems. Mathematics and Its Applications 375. Kluwer Academic, Dordrecht.
  • Golubev, Y. (2004). The principle of penalized empirical risk in severely ill-posed problems. Probab. Theory Related Fields 130 18–38.
  • Kneip, A. (1994). Ordered linear smoothers. Ann. Statist. 22 835–866.
  • Landweber, L. (1951). An iteration formula for Fredholm integral equations of the first kind. Amer. J. Math. 73 615–624.
  • Loubes, J.-M. and Ludeña, C. (2008). Adaptive complexity regularization for linear inverse problems. Electron. J. Stat. 2 661–677.
  • Mair, B. A. and Ruymgaart, F. H. (1996). Statistical inverse estimation in Hilbert scales. SIAM J. Appl. Math. 56 1424–1444.
  • Mathé, P. (2006). The Lepskii principle revisited. Inverse Problems 22 L11–L15.
  • O’Sullivan, F. (1986). A statistical perspective on ill-posed inverse problems (with discussion). Statist. Sci. 1 502–527.
  • Pinsker, M. S. (1980). Optimal filtration of square-integrable signals in Gaussian noise. Problems Inform. Transmission 16 52–68.
  • Tikhonov, A. N. and Arsenin, V. Y. (1977). Solutions of Ill-Posed Problems. Wiley, New York.
  • van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes. Springer, New York.