Annales de l'Institut Henri Poincaré, Probabilités et Statistiques

Iterative feature selection in least square regression estimation

Pierre Alquier

Abstract

This paper presents a new algorithm for regression estimation, in both the inductive and the transductive setting. The estimator is defined as a linear combination of functions in a given dictionary. The coefficients of the combination are computed sequentially, by projections onto simple sets. These sets are defined as confidence regions provided by a deviation (PAC) inequality on an estimator in one-dimensional models. We prove that every projection performed by the algorithm actually improves the performance of the estimator. We first give all the estimators and results in the inductive case, where the algorithm requires knowledge of the distribution of the design, and then in the transductive case, which seems a more natural application for this algorithm since no particular information on the distribution of the design is needed there. We finally show a connection with oracle inequalities, which enables us to prove that the estimator attains minimax rates of convergence in Sobolev and Besov spaces.
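As a reading aid for the procedure described above, here is a minimal sketch, in Python, of a coordinate-wise scheme of this flavour: for one dictionary function at a time, a one-dimensional coefficient is estimated, a deviation-type confidence interval is built around it, and the current coefficient is moved to the nearest point of that interval. Everything specific here is an illustrative assumption, not the paper's construction: the name iterative_projection_fit, the cyclic sweep over the dictionary, and the generic Hoeffding-style radius standing in for the PAC bound actually derived in the paper.

```python
import numpy as np

def iterative_projection_fit(feats, y, n_iter=200, level=0.05):
    """Hypothetical sketch of an iterative projection estimator.

    feats : (n, m) array; column j holds the dictionary function f_j
            evaluated at the n design points X_1, ..., X_n.
    y     : (n,) array of observed responses.
    """
    n, m = feats.shape
    alpha = np.zeros(m)                 # coefficients of the combination
    residual = y.astype(float)          # y minus the current fit
    col_norms = (feats ** 2).mean(axis=0)

    for t in range(n_iter):
        k = t % m                       # cyclic sweep through the dictionary
        if col_norms[k] == 0.0:
            continue                    # skip identically-zero features
        fk = feats[:, k]
        # One-dimensional least-squares coefficient of the residual on f_k.
        beta = (fk * residual).mean() / col_norms[k]
        # Radius of a deviation-type confidence interval around beta
        # (a generic Hoeffding-style width, not the paper's PAC bound).
        radius = np.sqrt(2.0 * np.log(2.0 * m / level) / n) / np.sqrt(col_norms[k])
        # Projection step: move alpha_k to the nearest point of the interval,
        # so the coefficient changes only when 0 lies outside the interval.
        shift = np.sign(beta) * max(abs(beta) - radius, 0.0)
        if shift != 0.0:
            alpha[k] += shift
            residual -= shift * fk
    return alpha
```

In this sketch every non-zero update strictly decreases the residual sum of squares, since the step has the same sign as, and smaller magnitude than, the one-dimensional least-squares step; this mirrors the paper's claim that each projection improves the estimator. The connection with oracle inequalities is then what yields rates such as the classical n^{-2s/(2s+1)} over Sobolev balls of smoothness s.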

Article information

Source
Ann. Inst. H. Poincaré Probab. Statist. Volume 44, Number 1 (2008), 47-88.

Dates
First available in Project Euclid: 25 February 2008

Permanent link to this document
https://projecteuclid.org/euclid.aihp/1203969868

Digital Object Identifier
doi:10.1214/07-AIHP106

Mathematical Reviews number (MathSciNet)
MR2451571

Zentralblatt MATH identifier
1206.62067

Subjects
Primary: 62G08: Nonparametric regression
Secondary: 62G15: Tolerance and confidence regions; 68T05: Learning and adaptive systems [See also 68Q32, 91E40]

Keywords
Regression estimation; Statistical learning; Confidence regions; Thresholding methods; Support vector machines

Citation

Alquier, Pierre. Iterative feature selection in least square regression estimation. Ann. Inst. H. Poincaré Probab. Statist. 44 (2008), no. 1, 47-88. doi:10.1214/07-AIHP106. https://projecteuclid.org/euclid.aihp/1203969868