Abstract
Representation learning plays a crucial role in automated feature selection, particularly for high-dimensional data, where non-parametric methods often struggle. In this study, we focus on supervised learning scenarios where the pertinent information resides within a lower-dimensional linear subspace of the data, namely the multi-index model. If this subspace were known, it would greatly enhance prediction, computation, and interpretation. To address this challenge, we propose a novel method for joint linear feature learning and non-parametric function estimation, aimed at more effectively leveraging hidden features. Our approach employs empirical risk minimisation, augmented with a penalty on function derivatives, which keeps the method versatile. Leveraging the orthogonality and rotation-invariance properties of Hermite polynomials, we introduce our estimator, named RegFeaL. Using alternating minimisation, we iteratively rotate the data to improve alignment with the leading directions. We establish that the expected risk of our method converges with high probability to the minimal risk, under minimal assumptions and with explicit rates. Additionally, we provide empirical results demonstrating the performance of RegFeaL in various experiments.
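The feature-recovery idea behind the abstract can be illustrated with a simplified sketch. This is not the paper's RegFeaL estimator (which relies on Hermite expansions and derivative penalties); it is a classical gradient-based stand-in, the outer-product-of-gradients approach, applied to toy single-index data. All names (`local_linear_gradients`, bandwidth `h`, the ridge term) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-index data: y depends on x only through a hidden direction w*.
n, d = 400, 5
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[0], w_star[1] = 0.6, 0.8
y = np.sin(2 * X @ w_star) + 0.05 * rng.standard_normal(n)

def local_linear_gradients(X, y, h=0.7):
    """Estimate the gradient of the regression function at each sample
    via kernel-weighted local linear fits (a simple non-parametric step)."""
    n, d = X.shape
    grads = np.empty((n, d))
    for i in range(n):
        w = np.exp(-np.sum((X - X[i]) ** 2, axis=1) / (2 * h ** 2))
        A = np.hstack([np.ones((n, 1)), X - X[i]])
        WA = A * w[:, None]
        # Small ridge term for numerical stability of the local solve.
        beta = np.linalg.solve(A.T @ WA + 1e-6 * np.eye(d + 1), WA.T @ y)
        grads[i] = beta[1:]
    return grads

# The top eigenvector of the averaged gradient outer product estimates
# the hidden direction: gradients of f(w*^T x) all point along w*.
G = local_linear_gradients(X, y)
M = G.T @ G / n
_, eigvec = np.linalg.eigh(M)
w_hat = eigvec[:, -1]
w_hat *= np.sign(w_hat @ w_star)  # resolve the sign ambiguity

print(abs(w_hat @ w_star))  # close to 1 when the direction is recovered
```

Once the subspace estimate is in hand, a one-dimensional non-parametric fit along `X @ w_hat` replaces the original d-dimensional problem, which is the computational and statistical payoff the abstract describes.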
Funding Statement
This work is funded in part by the French government under the management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). We also acknowledge support from the European Research Council (grants SEQUOIA 724063 and DYNASTY 101039676).
Acknowledgments
The authors thank Lawrence Stewart, Antonin Brossollet and Oumayma Bounou for fruitful discussions related to this work. The authors are also grateful to the CLEPS infrastructure from INRIA PARIS for providing resources and support, particularly Simon Legrand (https://paris-cluster-2019.gitlabpages.inria.fr/cleps/cleps-userguide/index.html).
Citation
Bertille Follain, Francis Bach. "Nonparametric linear feature learning in regression through regularisation." Electron. J. Statist. 18 (2) 4075–4118, 2024. https://doi.org/10.1214/24-EJS2301