Open Access
A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models
TrungTin Nguyen, Hien Duy Nguyen, Faicel Chamroukhi, Florence Forbes
Electron. J. Statist. 16(2): 4742-4822 (2022). DOI: 10.1214/22-EJS2057

Abstract

Mixture of experts (MoE) models are a popular class of statistical and machine learning models that have gained attention over the years due to their flexibility and efficiency. In this work, we consider Gaussian-gated localized MoE (GLoME) and block-diagonal covariance localized MoE (BLoME) regression models to represent nonlinear relationships in heterogeneous data with potential hidden graph-structured interactions between high-dimensional predictors. These models pose difficult statistical estimation and model selection questions, from both a computational and a theoretical perspective. This paper is devoted to the problem of model selection among a collection of GLoME or BLoME models characterized by the number of mixture components, the complexity of the Gaussian mean experts, and the hidden block-diagonal structure of the covariance matrices, in a penalized maximum likelihood estimation framework. In particular, we establish non-asymptotic risk bounds that take the form of weak oracle inequalities, provided that lower bounds on the penalties hold. The good empirical behavior of our models is then demonstrated on synthetic and real datasets.
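For orientation, the following is a schematic sketch, not taken verbatim from the paper, of a Gaussian-gated localized MoE conditional density and of the type of weak oracle inequality established in this framework; the notation (K, A_k, b_k, pen(m), and the constants C and κ) is assumed for illustration and may differ from the paper's.

\[
s_{\psi}(y \mid x) \;=\; \sum_{k=1}^{K}
\frac{\pi_k\, \varphi\bigl(x;\, c_k, \Gamma_k\bigr)}{\sum_{l=1}^{K} \pi_l\, \varphi\bigl(x;\, c_l, \Gamma_l\bigr)}\;
\varphi\bigl(y;\, A_k x + b_k,\; \Sigma_k\bigr),
\]

where \(\varphi(\cdot;\mu,\Sigma)\) denotes the Gaussian density, the ratio plays the role of the Gaussian gating network, and each expert has a linear mean \(A_k x + b_k\); in the BLoME case the covariance matrices \(\Sigma_k\) are further constrained to hidden block-diagonal structures. Given a collection of such models \(\{S_m\}_{m \in \mathcal{M}}\), the selected model \(\widehat{m}\) minimizes the penalized criterion \(-\frac{1}{n}\sum_{i=1}^{n}\log \widehat{s}_m(y_i \mid x_i) + \mathrm{pen}(m)\), and a weak oracle inequality of the following type controls the risk of the resulting estimator:

\[
\mathbb{E}\Bigl[\mathrm{JKL}_{\rho}^{\otimes n}\bigl(s_0, \widehat{s}_{\widehat{m}}\bigr)\Bigr]
\;\le\;
C \inf_{m \in \mathcal{M}}
\Bigl(
\inf_{s_m \in S_m} \mathrm{KL}^{\otimes n}\bigl(s_0, s_m\bigr) + \mathrm{pen}(m)
\Bigr)
+ \frac{\kappa}{n},
\]

provided the penalty \(\mathrm{pen}(m)\) is bounded below by a quantity of order \(D_m/n\), where \(D_m\) is the dimension of model \(S_m\), up to logarithmic factors.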

Funding Statement

This work is partially supported by the French Ministry of Higher Education and Research (MESRI), French National Research Agency (ANR) grant SMILES ANR-18-CE40-0014, Australian Research Council grant number DP180101192, and the Inria LANDER project.

Acknowledgments

TrungTin Nguyen is supported by a “Contrat doctoral” from the French Ministry of Higher Education and Research. Faicel Chamroukhi is funded by the French National Research Agency (ANR) grant SMILES ANR-18-CE40-0014. Hien Duy Nguyen is funded by Australian Research Council grant number DP180101192. This research is funded directly by the Inria LANDER project. TrungTin Nguyen also sincerely thanks the Inria Grenoble-Rhône-Alpes Research Centre for a valuable Visiting PhD Fellowship with the STATIFY team, during which this research was completed, and Erwan Le Pennec and Lucie Montuelle for providing the simulations for the SGaME models. Finally, we thank the Editor-in-Chief, Associate Editor, and Reviewers for their valuable comments, which enabled us to produce a much improved manuscript.

Citation

TrungTin Nguyen. Hien Duy Nguyen. Faicel Chamroukhi. Florence Forbes. "A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models." Electron. J. Statist. 16 (2) 4742 - 4822, 2022. https://doi.org/10.1214/22-EJS2057

Information

Received: 1 May 2021; Published: 2022
First available in Project Euclid: 27 September 2022

MathSciNet: MR4489239
zbMATH: 07603097
Digital Object Identifier: 10.1214/22-EJS2057

Subjects:
Primary: 62E17, 62H30
Secondary: 62H12

Keywords: block-diagonal covariance matrix, clustering, Gaussian locally-linear mapping models, graphical lasso, linear cluster-weighted models, mixture of experts, mixture of regressions, model selection, oracle inequality, penalized maximum likelihood
