Penalized spline estimation of principal components for sparse functional data: Rates of convergence
Shiyuan He, Jianhua Z. Huang, Kejun He
Bernoulli 30(4): 2795-2820 (November 2024). DOI: 10.3150/23-BEJ1695

Abstract

This paper gives a comprehensive treatment of the convergence rates of penalized spline estimators for simultaneously estimating several leading principal component functions when the functional data are sparsely observed. The penalized spline estimators are defined as solutions of a penalized empirical risk minimization problem, where the loss function belongs to a general class motivated by the matrix Bregman divergence and the penalty term is the integrated squared derivative. The theory reveals that the asymptotic behavior of the penalized spline estimators depends on the interplay among several factors: the smoothness of the unknown functions, the spline degree, the number of spline knots, the penalty order, and the penalty parameter. The theory also classifies the asymptotic behavior into seven scenarios and characterizes whether, and how, the minimax optimal rates of convergence are achievable in each scenario.
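The estimator described above can be sketched in a generic single-function form. The paper's actual formulation is multi-component and uses a Bregman-divergence-type loss, so the display below is only an illustrative version under standard penalized-spline notation; the symbols (spline space 𝒮, loss L, penalty parameter λ, penalty order m) are assumptions, not taken verbatim from the paper:

```latex
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{S}}\;
\frac{1}{n}\sum_{i=1}^{n} L\bigl(f; X_i\bigr)
\;+\; \lambda \int \bigl\{ f^{(m)}(t) \bigr\}^{2}\, dt ,
```

where $\mathcal{S}$ is a spline space of the chosen degree and knot number, $L$ is the empirical loss evaluated at the observed data $X_i$, and $\lambda \ge 0$ controls the roughness penalty of order $m$. The factors listed in the abstract (smoothness, degree, knot number, $m$, and $\lambda$) all enter this criterion, which is why their interplay governs the convergence rate.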

Acknowledgments

The authors would like to thank the anonymous referees, the Associate Editor, and the Editor for their constructive comments that improved the quality of this paper. Kejun He (email: kejunhe@ruc.edu.cn) is the corresponding author.

Citation


Shiyuan He, Jianhua Z. Huang, Kejun He. "Penalized spline estimation of principal components for sparse functional data: Rates of convergence." Bernoulli 30(4): 2795-2820, November 2024. https://doi.org/10.3150/23-BEJ1695

Information

Received: 1 December 2022; Published: November 2024
First available in Project Euclid: 30 July 2024

Digital Object Identifier: 10.3150/23-BEJ1695

Keywords: functional principal component analysis, manifold geometry, matrix Bregman divergence, roughness penalty
