Annals of Statistics

Distributed estimation of principal eigenspaces

Jianqing Fan, Dong Wang, Kaizheng Wang, and Ziwei Zhu

Abstract

Principal component analysis (PCA) is fundamental to statistical machine learning. It extracts latent principal factors that explain the most variation in the data. When data are stored across multiple machines, however, communication cost can prohibit computing PCA in a central location, and distributed algorithms for PCA are thus needed. This paper proposes and studies a distributed PCA algorithm: each node machine computes the top $K$ eigenvectors and transmits them to the central server; the central server then aggregates the information from all the node machines and conducts a PCA based on the aggregated information. We investigate the bias and variance of the resulting distributed estimator of the top $K$ eigenvectors. In particular, we show that for distributions with symmetric innovation, the empirical top eigenspaces are unbiased, and hence the distributed PCA is “unbiased.” We derive the rate of convergence for distributed PCA estimators, which depends explicitly on the effective rank of the covariance matrix, the eigengap, and the number of machines. We show that when the number of machines is not unreasonably large, the distributed PCA performs as well as the whole-sample PCA, even without full access to the whole data. The theoretical results are verified by an extensive simulation study. We also extend our analysis to the heterogeneous case where the population covariance matrices differ across local machines but share similar top eigenstructures.
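
The one-shot algorithm described in the abstract is easy to prototype. Below is a minimal NumPy sketch, not the authors' code: the function names (local_top_eigvecs, distributed_pca) and the toy spiked-covariance example are illustrative. Each node computes the top $K$ eigenvectors of its local sample covariance and ships only that $d \times K$ matrix (order $dK$ communication rather than order $d^2$ for a full covariance); the server aggregates by averaging the local projection matrices and taking the top $K$ eigenvectors of the average, which is one natural reading of "a PCA based on the aggregated information."

```python
import numpy as np

def local_top_eigvecs(X, K):
    """Return the d x K matrix of top-K eigenvectors of the local sample covariance."""
    S = X.T @ X / X.shape[0]           # local sample covariance (data assumed centered)
    _, vecs = np.linalg.eigh(S)        # eigenvalues in ascending order
    return vecs[:, -K:]                # columns spanning the local top-K eigenspace

def distributed_pca(X_blocks, K):
    """One-shot distributed PCA sketch: average the local projection matrices on the
    server, then run a final eigendecomposition on the average."""
    d = X_blocks[0].shape[1]
    P_bar = np.zeros((d, d))
    for X in X_blocks:                 # in practice each block lives on a different machine
        V = local_top_eigvecs(X, K)
        P_bar += V @ V.T               # projection onto the local top-K eigenspace
    P_bar /= len(X_blocks)
    _, vecs = np.linalg.eigh(P_bar)
    return vecs[:, -K:]                # distributed estimator of the top-K eigenspace

# Toy check on a spiked-covariance model: m machines with n samples each.
rng = np.random.default_rng(0)
d, K, m, n = 50, 3, 10, 200
U, _ = np.linalg.qr(rng.standard_normal((d, K)))      # true top-K eigenspace
spike = U * np.sqrt(np.array([10.0, 8.0, 6.0]))       # Sigma = U diag(10, 8, 6) U^T + I
blocks = [rng.standard_normal((n, K)) @ spike.T + rng.standard_normal((n, d))
          for _ in range(m)]
V_hat = distributed_pca(blocks, K)
err = np.linalg.norm(V_hat @ V_hat.T - U @ U.T)       # Frobenius distance between projectors
print(f"projection-distance error: {err:.3f}")
```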

Article information

Source
Ann. Statist., Volume 47, Number 6 (2019), 3009–3031.

Dates
Received: February 2017
Revised: January 2018
First available in Project Euclid: 31 October 2019

Permanent link to this document
https://projecteuclid.org/euclid.aos/1572487381

Digital Object Identifier
doi:10.1214/18-AOS1713

Mathematical Reviews number (MathSciNet)
MR4025733

Zentralblatt MATH identifier
07151052

Subjects
Primary: 62H25: Factor analysis and principal components; correspondence analysis
Secondary: 62E10: Characterization and structure theory

Keywords
Distributed learning; PCA; one-shot approach; communication efficiency; unbiasedness of empirical eigenspaces; heterogeneity

Citation

Fan, Jianqing; Wang, Dong; Wang, Kaizheng; Zhu, Ziwei. Distributed estimation of principal eigenspaces. Ann. Statist. 47 (2019), no. 6, 3009–3031. doi:10.1214/18-AOS1713. https://projecteuclid.org/euclid.aos/1572487381


References

  • Anderson, T. W. (1963). Asymptotic theory for principal component analysis. Ann. Math. Stat. 34 122–148.
  • Baik, J., Ben Arous, G. and Péché, S. (2005). Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. Ann. Probab. 33 1643–1697.
  • Battey, H., Fan, J., Liu, H., Lu, J. and Zhu, Z. (2015). Distributed estimation and inference with statistical guarantees. Preprint. Available at arXiv:1509.05457.
  • Bertrand, A. and Moonen, M. (2014). Distributed adaptive estimation of covariance matrix eigenvectors in wireless sensor networks with application to distributed PCA. Signal Process. 104 120–135.
  • Boutsidis, C., Woodruff, D. P. and Zhong, P. (2016). Optimal principal component analysis in distributed and streaming models. In STOC’16—Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing 236–249. ACM, New York.
  • Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation. Ann. Statist. 41 3074–3110.
  • Chen, X. and Xie, M. (2014). A split-and-conquer approach for analysis of extraordinarily large data. Statist. Sinica 24 1655–1684.
  • Chen, T.-L., Chang, D. D., Huang, S.-Y., Chen, H., Lin, C. and Wang, W. (2016). Integrating multiple random sketches for singular value decomposition. Preprint. Available at arXiv:1608.08285.
  • El Karoui, N. and d’Aspremont, A. (2010). Second order accurate distributed eigenvector computation for extremely large matrices. Electron. J. Stat. 4 1345–1385.
  • Fan, J., Wang, W. and Zhu, Z. (2016). Robust low-rank matrix recovery. Preprint. Available at arXiv:1603.08315.
  • Fan, J., Wang, D., Wang, K. and Zhu, Z. (2019). Supplement to “Distributed estimation of principal eigenspaces.” DOI:10.1214/18-AOS1713SUPP.
  • Garber, D., Shamir, O. and Srebro, N. (2017). Communication-efficient algorithms for distributed stochastic principal component analysis. Preprint. Available at arXiv:1702.08169.
  • Golub, G. H. and Van Loan, C. F. (2012). Matrix Computations, 3rd ed. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins Univ. Press, Baltimore, MD.
  • Guo, Z.-C., Lin, S.-B. and Zhou, D.-X. (2017). Learning theory of distributed spectral algorithms. Inverse Probl. 33 074009, 29.
  • Halko, N., Martinsson, P. G. and Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev. 53 217–288.
  • Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 24 417–441.
  • Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist. 29 295–327.
  • Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions. J. Amer. Statist. Assoc. 104 682–693.
  • Jung, S. and Marron, J. S. (2009). PCA consistency in high dimension, low sample size context. Ann. Statist. 37 4104–4130.
  • Kannan, R., Vempala, S. and Woodruff, D. (2014). Principal component analysis and higher correlations for distributed data. In Conference on Learning Theory 1040–1057.
  • Kargupta, H., Huang, W., Sivakumar, K. and Johnson, E. (2001). Distributed clustering using collective principal component analysis. Knowledge and Information Systems 3 422–448.
  • Kato, T. (1966). Perturbation Theory for Linear Operators. Die Grundlehren der Mathematischen Wissenschaften, Band 132. Springer, New York.
  • Kneip, A. and Utikal, K. J. (2001). Inference for density families using functional principal component analysis. J. Amer. Statist. Assoc. 96 519–542.
  • Koltchinskii, V. and Lounici, K. (2016). Asymptotics and concentration bounds for bilinear forms of spectral projectors of sample covariance. Ann. Inst. Henri Poincaré Probab. Stat. 52 1976–2013.
  • Koltchinskii, V. and Lounici, K. (2017). Concentration inequalities and moment bounds for sample covariance operators. Bernoulli 23 110–133.
  • Lee, J. D., Liu, Q., Sun, Y. and Taylor, J. E. (2017). Communication-efficient sparse regression. J. Mach. Learn. Res. 18 1–30.
  • Li, L., Scaglione, A. and Manton, J. H. (2011). Distributed principal subspace estimation in wireless sensor networks. IEEE J. Sel. Top. Signal Process. 5 725–738.
  • Liang, Y., Balcan, M.-F. F., Kanchanapally, V. and Woodruff, D. (2014). Improved distributed principal component analysis. In Advances in Neural Information Processing Systems 3113–3121.
  • Minsker, S. (2018). Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries. Ann. Statist. 46 2871–2903.
  • Mücke, N. and Blanchard, G. (2018). Parallelizing spectrally regularized kernel algorithms. J. Mach. Learn. Res. 19 Paper No. 30, 29.
  • Nadler, B. (2008). Finite sample approximation results for principal component analysis: A matrix perturbation approach. Ann. Statist. 36 2791–2817.
  • Onatski, A. (2012). Asymptotics of the principal components estimator of large factor models with weakly influential factors. J. Econometrics 168 244–258.
  • Paul, D. (2007). Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Statist. Sinica 17 1617–1642.
  • Paul, D. and Johnstone, I. M. (2012). Augmented sparse principal component analysis for high dimensional data. Preprint. Available at arXiv:1202.1242.
  • Pearson, K. (1901). On lines and planes of closest fit to systems of points in space. Philosophical Magazine Series 6 2 559–572.
  • Qu, Y., Ostrouchov, G., Samatova, N. and Geist, A. (2002). Principal component analysis for dimension reduction in massive distributed data sets. In IEEE International Conference on Data Mining (ICDM).
  • Reiss, M. and Wahl, M. (2016). Non-asymptotic upper bounds for the reconstruction error of PCA. Preprint. Available at arXiv:1609.03779.
  • Schizas, I. D. and Aduroja, A. (2015). A distributed framework for dimensionality reduction and denoising. IEEE Trans. Signal Process. 63 6379–6394.
  • Shen, D., Shen, H. and Marron, J. S. (2013). Consistency of sparse PCA in high dimension, low sample size contexts. J. Multivariate Anal. 115 317–333.
  • Shen, D., Shen, H., Zhu, H. and Marron, J. S. (2016). The statistics and mathematics of high dimension low sample size asymptotics. Statist. Sinica 26 1747–1770.
  • Tropp, J. A., Yurtsever, A., Udell, M. and Cevher, V. (2016). Randomized single-view algorithms for low-rank matrix approximation. Preprint. Available at arXiv:1609.00048.
  • Vaccaro, R. J. (1994). A second-order perturbation expansion for the SVD. SIAM J. Matrix Anal. Appl. 15 661–671.
  • Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing 210–268. Cambridge Univ. Press, Cambridge.
  • Vu, V. Q. and Lei, J. (2013). Minimax sparse principal subspace estimation in high dimensions. Ann. Statist. 41 2905–2947.
  • Wang, R. (2015). Singular vector perturbation under Gaussian noise. SIAM J. Matrix Anal. Appl. 36 158–177.
  • Wang, W. and Fan, J. (2017). Asymptotics of empirical eigenstructure for high dimensional spiked covariance. Ann. Statist. 45 1342–1374.
  • Wei, X. and Minsker, S. (2017). Estimation of the covariance structure of heavy-tailed distributions. In Advances in Neural Information Processing Systems 2855–2864.
  • Xu, Z. (2002). Perturbation analysis for subspace decomposition with applications in subspace-based algorithms. IEEE Trans. Signal Process. 50 2820–2830.
  • Yu, Y., Wang, T. and Samworth, R. J. (2015). A useful variant of the Davis–Kahan theorem for statisticians. Biometrika 102 315–323.
  • Zhang, Y., Duchi, J. C. and Wainwright, M. J. (2013). Divide and conquer kernel ridge regression. In COLT 592–617.

Supplemental materials

  • Supplement to “Distributed estimation of principal eigenspaces”. Proofs of the results in the paper can be found in the Supplementary Material.