The Annals of Statistics

ROP: Matrix recovery via rank-one projections

T. Tony Cai and Anru Zhang

Abstract

Estimation of low-rank matrices is of significant interest in a range of contemporary applications. In this paper, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically.
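The estimator can be prototyped directly in off-the-shelf convex programming software (the paper's numerical experiments use CVX [22, 23]). The following is a minimal sketch in Python with cvxpy, not the paper's implementation: the Gaussian rank-one design, the problem sizes, and the residual budget lam are illustrative assumptions, and the exact data-fidelity constraint and tuning rule used in the paper may differ.

    import numpy as np
    import cvxpy as cp

    # Illustrative sizes (placeholders, not taken from the paper):
    # a p1 x p2 matrix of rank r observed through n rank-one projections.
    p1, p2, r, n = 30, 30, 2, 600
    rng = np.random.default_rng(0)

    # Ground-truth low-rank matrix and a Gaussian rank-one projection design.
    A0 = rng.standard_normal((p1, r)) @ rng.standard_normal((r, p2))
    beta = rng.standard_normal((n, p1))
    gamma = rng.standard_normal((n, p2))
    z = 0.1 * rng.standard_normal(n)

    # ROP measurements: y_i = beta_i' A0 gamma_i + z_i.
    y = np.einsum('ij,jk,ik->i', beta, A0, gamma) + z

    # Constrained nuclear norm minimization:
    # minimize ||B||_* subject to an l1 budget on the measurement residuals.
    # lam is an oracle-style placeholder; the paper gives a principled choice.
    lam = 1.5 * np.sum(np.abs(z))
    B = cp.Variable((p1, p2))
    residuals = cp.sum(cp.multiply(beta @ B, gamma), axis=1) - y
    problem = cp.Problem(cp.Minimize(cp.normNuc(B)), [cp.norm1(residuals) <= lam])
    problem.solve()

    A_hat = B.value  # recovered matrix; compare with A0 in Frobenius norm
    print(np.linalg.norm(A_hat - A0, 'fro') / np.linalg.norm(A0, 'fro'))

Because the objective is the nuclear norm rather than a rank constraint, the procedure requires no knowledge of the rank, which is the sense in which it is adaptive.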

The techniques and main results developed in the paper also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections.
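For the covariance application, the underlying reduction can be stated in one line (the notation here is ours, not the abstract's): if $X_i \sim N(0, \Sigma)$ is observed only through the one-dimensional projection $\xi_i = \beta_i^\top X_i$ with a known direction $\beta_i$, then $E[\xi_i^2 \mid \beta_i] = \beta_i^\top \Sigma \beta_i$, so the squared observations $y_i = \xi_i^2$ act as noisy rank-one projection measurements of $\Sigma$. Since a spiked covariance matrix is approximately low rank up to its identity component, constrained nuclear norm minimization of the kind sketched above can be applied to these quadratic measurements.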

Article information

Source
Ann. Statist., Volume 43, Number 1 (2015), 102–138.

Dates
First available in Project Euclid: 18 November 2014

Permanent link to this document
https://projecteuclid.org/euclid.aos/1416322038

Digital Object Identifier
doi:10.1214/14-AOS1267

Mathematical Reviews number (MathSciNet)
MR3285602

Zentralblatt MATH identifier
1308.62120

Subjects
Primary: 62H12: Estimation
Secondary: 62H36, 62C20: Minimax procedures

Keywords
Constrained nuclear norm minimization; low-rank matrix recovery; optimal rate of convergence; rank-one projection; restricted uniform boundedness; spiked covariance matrix

Citation

Cai, T. Tony; Zhang, Anru. ROP: Matrix recovery via rank-one projections. Ann. Statist. 43 (2015), no. 1, 102--138. doi:10.1214/14-AOS1267. https://projecteuclid.org/euclid.aos/1416322038


References

  • [1] Alquier, P., Butucea, C., Hebiri, M. and Meziani, K. (2013). Rank penalized estimation of a quantum system. Phys. Rev. A 88 032133.
  • [2] Andrews, H. C. and Patterson, C. L. III (1976). Singular value decomposition (SVD) image coding. IEEE Trans. Commun. 24 425–432.
  • [3] Basri, R. and Jacobs, D. W. (2003). Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 25 218–233.
  • [4] Birnbaum, A., Johnstone, I. M., Nadler, B. and Paul, D. (2013). Minimax bounds for sparse PCA with noisy high-dimensional data. Ann. Statist. 41 1055–1084.
  • [5] Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation. Ann. Statist. 41 3074–3110.
  • [6] Cai, T. T., Ma, Z. and Wu, Y. (2014). Optimal estimation and rank detection for sparse spiked covariance matrices. Probab. Theory Related Fields. To appear.
  • [7] Cai, T. T., Xu, G. and Zhang, J. (2009). On recovery of sparse signals via $\ell_1$ minimization. IEEE Trans. Inform. Theory 55 3388–3397.
  • [8] Cai, T. T. and Zhang, A. (2013). Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. 35 74–93.
  • [9] Cai, T. T. and Zhang, A. (2013). Compressed sensing and affine rank minimization under restricted isometry. IEEE Trans. Signal Process. 61 3279–3290.
  • [10] Cai, T. T. and Zhang, A. (2014). Sparse representation of a polytope and recovery in sparse signals and low-rank matrices. IEEE Trans. Inform. Theory 60 122–132.
  • [11] Cai, T. and Zhang, A. (2014). Supplement to “ROP: Matrix recovery via rank-one projections.” DOI:10.1214/14-AOS1267SUPP.
  • [12] Candès, E. J., Li, X., Ma, Y. and Wright, J. (2011). Robust principal component analysis? J. ACM 58 Art. 11, 37.
  • [13] Candès, E. J. and Plan, Y. (2011). Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. Inform. Theory 57 2342–2359.
  • [14] Candès, E. J. and Recht, B. (2009). Exact matrix completion via convex optimization. Found. Comput. Math. 9 717–772.
  • [15] Candès, E. J., Strohmer, T. and Voroninski, V. (2013). PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math. 66 1241–1274.
  • [16] Candès, E. J. and Tao, T. (2010). The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory 56 2053–2080.
  • [17] Chen, Y., Chi, Y. and Goldsmith, A. (2013). Exact and stable covariance estimation from quadratic sampling via convex programming. Preprint. Available at arXiv:1310.0807.
  • [18] Dasarathy, G., Shah, P., Bhaskar, B. N. and Nowak, R. (2012). Covariance sketching. In 50th Annual Allerton Conference on Communication, Control, and Computing 1026–1033.
  • [19] Dasarathy, G., Shah, P., Bhaskar, B. N. and Nowak, R. (2013). Sketching sparse matrices. Preprint. Available at arXiv:1303.6544.
  • [20] Dvijotham, K. and Fazel, M. (2010). A nullspace analysis of the nuclear norm heuristic for rank minimization. In 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP) 3586–3589.
  • [21] Fan, J., Fan, Y. and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. J. Econometrics 147 186–197.
  • [22] Grant, M. and Boyd, S. (2012). CVX: Matlab software for disciplined convex programming, version 2.0 beta. Available at http://cvxr.com/cvx.
  • [23] Grant, M. C. and Boyd, S. P. (2008). Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control (a tribute to M. Vidyasagar) (V. Blondel et al., eds.). Lecture Notes in Control and Inform. Sci. 371 95–110. Springer, London.
  • [24] Gross, D., Liu, Y. K., Flammia, S. T., Becker, S. and Eisert, J. (2010). Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105 150401–150404.
  • [25] Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist. 29 295–327.
  • [26] Koltchinskii, V., Lounici, K. and Tsybakov, A. B. (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39 2302–2329.
  • [27] Koren, Y., Bell, R. and Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer 42 30–37.
  • [28] Laurent, B. and Massart, P. (2000). Adaptive estimation of a quadratic functional by model selection. Ann. Statist. 28 1302–1338.
  • [29] Nadler, B. (2010). Nonparametric detection of signals by information theoretic criteria: Performance analysis and an improved estimator. IEEE Trans. Signal Process. 58 2746–2756.
  • [30] Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Statist. 39 1069–1097.
  • [31] Oymak, S. and Hassibi, B. (2010). New null space results and recovery thresholds for matrix rank minimization. Preprint. Available at arXiv:1011.6326.
  • [32] Oymak, S., Mohan, K., Fazel, M. and Hassibi, B. (2011). A simplified approach to recovery conditions for low-rank matrices. In Proc. Int. Symp. Information Theory (ISIT) 2318–2322. IEEE, Piscataway, NJ.
  • [33] Patterson, N., Price, A. L. and Reich, D. (2006). Population structure and eigenanalysis. PLoS Genet. 2 e190.
  • [34] Price, A. L., Patterson, N. J., Plenge, R. M., Weinblatt, M. E., Shadick, N. A. and Reich, D. (2006). Principal components analysis corrects for stratification in genome-wide association studies. Nat. Genet. 38 904–909.
  • [35] Recht, B. (2011). A simpler approach to matrix completion. J. Mach. Learn. Res. 12 3413–3430.
  • [36] Recht, B., Fazel, M. and Parrilo, P. A. (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52 471–501.
  • [37] Rohde, A. and Tsybakov, A. B. (2011). Estimation of high-dimensional low-rank matrices. Ann. Statist. 39 887–930.
  • [38] Trosset, M. W. (2000). Distance matrix completion by numerical optimization. Comput. Optim. Appl. 17 11–22.
  • [39] Vershynin, R. (2011). Spectral norm of products of random and deterministic matrices. Probab. Theory Related Fields 150 471–509.
  • [40] Wakin, M., Laska, J., Duarte, M., Baron, D., Sarvotham, S., Takhar, D., Kelly, K. and Baraniuk, R. (2006). An architecture for compressive imaging. In Proceedings of the International Conference on Image Processing (ICIP 2006) 1273–1276.
  • [41] Wang, H. and Li, S. (2013). The bounds of restricted isometry constants for low rank matrices recovery. Sci. China Ser. A 56 1117–1127.
  • [42] Wang, Y. (2013). Asymptotic equivalence of quantum state tomography and noisy matrix completion. Ann. Statist. 41 2462–2504.
  • [43] Wax, M. and Kailath, T. (1985). Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Signal Process. 33 387–392.

Supplemental materials

  • Supplementary material: Supplement to “ROP: Matrix recovery via rank-one projections”. In this supplement we prove the technical lemmas used in the proofs of the main results. The proofs rely on results in [7, 13, 28, 31, 36, 39, 41].