Annals of Statistics

Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion

Vladimir Koltchinskii, Karim Lounici, and Alexandre B. Tsybakov



This paper deals with the trace regression model where n entries or linear combinations of entries of an unknown m1 × m2 matrix A0 corrupted by noise are observed. We propose a new nuclear-norm penalized estimator of A0 and establish a general sharp oracle inequality for this estimator for arbitrary values of n, m1, m2 under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form, and we prove that it satisfies oracle inequalities with faster rates of convergence than in previous work. They are valid, in particular, in the high-dimensional setting m1m2 ≫ n. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix A0, a nonminimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of A0 with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by A0, and the aim is to find the best trace regression model approximating the data. As a by-product, we show that, under the restricted eigenvalue condition, the usual vector Lasso estimator satisfies a sharp oracle inequality (i.e., an oracle inequality with leading constant 1).
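The abstract notes that, for matrix completion, the nuclear-norm penalized estimator admits a simple explicit form: soft-thresholding of the singular values of an unbiased proxy built from the observed entries. The sketch below illustrates this mechanism under assumed details (uniform-at-random entry sampling, the toy rescaling, and the value of the regularization level λ are illustrative choices, not the paper's exact construction or tuning):

```python
import numpy as np

def svd_soft_threshold(M, lam):
    """Soft-threshold the singular values of M at level lam.

    This is the proximal operator of the nuclear norm: the closed-form
    minimizer of 0.5 * ||A - M||_F^2 + lam * ||A||_* over matrices A.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)  # shrink singular values toward zero
    return U @ np.diag(s_thr) @ Vt

# Toy completion problem: observe a random fraction of the entries of a
# rank-1 matrix with noise, rescale to an unbiased proxy for A0, then
# soft-threshold the proxy's singular values.
rng = np.random.default_rng(0)
m1, m2, p = 50, 40, 0.5
A0 = np.outer(rng.standard_normal(m1), rng.standard_normal(m2))
mask = rng.random((m1, m2)) < p
Y = np.where(mask, A0 + 0.1 * rng.standard_normal((m1, m2)), 0.0)
X = Y / p                      # unbiased for A0 under uniform sampling
A_hat = svd_soft_threshold(X, lam=5.0)
```

The thresholding step kills small singular values, so the estimate is low-rank; the paper's theory concerns how to choose the threshold and the resulting rates of convergence.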

Article information

Ann. Statist., Volume 39, Number 5 (2011), 2302-2329.

First available in Project Euclid: 30 November 2011


Primary: 62J99 (None of the above, but in this section); 62H12 (Estimation)
Secondary: 60B20 (Random matrices, probabilistic aspects; for algebraic aspects see 15B52); 60G15 (Gaussian processes)

Matrix completion; low-rank matrix estimation; recovery of the rank; statistical learning; optimal rate of convergence; noncommutative Bernstein inequality; Lasso


Koltchinskii, Vladimir; Lounici, Karim; Tsybakov, Alexandre B. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39 (2011), no. 5, 2302--2329. doi:10.1214/11-AOS894.



  • [1] Ahlswede, R. and Winter, A. (2002). Strong converse for identification via quantum channels. IEEE Trans. Inform. Theory 48 569–579.
  • [2] Argyriou, A., Evgeniou, T. and Pontil, M. (2008). Convex multi-task feature learning. Machine Learning 73 243–272.
  • [3] Argyriou, A., Micchelli, C. A. and Pontil, M. (2010). On spectral learning. J. Mach. Learn. Res. 11 935–953.
  • [4] Argyriou, A., Micchelli, C. A., Pontil, M. and Ying, Y. (2008). A spectral regularization framework for multi-task structure learning. In Advances in Neural Information Processing Systems (J. C. Platt, D. Koller, Y. Singer and S. Roweis, eds.) 20 25–32. MIT Press, Cambridge, MA.
  • [5] Aubin, J.-P. and Ekeland, I. (1984). Applied Nonlinear Analysis. Wiley, New York.
  • [6] Bach, F. R. (2008). Consistency of trace norm minimization. J. Mach. Learn. Res. 9 1019–1048.
  • [7] Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of lasso and Dantzig selector. Ann. Statist. 37 1705–1732.
  • [8] Bunea, F., She, Y. and Wegkamp, M. H. (2011). Optimal selection of reduced rank estimators of high-dimensional matrices. Ann. Statist. 39 1282–1309.
  • [9] Candès, E. J. and Plan, Y. (2010). Matrix completion with noise. Proc. IEEE 98 925–936.
  • [10] Candès, E. J. and Plan, Y. (2010). Tight oracle bounds for low-rank matrix recovery from a minimal number of noisy random measurements. Available at arXiv:1001.0339.
  • [11] Candès, E. J. and Recht, B. (2009). Exact matrix completion via convex optimization. Found. Comput. Math. 9 717–772.
  • [12] Candès, E. J. and Tao, T. (2010). The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory 56 2053–2080.
  • [13] Gaiffas, S. and Lecué, G. (2010). Sharp oracle inequalities for the prediction of a high-dimensional matrix. Available at arXiv:1008.4886.
  • [14] Giraud, C. (2011). Low rank multivariate regression. Electron. J. Stat. 5 775–799.
  • [15] Gross, D. (2011). Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inform. Theory 57 1548–1566.
  • [16] Keshavan, R. H., Montanari, A. and Oh, S. (2010). Matrix completion from noisy entries. J. Mach. Learn. Res. 11 2057–2078.
  • [17] Koltchinskii, V. (2009). The Dantzig selector and sparsity oracle inequalities. Bernoulli 15 799–828.
  • [18] Koltchinskii, V. (2010). von Neumann entropy penalization and low rank matrix approximation. Available at arXiv:1009.2439.
  • [19] Negahban, S., Ravikumar, P., Wainwright, M. J. and Yu, B. (2010). A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Available at arXiv:1010.2731.
  • [20] Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low rank matrices with noise and high-dimensional scaling. Ann. Statist. 39 1069–1097.
  • [21] Negahban, S. and Wainwright, M. J. (2010). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Available at arXiv:1009.2118.
  • [22] Recht, B. (2009). A simpler approach to matrix completion. Available at arXiv:0910.0651.
  • [23] Recht, B., Fazel, M. and Parrilo, P. A. (2010). Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52 471–501.
  • [24] Rohde, A. and Tsybakov, A. (2011). Estimation of high-dimensional low rank matrices. Ann. Statist. 39 887–930.
  • [25] Stewart, G. W. and Sun, J. G. (1990). Matrix Perturbation Theory. Academic Press, Boston, MA.
  • [26] Tropp, J. A. (2010). User-friendly tail bounds for sums of random matrices. Available at arXiv:1004.4389.
  • [27] Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer, New York.
  • [28] Watson, G. A. (1992). Characterization of the subdifferential of some matrix norms. Linear Algebra Appl. 170 33–45.