The Annals of Statistics

Robust low-rank matrix estimation

Andreas Elsener and Sara van de Geer


Abstract

Many results have been proved for various nuclear norm penalized estimators of the uniform sampling matrix completion problem. However, most of these estimators are not robust: in most cases the quadratic loss function or one of its modifications is used. We consider robust nuclear norm penalized estimators using two well-known robust loss functions: the absolute value loss and the Huber loss. Under several conditions on the sparsity of the problem (i.e., the rank of the parameter matrix) and on the regularity of the risk function, sharp and nonsharp oracle inequalities for these estimators are shown to hold with high probability. As a consequence, the asymptotic behavior of the estimators is derived. Similar error bounds are obtained under the assumption of weak sparsity, that is, when the matrix is assumed to be only approximately low-rank. All of our results are stated in a high-dimensional setting, which here means that we assume $n\leq pq$. Finally, various simulations confirm our theoretical results.
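For concreteness, the estimators described above can be sketched in the standard trace regression formulation of matrix completion. The display below is an illustrative summary under commonly used notation (the observations $(X_i, Y_i)$ with mask matrices $X_i$, tuning parameter $\lambda$, and Huber threshold $\kappa$ are assumed symbols, not quoted from the paper):

$$\hat{B} \in \operatorname*{arg\,min}_{B \in \mathbb{R}^{p\times q}} \; \frac{1}{n}\sum_{i=1}^{n} \rho\bigl(Y_i - \operatorname{tr}(X_i^{\top} B)\bigr) + \lambda \|B\|_{*},$$

where $\rho(z) = |z|$ gives the absolute value loss, while the Huber loss with threshold $\kappa > 0$ is

$$\rho_{\kappa}(z) = \begin{cases} z^{2}/2, & |z| \le \kappa,\\ \kappa |z| - \kappa^{2}/2, & |z| > \kappa, \end{cases}$$

and $\|B\|_{*}$ denotes the nuclear norm of $B$, i.e., the sum of its singular values.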

Article information

Source
Ann. Statist., Volume 46, Number 6B (2018), 3481–3509.

Dates
Received: May 2016
Revised: October 2017
First available in Project Euclid: 11 September 2018

Permanent link to this document
https://projecteuclid.org/euclid.aos/1536631281

Digital Object Identifier
doi:10.1214/17-AOS1666

Mathematical Reviews number (MathSciNet)
MR3852659

Zentralblatt MATH identifier
06965695

Subjects
Primary: 62J05: Linear regression; 62F30: Inference under constraints
Secondary: 62H12: Estimation

Keywords
Matrix completion; robustness; empirical risk minimization; oracle inequality; nuclear norm; sparsity

Citation

Elsener, Andreas; van de Geer, Sara. Robust low-rank matrix estimation. Ann. Statist. 46 (2018), no. 6B, 3481–3509. doi:10.1214/17-AOS1666. https://projecteuclid.org/euclid.aos/1536631281



References

  • Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, Heidelberg.
  • Cambier, L. and Absil, P.-A. (2016). Robust low-rank matrix completion by Riemannian optimization. SIAM J. Sci. Comput. 38 S440–S460.
  • Candès, E. J. and Plan, Y. (2010). Matrix completion with noise. Proc. IEEE 98 925–936.
  • Candès, E. J., Li, X., Ma, Y. and Wright, J. (2011). Robust principal component analysis? J. ACM 58 11.
  • Chandrasekaran, V., Sanghavi, S., Parrilo, P. A. and Willsky, A. S. (2011). Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim. 21 572–596.
  • Chen, Y., Jalali, A., Sanghavi, S. and Caramanis, C. (2013). Low-rank matrix recovery from errors and erasures. IEEE Trans. Inform. Theory 59 4324–4337.
  • Cherapanamjeri, Y., Gupta, K. and Jain, P. (2016). Nearly-optimal robust matrix completion. Preprint. Available at arXiv:1606.07315.
  • CVX Research Inc. (2012). CVX: Matlab Software for Disciplined Convex Programming, version 2.0. Available at http://cvxr.com/cvx.
  • Elsener, A. and van de Geer, S. (2018). Supplement to “Robust low-rank matrix estimation.” DOI:10.1214/17-AOS1666SUPP.
  • Foygel, R., Shamir, O., Srebro, N. and Salakhutdinov, R. R. (2011). Learning with the weighted trace-norm under arbitrary sampling distributions. Adv. Neural Inf. Process. Syst. 24 2133–2141.
  • Klopp, O. (2014). Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20 282–303.
  • Klopp, O., Lounici, K. and Tsybakov, A. B. (2016). Robust matrix completion. Probab. Theory Related Fields 1–42.
  • Koltchinskii, V., Lounici, K. and Tsybakov, A. B. (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39 2302–2329.
  • Lafond, J. (2015). Low rank matrix completion with exponential family noise. In Proceedings of the 28th Conference on Learning Theory (COLT 2015). J. Mach. Learn. Res. Workshop and Conference Proceedings 40 1–20.
  • Li, X. (2013). Compressed sensing and matrix completion with constant proportion of corruptions. Constr. Approx. 37 73–99.
  • Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Statist. 39 1069–1097.
  • Negahban, S. and Wainwright, M. J. (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Mach. Learn. Res. 13 1665–1697.
  • Rohde, A. and Tsybakov, A. B. (2011). Estimation of high-dimensional low-rank matrices. Ann. Statist. 39 887–930.
  • Srebro, N., Rennie, J. and Jaakkola, T. S. (2004). Maximum-margin matrix factorization. In Proceedings of the NIPS Conference 1329–1336. Vancouver.
  • Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Learning Theory 545–560. Springer, Berlin.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
  • van de Geer, S. (2001). Least squares estimation with complexity penalties. Math. Methods Statist. 10 355–374.
  • van de Geer, S. (2016). Estimation and Testing Under Sparsity: École d’Été de Probabilités de Saint-Flour XLV-2015. Springer, Berlin.

Supplemental materials

  • Supplement to “Robust low-rank matrix estimation”. The supplemental material contains an application to real data sets, the proofs of the lemmas in Section 2, and a section bounding the empirical process part of the estimation problem.