Bernoulli, Volume 26, Number 1 (2020), 387-417.

High dimensional deformed rectangular matrices with applications in matrix denoising

Xiucai Ding



We consider the recovery of a low-rank $M\times N$ matrix $S$ from its noisy observation $\tilde{S}$ in the high-dimensional framework where $M$ is comparable to $N$. We propose two efficient estimators for $S$ under two different regimes. Our analysis relies on the local asymptotics of the eigenstructure of large-dimensional rectangular matrices with finite-rank perturbations. We derive the convergence limits and rates for the singular values and singular vectors of such matrices.
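To make the setting concrete, the following is a minimal sketch (not the paper's own estimators) of the classical approach this line of work refines: denoising $\tilde{S}$ by hard-thresholding its singular values. The threshold value and the toy signal below are illustrative assumptions; the paper's contribution is precisely the asymptotic analysis that justifies data-driven choices in the regime where $M$ and $N$ are comparable.

```python
import numpy as np

def denoise_hard_threshold(S_noisy, threshold):
    """Zero out singular values at or below `threshold`; keep the rest."""
    U, sing_vals, Vt = np.linalg.svd(S_noisy, full_matrices=False)
    kept = np.where(sing_vals > threshold, sing_vals, 0.0)
    return (U * kept) @ Vt  # low-rank reconstruction

# Toy example: rank-1 signal plus Gaussian noise, with M comparable to N.
rng = np.random.default_rng(0)
M, N = 200, 300
u = rng.standard_normal(M); u /= np.linalg.norm(u)
v = rng.standard_normal(N); v /= np.linalg.norm(v)
S = 10.0 * np.outer(u, v)                        # rank-1 signal, strength 10
noise = rng.standard_normal((M, N)) / np.sqrt(N)  # entries of variance 1/N
S_tilde = S + noise                               # noisy observation

# The noise bulk's largest singular value is near 1 + sqrt(M/N) ≈ 1.82,
# so a threshold of 3.0 (an assumed, non-optimal choice) separates the spike.
S_hat = denoise_hard_threshold(S_tilde, threshold=3.0)
print(np.linalg.matrix_rank(S_hat))  # → 1
```

The estimation error $\|\hat{S}-S\|_F$ of the thresholded reconstruction is much smaller than $\|\tilde{S}-S\|_F$, since all but one noise direction is discarded. Optimal, data-driven shrinkage of the retained singular values, rather than plain truncation, is what the estimators studied in the article provide.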

Article information


Received: August 2017
Revised: November 2018
First available in Project Euclid: 26 November 2019


Keywords: matrix denoising; random matrices; rotation invariant estimation; singular value decomposition


Ding, Xiucai. High dimensional deformed rectangular matrices with applications in matrix denoising. Bernoulli 26 (2020), no. 1, 387--417. doi:10.3150/19-BEJ1129.




Supplemental materials

  • Supplement to “High dimensional deformed rectangular matrices with applications in matrix denoising”. This supplementary material contains auxiliary lemmas and proofs of Proposition 3.3, Theorems 3.4 and 3.5, Lemmas 4.4, 4.6, 4.8, 5.3, 5.5 and 5.6.