The Annals of Statistics

Cross: Efficient low-rank tensor completion

Anru Zhang



The completion of tensors, or high-order arrays, has attracted significant attention in recent research. The current literature on tensor completion primarily focuses on recovery from a set of uniformly randomly measured entries, and the required number of measurements to achieve recovery is not guaranteed to be optimal. In addition, the implementation of some previous methods is NP-hard. In this article, we propose a framework for low-rank tensor completion via a novel tensor measurement scheme that we name Cross. The proposed procedure is efficient and easy to implement. In particular, we show that a third-order tensor of Tucker rank $(r_{1},r_{2},r_{3})$ in $p_{1}$-by-$p_{2}$-by-$p_{3}$ dimensional space can be recovered from as few as $r_{1}r_{2}r_{3}+r_{1}(p_{1}-r_{1})+r_{2}(p_{2}-r_{2})+r_{3}(p_{3}-r_{3})$ noiseless measurements, which matches the sample complexity lower bound. In the case of noisy measurements, we also develop a theoretical upper bound and the matching minimax lower bound for the recovery error of the proposed procedure over certain classes of low-rank tensors. The results extend further to fourth- and higher-order tensors. Simulation studies show that the method performs well under a variety of settings. Finally, the procedure is illustrated through a real dataset in neuroimaging.
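As a quick numerical illustration of the sample-complexity count in the abstract, the following Python sketch (illustrative only, not the paper's implementation; the function names `tucker_random` and `cross_sample_size` are ours) builds a random tensor of Tucker rank $(r_1,r_2,r_3)$ via mode products, verifies its multilinear ranks through matricizations, and compares the number of Cross measurements $r_1r_2r_3+\sum_k r_k(p_k-r_k)$ with the total number of entries $p_1p_2p_3$.

```python
import numpy as np

def tucker_random(p, r, seed=0):
    """Random p1 x p2 x p3 tensor of Tucker rank at most (r1, r2, r3)."""
    rng = np.random.default_rng(seed)
    core = rng.standard_normal(r)                                  # r1 x r2 x r3 core
    U = [rng.standard_normal((pk, rk)) for pk, rk in zip(p, r)]    # loading matrices
    # X = core x_1 U1 x_2 U2 x_3 U3, written as a single contraction.
    return np.einsum("abc,ia,jb,kc->ijk", core, U[0], U[1], U[2])

def cross_sample_size(p, r):
    """Number of noiseless Cross measurements: r1*r2*r3 + sum_k r_k*(p_k - r_k)."""
    return int(np.prod(r)) + sum(rk * (pk - rk) for pk, rk in zip(p, r))

p, r = (50, 60, 70), (3, 4, 5)
X = tucker_random(p, r)

# The Tucker (multilinear) ranks are the ranks of the three matricizations.
ranks = tuple(
    np.linalg.matrix_rank(np.moveaxis(X, k, 0).reshape(p[k], -1)) for k in range(3)
)
print(ranks)                    # (3, 4, 5)
print(cross_sample_size(p, r))  # 60 + 3*47 + 4*56 + 5*65 = 750
print(np.prod(p))               # 210000 entries in total
```

For these dimensions, 750 measurements suffice for exact noiseless recovery, versus 210,000 entries in the full tensor, which is the order-of-magnitude saving the Cross scheme targets.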

Article information

Ann. Statist., Volume 47, Number 2 (2019), 936-964.

Received: November 2016
Revised: November 2017
First available in Project Euclid: 11 January 2019


Primary: 62H12: Estimation
Secondary: 62C20: Minimax procedures

Keywords: Cross tensor measurement; denoising; minimax rate-optimal; neuroimaging; tensor completion


Zhang, Anru. Cross: Efficient low-rank tensor completion. Ann. Statist. 47 (2019), no. 2, 936--964. doi:10.1214/18-AOS1694.



References

  • Agarwal, A., Negahban, S. and Wainwright, M. J. (2012). Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Ann. Statist. 40 1171–1197.
  • Barak, B. and Moitra, A. (2016). Noisy tensor completion via the sum-of-squares hierarchy. In 29th Annual Conference on Learning Theory 417–445.
  • Bhojanapalli, S. and Sanghavi, S. (2015). A new sampling technique for tensors. Preprint. Available at arXiv:1502.05023.
  • Cai, T., Cai, T. T. and Zhang, A. (2016). Structured matrix completion with applications to genomic data integration. J. Amer. Statist. Assoc. 111 621–633.
  • Cai, T. T. and Zhou, W.-X. (2016). Matrix completion via max-norm constrained optimization. Electron. J. Stat. 10 1493–1525.
  • Caiafa, C. F. and Cichocki, A. (2010). Generalizing the column-row matrix decomposition to multi-way arrays. Linear Algebra Appl. 433 557–573.
  • Caiafa, C. F. and Cichocki, A. (2015). Stable, robust, and super fast reconstruction of tensors using multi-way projections. IEEE Trans. Signal Process. 63 780–793.
  • Candès, E. J. and Plan, Y. (2011). Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. Inform. Theory 57 2342–2359.
  • Candès, E. J. and Tao, T. (2010). The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory 56 2053–2080.
  • Cao, Y. and Xie, Y. (2016). Poisson matrix recovery and completion. IEEE Trans. Signal Process. 64 1609–1620.
  • Cao, Y., Zhang, A. and Li, H. (2017). Multi-sample Estimation of Bacterial Composition Matrix in Metagenomics Data. Preprint. Available at arXiv:1706.02380.
  • Gandy, S., Recht, B. and Yamada, I. (2011). Tensor completion and low-$n$-rank tensor recovery via convex optimization. Inverse Probl. 27 025010.
  • Guhaniyogi, R., Qamar, S. and Dunson, D. B. (2017). Bayesian tensor regression. J. Mach. Learn. Res. 18 79.
  • Hillar, C. J. and Lim, L.-H. (2013). Most tensor problems are NP-hard. J. ACM 60 45.
  • Jain, P. and Oh, S. (2014). Provable tensor factorization with missing data. In Advances in Neural Information Processing Systems 1 1431–1439. MIT Press, Cambridge, MA.
  • Jiang, X., Raskutti, G. and Willett, R. (2015). Minimax optimal rates for Poisson inverse problems with physical constraints. IEEE Trans. Inform. Theory 61 4458–4474.
  • Johndrow, J. E., Bhattacharya, A. and Dunson, D. B. (2017). Tensor decompositions and sparse log-linear models. Ann. Statist. 45 1–38.
  • Karatzoglou, A., Amatriain, X., Baltrunas, L. and Oliver, N. (2010). Multiverse recommendation: N-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the Fourth ACM Conference on Recommender Systems 79–86. ACM, New York.
  • Keshavan, R. H., Montanari, A. and Oh, S. (2010). Matrix completion from a few entries. IEEE Trans. Inform. Theory 56 2980–2998.
  • Klopp, O. (2014). Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20 282–303.
  • Kolda, T. G. and Bader, B. W. (2009). Tensor decompositions and applications. SIAM Rev. 51 455–500.
  • Koltchinskii, V., Lounici, K. and Tsybakov, A. B. (2011). Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist. 39 2302–2329.
  • Kressner, D., Steinlechner, M. and Vandereycken, B. (2014). Low-rank tensor completion by Riemannian optimization. BIT 54 447–468.
  • Krishnamurthy, A. and Singh, A. (2013). Low-rank matrix and tensor completion via adaptive sampling. In Advances in Neural Information Processing Systems 836–844.
  • Li, N. and Li, B. (2010). Tensor completion for on-board compression of hyperspectral images. In 2010 IEEE International Conference on Image Processing 517–520. IEEE, New York.
  • Li, L. and Zhang, X. (2017). Parsimonious tensor response regression. J. Amer. Statist. Assoc. 112 1131–1146.
  • Li, X., Zhou, H. and Li, L. (2013). Tucker tensor regression and neuroimaging analysis. Preprint. Available at arXiv:1304.5637.
  • Li, L., Chen, Z., Wang, G., Chu, J. and Gao, H. (2014). A tensor PRISM algorithm for multi-energy CT reconstruction and comparative studies. J. X-Ray Sci. Technol. 22 147–163.
  • Liu, J., Musialski, P., Wonka, P. and Ye, J. (2013). Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35 208–220.
  • Mahoney, M. W., Maggioni, M. and Drineas, P. (2008). Tensor-CUR decompositions for tensor-based data. SIAM J. Matrix Anal. Appl. 30 957–987.
  • Mu, C., Huang, B., Wright, J. and Goldfarb, D. (2014). Square deal: Lower bounds and improved relaxations for tensor recovery. In ICML 73–81.
  • Negahban, S. and Wainwright, M. J. (2011). Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Statist. 39 1069–1097.
  • Negahban, S. and Wainwright, M. J. (2012). Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Mach. Learn. Res. 13 1665–1697.
  • Nowak, R. D. and Kolaczyk, E. D. (2000). A statistical multiscale framework for Poisson inverse problems. IEEE Trans. Inform. Theory 46 1811–1825.
  • Oseledets, I. V., Savostianov, D. V. and Tyrtyshnikov, E. E. (2008). Tucker dimensionality reduction of three-dimensional arrays in linear time. SIAM J. Matrix Anal. Appl. 30 939–956.
  • Oseledets, I. V. and Tyrtyshnikov, E. E. (2009). Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31 3744–3759.
  • Pimentel-Alarcón, D. L., Boston, N. and Nowak, R. D. (2016). A characterization of deterministic sampling patterns for low-rank matrix completion. IEEE J. Sel. Top. Signal Process. 10 623–636.
  • Raskutti, G., Yuan, M. and Chen, H. (2017). Convex regularization for high-dimensional multi-response tensor regression. Preprint. Available at arXiv:1512.01215v2.
  • Rauhut, H., Schneider, R. and Stojanac, Ž. (2017). Low rank tensor recovery via iterative hard thresholding. Linear Algebra Appl. 523 220–262.
  • Recht, B. (2011). A simpler approach to matrix completion. J. Mach. Learn. Res. 12 3413–3430.
  • Rendle, S. and Schmidt-Thieme, L. (2010). Pairwise interaction tensor factorization for personalized tag recommendation. In Proceedings of the Third ACM International Conference on Web Search and Data Mining 81–90. ACM, New York.
  • Richard, E. and Montanari, A. (2014). A statistical model for tensor PCA. In Advances in Neural Information Processing Systems 2897–2905.
  • Rohde, A. and Tsybakov, A. B. (2011). Estimation of high-dimensional low-rank matrices. Ann. Statist. 39 887–930.
  • Rudelson, M. and Vershynin, R. (2007). Sampling from large matrices: An approach through geometric functional analysis. J. ACM 54 21.
  • Semerci, O., Hao, N., Kilmer, M. E. and Miller, E. L. (2014). Tensor-based formulation and nuclear norm regularization for multienergy computed tomography. IEEE Trans. Image Process. 23 1678–1693.
  • Shah, P., Rao, N. and Tang, G. (2015). Optimal low-rank tensor recovery from separable measurements: Four contractions suffice. Preprint. Available at arXiv:1505.04085.
  • Srebro, N. and Shraibman, A. (2005). Rank, trace-norm and max-norm. In Learning Theory. Lecture Notes in Computer Science 3559 545–560. Springer, Berlin.
  • Sun, W. W. and Li, L. (2016). Sparse low-rank tensor response regression. Preprint. Available at arXiv:1609.04523.
  • Sun, W. W., Lu, J., Liu, H. and Cheng, G. (2017). Provable sparse tensor decomposition. J. R. Stat. Soc. Ser. B. Stat. Methodol. 79 899–916.
  • Tucker, L. R. (1966). Some mathematical notes on three-mode factor analysis. Psychometrika 31 279–311.
  • Wagner, A. and Zuk, O. (2015). Low-rank matrix recovery from row-and-column affine measurements. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15) 2012–2020.
  • Wang, Y. and Singh, A. (2015). Provably correct algorithms for matrix column subset selection with selectively sampled data. Preprint. Available at arXiv:1505.04343.
  • Wetzstein, G., Lanman, D., Hirsch, M. and Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Trans. Graph. 31 80.
  • Yuan, M. and Zhang, C.-H. (2016). On tensor completion via nuclear norm minimization. Found. Comput. Math. 16 1031–1068.
  • Yuan, M. and Zhang, C.-H. (2017). Incoherent tensor norms and their applications in higher order tensor completion. IEEE Trans. Inform. Theory 63 6753–6766.
  • Zhang, A. (2019). Supplement to “Cross: Efficient low-rank tensor completion.” DOI:10.1214/18-AOS1694SUPP.
  • Zhang, A. and Xia, D. (2017). Tensor SVD: Statistical and computational limits. IEEE Trans. Inform. Theory 64 7311–7338.
  • Zhou, H., Li, L. and Zhu, H. (2013). Tensor regression with applications in neuroimaging data analysis. J. Amer. Statist. Assoc. 108 540–552.

Supplemental materials

  • Supplement to “Cross: Efficient low-rank tensor completion”. In the supplement, we provide proofs for the main results and technical lemmas. For better presentation of the long proof of Theorem 2, we also provide a table of the notation used.