Convex regularization for high-dimensional multiresponse tensor regression
Garvesh Raskutti, Ming Yuan, Han Chen
Ann. Statist. 47(3): 1554-1584 (June 2019). DOI: 10.1214/18-AOS1725


In this paper, we present a general convex optimization approach to high-dimensional multiresponse tensor regression problems under low-dimensional structural assumptions. We consider convex and weakly decomposable regularizers, assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds for the resulting estimates under fairly general dependence structures among the covariates. These upper bounds are expressed in terms of two simple quantities: the Gaussian width of a convex set in tensor space and the intrinsic dimension of the low-dimensional tensor subspace. To the best of our knowledge, this is the first general framework that applies to multiple response problems. The bounds yield rates of convergence for a number of fundamental statistical models of interest, including multiresponse regression, vector autoregressive models, low-rank tensor models and pairwise interaction models. Moreover, in many of these settings we prove that the resulting estimates are minimax optimal. We also provide a numerical study that both validates our theoretical guarantees and demonstrates the breadth of our framework.
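As a concrete illustration of the kind of convex program the abstract describes, the sketch below fits a multiresponse regression with a nuclear-norm penalty (a standard example of a weakly decomposable regularizer inducing low rank) via proximal gradient descent. The solver, step size, penalty level, and simulated data are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_regression(X, Y, lam, n_iter=500):
    """Minimize 0.5 * ||Y - X B||_F^2 + lam * ||B||_* by proximal gradient."""
    p, q = X.shape[1], Y.shape[1]
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L with L = sigma_max(X)^2
    B = np.zeros((p, q))
    for _ in range(n_iter):
        grad = X.T @ (X @ B - Y)      # gradient of the squared-error term
        B = svt(B - step * grad, step * lam)  # proximal (shrinkage) step
    return B

# Simulated low-rank ground truth (hypothetical sizes, for illustration only)
rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 8, 2
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))
B_hat = nuclear_norm_regression(X, Y, lam=1.0)
```

The regression-coefficient matrix here is a 2-mode tensor; the paper's framework covers higher-order tensor coefficients and other decomposable penalties in the same way, with the proximal step replaced by the corresponding regularizer's proximal map.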




Received: 1 April 2017; Revised: 1 May 2018; Published: June 2019
First available in Project Euclid: 13 February 2019

zbMATH: 07053518
MathSciNet: MR3911122
Digital Object Identifier: 10.1214/18-AOS1725

Primary: 60K35

Rights: Copyright © 2019 Institute of Mathematical Statistics


