The Annals of Applied Probability

Improved bounds for sparse recovery from subsampled random convolutions

Shahar Mendelson, Holger Rauhut, and Rachel Ward

Abstract

We study the recovery of sparse vectors from subsampled random convolutions via $\ell_{1}$-minimization. We consider the setup in which both the subsampling locations and the generating vector are chosen at random. For a sub-Gaussian generator with independent entries, we improve previously known estimates: if the sparsity $s$ is small enough, that is, $s\lesssim\sqrt{n/\log(n)}$, we show that $m\gtrsim s\log(en/s)$ measurements are sufficient to recover $s$-sparse vectors in dimension $n$ with high probability, matching the well-known condition for recovery from standard Gaussian measurements. If $s$ is larger, then essentially $m\geq s\log^{2}(s)\log(\log(s))\log(n)$ measurements are sufficient, again improving over previous estimates. Our results are shown via the so-called robust null space property, which is weaker than the standard restricted isometry property. Our method of proof involves a novel combination of small ball estimates with chaining techniques, which should be of independent interest.
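
As a quick numerical illustration of the measurement model described above (not part of the article, and not the authors' code), the following Python sketch builds a partial random circulant matrix from a Gaussian generator, subsamples roughly $m \approx s\log(en/s)$ rows at random locations, and recovers an $s$-sparse vector by basis pursuit ($\ell_{1}$-minimization) recast as a linear program. The constant in the choice of $m$, the dimensions, and the use of scipy.optimize.linprog are illustrative assumptions, not prescriptions from the paper.

    # Minimal sketch: sparse recovery from a subsampled random convolution
    # via l1-minimization (basis pursuit), solved as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, s = 256, 5                              # ambient dimension, sparsity
    m = int(4 * s * np.log(np.e * n / s))      # m ~ s log(en/s), constant chosen for illustration

    # Sub-Gaussian (here Gaussian) generator and the circulant matrix it induces
    g = rng.standard_normal(n)
    C = np.array([np.roll(g, k) for k in range(n)])   # each row is a cyclic shift of g
    Omega = rng.choice(n, size=m, replace=False)      # random subsampling locations
    A = C[Omega, :] / np.sqrt(m)                      # partial random circulant matrix

    # s-sparse ground truth and its measurements
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    y = A @ x

    # Basis pursuit: min ||z||_1 s.t. A z = y, written as an LP in (z_plus, z_minus)
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    x_hat = res.x[:n] - res.x[n:]
    print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With these sample sizes the relative error is typically at machine-precision level, consistent with the regime $s \lesssim \sqrt{n/\log(n)}$ and $m \gtrsim s\log(en/s)$ treated in the paper.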

Article information

Source
Ann. Appl. Probab., Volume 28, Number 6 (2018), 3491-3527.

Dates
Received: October 2016
Revised: March 2018
First available in Project Euclid: 8 October 2018

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1538985627

Digital Object Identifier
doi:10.1214/18-AAP1391

Mathematical Reviews number (MathSciNet)
MR3861818

Zentralblatt MATH identifier
06994398

Subjects
Primary: 94A20: Sampling theory
Secondary: 60B20: Random matrices (probabilistic aspects; for algebraic aspects see 15B52)

Keywords
Circulant matrix; compressive sensing; generic chaining; small ball estimates; sparsity

Citation

Mendelson, Shahar; Rauhut, Holger; Ward, Rachel. Improved bounds for sparse recovery from subsampled random convolutions. Ann. Appl. Probab. 28 (2018), no. 6, 3491--3527. doi:10.1214/18-AAP1391. https://projecteuclid.org/euclid.aoap/1538985627

