Scalar Poincaré implies matrix Poincaré

We prove that every reversible Markov semigroup which satisfies a Poincaré inequality satisfies a matrix-valued Poincaré inequality for Hermitian d × d matrix-valued functions, with the same Poincaré constant. This generalizes recent results [ABY19, Kat20] establishing such inequalities for specific semigroups and consequently yields new matrix concentration inequalities. The short proof follows from the spectral theory of Markov semigroup generators.


Introduction
There is a long tradition in probability theory (see e.g. [GM83, Led99]) of using functional inequalities on a probability space (Ω, Σ, P) to derive concentration inequalities for nice (e.g. Lipschitz) functions f : Ω → R on that space. The most basic of these inequalities is the Poincaré inequality, which postulates that

(1.1)    Var(f) ≤ α E(f, f)

for an appropriately large class of f, where E(·, ·) is an appropriate Dirichlet form and α > 0 is the Poincaré constant.
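For a finite reversible Markov chain with generator L, the Dirichlet form is E(f, f) = ⟨f, −Lf⟩_µ, and the optimal constant in (1.1) is the inverse of the smallest nonzero eigenvalue of −L. The following minimal numerical sketch (assuming Python with NumPy; the chain below is an arbitrary symmetric, hence reversible, example with uniform stationary measure) checks (1.1) on random functions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build a symmetric doubly stochastic transition matrix P, so the chain
# is reversible with uniform stationary measure mu.
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0.0)
c = S.sum(axis=1).max()
P = S / c + np.diag(1.0 - S.sum(axis=1) / c)

A = np.eye(n) - P           # A = -L, self-adjoint and PSD in L^2(mu)
mu = np.full(n, 1.0 / n)

# Scalar Poincare constant: inverse of the smallest nonzero eigenvalue.
eigs = np.linalg.eigvalsh(A)          # ascending; eigs[0] ~ 0 (constants)
alpha = 1.0 / eigs[1]

for _ in range(100):
    f = rng.standard_normal(n)
    var = mu @ (f - mu @ f) ** 2      # Var(f) under mu
    dirichlet = mu @ (f * (A @ f))    # E(f, f) = <f, Af>_mu
    assert var <= alpha * dirichlet + 1e-10   # inequality (1.1)
```

By the spectral decomposition of A on mean-zero functions, equality in (1.1) is attained at the eigenfunction of the smallest nonzero eigenvalue, so the constant `alpha` above cannot be improved.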
Recently there has been growing interest in extending this phenomenon to matrix-valued functions [CH16, CHT17, CH19, ABY19, Kat20]. The last two of these works in particular (independently) studied the notion of a matrix Poincaré inequality, in which (1.1) is required to hold for H^d-valued f, E, and Var, with the inequality replaced by the Loewner ordering on H^d, the space of d × d Hermitian matrices. They showed that a matrix Poincaré inequality generically implies concentration bounds in the operator norm similar to those in the scalar case. They then proceeded to prove matrix Poincaré inequalities for several interesting classes of measures (product [ABY19], Gaussian [ABY19], strongly Rayleigh [ABY19, Kat20]) on a case-by-case basis, often mimicking the scalar proofs but requiring significant additional work to handle the noncommutativity of matrices.
In this note, we show that the second step above can also be made generic, and that matrix Poincaré inequalities follow automatically from their scalar counterparts in the full generality of arbitrary reversible Markov semigroups (see the excellent book [BGL13] for a detailed introduction).
Let L²(Ω, µ) be a separable complex Hilbert space, and let C^{d×d} be the Hilbert space of complex d × d matrices with the Hilbert–Schmidt inner product. To state our theorem, we define a "matrix-valued inner product" ⟨·, ·⟩_d on the Hilbert space tensor product L²(Ω, µ) ⊗ C^{d×d}, determined on simple tensors by ⟨f ⊗ M, g ⊗ N⟩_d := ⟨f, g⟩ M*N. Let (X_t)_{t≥0} be a reversible Markov process on a probability space (Ω, Σ, P) with stationary measure µ and densely defined self-adjoint infinitesimal generator L.

Theorem 1.1. If the semigroup generated by L satisfies the scalar Poincaré inequality (1.1) with constant α, then it satisfies the matrix Poincaré inequality with the same constant: Var(f) ⪯ α E(f, f) in the Loewner order, for every H^d-valued f in the domain of the corresponding Dirichlet form.

The proof of Theorem 1.1 relies on the spectral theorem for unbounded self-adjoint operators on a complex separable Hilbert space. The only property of Markov generators that is used is self-adjointness on an appropriate domain orthogonal to the constant function. Before presenting this proof in Section 3, we give an elementary linear algebraic proof of the finite-dimensional case in Section 2, which is already enough for several important applications (such as all finite reversible Markov chains and strongly log-concave measures) and avoids any analytic subtleties.

Remark. Theorem 1.1 was recently independently observed in [HT20, Proposition 2.3], where it was credited to Ramon van Handel.
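The simple-tensor identity ⟨f ⊗ M, g ⊗ N⟩_d = ⟨f, g⟩ M*N can be checked numerically. In the sketch below (an illustration assuming Python with NumPy; the probability vector `mu` and dimensions are arbitrary choices), elements of L²(Ω, µ) ⊗ C^{d×d} over a finite Ω are stored as arrays of shape (n, d, d), i.e. as C^{d×d}-valued functions on Ω:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3
mu = rng.random(n)
mu /= mu.sum()                       # arbitrary probability measure on Omega

def ip_d(f, g):
    # <f, g>_d = sum_x mu(x) f(x)^* g(x), a d x d matrix
    return sum(mu[x] * f[x].conj().T @ g[x] for x in range(n))

# Simple tensors u ⊗ M and v ⊗ N
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
N = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
f = np.einsum('x,ij->xij', u, M)
g = np.einsum('x,ij->xij', v, N)

# <u, v> with conjugation on the first argument
scalar_ip = np.sum(mu * u.conj() * v)

# The defining identity: <u ⊗ M, v ⊗ N>_d = <u, v> M* N
assert np.allclose(ip_d(f, g), scalar_ip * M.conj().T @ N)
```

Note that ⟨f, f⟩_d is always positive semidefinite, which is what makes Loewner-order statements like Theorem 1.1 meaningful for this inner product.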
Here we prove Theorem 1.1 when Ω is finite with |Ω| = n. Let H = {f : Ω → C : E_µ f = 0}, and let A : H → H be the operator A := −L. Consider an orthonormal eigenbasis g_1, …, g_{n−1} of A (with respect to the inner product ⟨f, g⟩ = Σ_{x∈Ω} µ(x) f(x)* g(x)), and let λ_i be the eigenvalue corresponding to g_i. By the assumption of Theorem 1.1, λ_i ≥ 1/α for all i ∈ [n − 1].

Now consider any f ∈ H ⊗ C^{d×d}, which we may expand in the eigenbasis as f = Σ_{i=1}^{n−1} g_i ⊗ M_i with M_i ∈ C^{d×d}. Notice that ⟨·, ·⟩_d is bilinear and that ⟨f ⊗ M, g ⊗ N⟩_d = ⟨f, g⟩ M*N for f, g ∈ H and M, N ∈ C^{d×d}. Applying these facts,

⟨f, (A ⊗ I)f⟩_d = Σ_{i=1}^{n−1} λ_i M_i* M_i ⪰ (1/α) Σ_{i=1}^{n−1} M_i* M_i = (1/α) ⟨f, f⟩_d,

which is exactly the matrix Poincaré inequality for f.

For the general case, observe that for any f, g ∈ H ⊗ C^{d×d} and any v ∈ C^d,

v* ⟨f, g⟩_d v = ⟨f v, g v⟩,

where f v, g v ∈ H ⊗ C^d. Applying the spectral theorem for unbounded operators (e.g., [RS80, Theorem VIII.6]) and noting that by our assumption the spectrum of A ⊗ I_{C^d} is contained in [c, ∞) for c = 1/α, we obtain that for some projection-valued measure {E_λ}_{λ∈[c,∞)}:

A ⊗ I_{C^d} = ∫_{[c,∞)} λ dE_λ.
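The finite-dimensional argument above can be sanity-checked numerically: build a reversible chain, compute α from the scalar spectral gap, and verify the Loewner inequality Var(f) ⪯ α E(f, f) for random Hermitian-matrix-valued f. A sketch assuming Python with NumPy (the chain construction and dimensions are arbitrary illustrative choices; µ is uniform):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 5, 3

# Reversible chain: P symmetric doubly stochastic, so mu is uniform
# and A = -L = I - P is self-adjoint in L^2(mu).
S = rng.random((n, n))
S = (S + S.T) / 2
np.fill_diagonal(S, 0.0)
c = S.sum(axis=1).max()
P = S / c + np.diag(1.0 - S.sum(axis=1) / c)
A = np.eye(n) - P
mu = np.full(n, 1.0 / n)

# Scalar Poincare constant: inverse spectral gap of A.
alpha = 1.0 / np.linalg.eigvalsh(A)[1]

gaps = []
for _ in range(50):
    # Random Hermitian-matrix-valued f : Omega -> H^d, stored as (n, d, d).
    F = rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d))
    F = F + F.conj().transpose(0, 2, 1)
    G = F - np.einsum('x,xij->ij', mu, F)               # subtract the mean
    AG = np.einsum('xy,yij->xij', A, G)                 # (A ⊗ I) applied to G
    Var = np.einsum('x,xji,xjk->ik', mu, G.conj(), G)   # E[(f-Ef)^*(f-Ef)]
    Dir = np.einsum('x,xji,xjk->ik', mu, G.conj(), AG)  # <f, (A ⊗ I)f>_d
    # Matrix Poincare with the scalar constant: alpha*Dir - Var should be PSD.
    gaps.append(np.linalg.eigvalsh(alpha * Dir - Var).min())

assert min(gaps) >= -1e-8
```

Expanding G in the eigenbasis of A, as in the proof, shows that alpha * Dir − Var equals Σ_i (α λ_i − 1) M_i* M_i, a sum of PSD matrices, so the smallest eigenvalue computed above is nonnegative up to floating-point error.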