Analytic aspects of the dilation inequality for symmetric convex sets in Euclidean spaces

We discuss an analytic form of the dilation inequality for symmetric convex sets in Euclidean spaces, which is a counterpart of analytic aspects of Cheeger's isoperimetric inequality. We show that the dilation inequality for symmetric convex sets is equivalent to a certain bound of the relative entropy for symmetric quasi-convex functions, which is close to the logarithmic Sobolev inequality or Cram\'{e}r--Rao inequality. As corollaries, we investigate the reverse Shannon inequality, logarithmic Sobolev inequality, Kahane--Khintchine inequality, deviation inequality and isoperimetry. We also give new probability measures satisfying the dilation inequality for symmetric convex sets via bounded perturbations and tensorization.


Introduction
Cheeger's isoperimetric inequality with respect to a probability measure µ on R n is one of the most important geometric inequalities in geometry and geometric analysis. Cheeger [13] and Maz'ya [26,27] showed that Cheeger's isoperimetric inequality gives the spectral gap of the Laplace-Beltrami operator induced by µ. Conversely, Buser [12] (see also Ledoux [24]) proved that the spectral gap, or equivalently the Poincaré inequality, gives Cheeger's isoperimetric inequality. Hence we may naturally regard the Poincaré inequality as an analytic form of Cheeger's isoperimetric inequality. Moreover, Bobkov-Houdré [7] gave an equivalence between Cheeger's isoperimetric inequality and the (1, 1)-Poincaré inequality, and thus the (1, 1)-Poincaré inequality is another analytic aspect of Cheeger's isoperimetric inequality.
Our main goal in this paper is to investigate an analytic aspect of the dilation inequality (1.4), just as the Poincaré inequality and the (1, 1)-Poincaré inequality are analytic forms of Cheeger's isoperimetric inequality. However, the definition of the dilation (1.1) is complicated, and thus as a first step, we focus only on symmetric open convex sets K ⊂ R n (we say that µ(K), t ≥ 1 for any log-concave probability measure µ on R n and symmetric convex set K ⊂ R n . In particular, it follows from Borell's lemma that whenever µ(K) ≥ 2/3, where c, C > 0 are absolute constants. This inequality expresses concentration of measure with respect to dilations, and we can observe the same inequality from (1.4) for log-concave probability measures (see [32, Theorem 4.1]). By using (1.7), various geometric and analytic inequalities can be derived, such as the Kahane-Khintchine inequality in [9,17] (see also [29]) and Cheeger's isoperimetric inequality in [3]. Other applications of (1.7) can be found in [11].
To describe our results in this paper, we introduce some notions. Let Ω ⊂ R n be a symmetric convex domain, and let K n s (Ω) be the set of all nonempty, symmetric open convex sets contained in Ω.
Definition 1.1. A probability measure µ supported on a symmetric convex domain Ω ⊂ R n satisfies the dilation inequality for K n s (Ω) with κ > 0 if We may replace K n s (Ω) by K n s (R n ) in (1.8) since, by the definition (1.1), it holds that (K ∩ Ω) ε ⊂ K ε for any K ∈ K n s (R n ) and ε ∈ (0, 1), and thus µ * (K ∩ Ω) ≤ µ * (K) by µ(K) = µ(K ∩ Ω), associated with K ∩ Ω ∈ K n s (Ω). We also remark that µ may not be symmetric even if its support is symmetric. As we have already mentioned, all log-concave probability measures on Ω (and thus on R n ) satisfy the dilation inequality for K n s (Ω) with κ = 1. In particular, important examples are symmetric log-concave probability measures on R and the standard Gaussian measure dγ n := (2π) −n/2 e −|x| 2 /2 dx on R n . We can observe that these measures satisfy (1.8) with κ = 2 (see Appendix).
Next, we introduce the relative entropy. For a nonnegative Borel function f and a probability measure µ on Ω with Ω f dµ < +∞, we define the relative entropy of f with respect to µ by where we put 0 log 0 := 0. Jensen's inequality implies that the relative entropy is nonnegative, and is 0 if and only if f is constant µ-a.e. on Ω.
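The definition above admits a direct numerical sketch. The following minimal Python snippet (our own illustration, not from the paper; the discrete measure and test functions are arbitrary choices) implements Ent_µ(f) = ∫ f log f dµ − (∫ f dµ) log ∫ f dµ with the convention 0 log 0 := 0, and checks the nonnegativity guaranteed by Jensen's inequality.

```python
import math

def relative_entropy(f_vals, weights):
    """Ent_mu(f) = E[f log f] - E[f] log E[f] for a discrete measure mu,
    with the convention 0 log 0 = 0."""
    def xlogx(t):
        return 0.0 if t == 0.0 else t * math.log(t)
    mean_flogf = sum(w * xlogx(f) for f, w in zip(f_vals, weights))
    mean_f = sum(w * f for f, w in zip(f_vals, weights))
    return mean_flogf - xlogx(mean_f)

# Jensen: the relative entropy is nonnegative, and vanishes for constant f.
mu = [0.25, 0.25, 0.25, 0.25]                       # uniform probability weights
print(relative_entropy([1.0, 1.0, 1.0, 1.0], mu))   # 0.0 for constant f
print(relative_entropy([2.0, 0.0, 1.0, 1.0], mu) > 0)   # True for non-constant f
```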
The following functional inequalities, which are special cases of Theorem 2.4 below, follow from (1.8).
Theorem 1.2. Let µ be a probability measure supported on a symmetric convex domain Ω and let f : Ω → [0, ∞) be a continuous and symmetric function with f ∈ L 1 (µ). We assume that µ satisfies the dilation inequality for K n s (Ω) for some κ > 0. ( x, y dµ(x), (1.9) where ∂f (x) is the subdifferential of f at x ∈ Ω. ( (1.10) In particular, when f is locally Lipschitz and quasi-convex on Ω, we have In addition, when f is C 1 on Ω, then we have (1.12) Here we say that a function f : Ω → R is quasi-convex if the sublevel set {x ∈ Ω | f (x) < λ} is a convex set for any λ ∈ R. In particular, quasi-convexity is a generalization of convexity. An important example is | • | p for p > 0, which is continuous and symmetric quasi-convex on R n and locally Lipschitz on R n \ {0}. In addition, | • | p is locally Lipschitz and convex on R n when p ≥ 1. See Section 2 for more details and other examples of quasi-convex functions.
We emphasize that Theorem 1.2 is a special case of Theorem 2.4, where we will show a more general inequality for functions in a wider class. Moreover, we will confirm in Theorem 4.1 that Theorem 2.4 recovers the dilation inequality (1.8). In this sense, our theorem gives the optimal estimate.
As the first application of Theorem 1.2, we obtain the reverse Shannon inequality.
Then it holds that The classical Shannon inequality (see, for instance, [14]) implies the lower bound of the Shannon entropy such that for any nonnegative function h on R n with R n h dx = 1 and R n |x| 2 h(x) dx < +∞. On the other hand, (1.14) gives the upper bound of the Shannon entropy. We remark that, as we will see in Subsection 3.1, we can ensure R n |x| 2 h(x) dx ≥ n in our settings, and thus it always holds that In addition, we can check that equality in (1.14) holds when h = γ n .
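The role of the Gaussian as the equality case can be checked by hand. The sketch below (our own illustration with n = 1; the helper names are ours, and we use the standard max-entropy formulation of the Shannon inequality) compares the differential entropy of a centered Gaussian with the bound (n/2) log(2πe s/n) for a density of second moment s, confirming equality for Gaussians.

```python
import math

def gaussian_entropy(sigma2):
    """Differential entropy of N(0, sigma2) on R: (1/2) log(2*pi*e*sigma2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma2)

def shannon_upper_bound(second_moment, n=1):
    """Max-entropy bound: among densities h on R^n with int |x|^2 h dx = s,
    the entropy is at most (n/2) log(2*pi*e*s/n), attained by the Gaussian."""
    return 0.5 * n * math.log(2 * math.pi * math.e * second_moment / n)

# Centered Gaussians attain the bound with equality.
for s2 in (0.5, 1.0, 4.0):
    print(abs(gaussian_entropy(s2) - shannon_upper_bound(s2)) < 1e-12)  # True
```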
As another application of Theorem 1.2, we can observe a logarithmic Sobolev type or Cramér-Rao type inequality in a special case, which will be investigated in Subsection 3.2. In general, we say that a probability measure µ satisfies the logarithmic Sobolev inequality with for any nonnegative locally Lipschitz function f on R n , where I µ (f ) is the Fisher information of f with respect to µ given by for some ρ > 0, then µ satisfies the logarithmic Sobolev inequality with ρ. However, when ∇ 2 ϕ ≥ 0 (which means that ϕ is convex), µ may not satisfy (1.15) for any ρ > 0. Indeed, if µ satisfies the logarithmic Sobolev inequality, then µ satisfies the normal concentration, or equivalently R n e ε|x| 2 dµ(x) < +∞ for some ε > 0. In particular, since ∇ 2 ϕ ≥ 0 is equivalent to the log-concavity of µ by [10], we observe that a log-concave probability measure may not satisfy (1.15) for any ρ > 0 in general. We refer the reader to [2] for details of the logarithmic Sobolev inequality. Nevertheless, we immediately obtain a relation between the relative entropy and the Fisher information from Theorem 1.2 by the Cauchy-Schwarz inequality.
Proposition 1.4. Let µ, Ω and f be as in Theorem 1.2. If f is a locally Lipschitz and quasi-convex function on Ω, then it holds We note that (1.16) is also close to the Cramér-Rao inequality. The classical Cramér-Rao inequality (or the Heisenberg-Pauli-Weyl inequality) implies that for any nonnegative locally Lipschitz function h with R n h dx = 1 and (1.17) Our result (1.16) does not imply (1.17) since the relative entropy can take the value 0, and in this sense (1.16) is different from the uncertainty principle. However, this difference is natural since we cannot take a constant function in (1.17), while we can in (1.16) thanks to the finite mass of µ. Nevertheless, the behavior of the relative entropy is closely related to the dimension. In fact, given probability measures µ 1 , µ 2 and nonnegative functions where This implies that the relative entropy can grow linearly in the dimension n, and in this sense the bound (1.16) is similar to (1.17). As we will see in Subsection 3.3, we will also discuss Kahane-Khintchine inequalities with positive and negative exponents for symmetric quasi-convex functions via Theorem 1.2 (and Theorem 2.4 and Proposition 2.3), and discuss deviation inequalities as their application. Similar inequalities for general functions have already been investigated in [30,5,15,32], where one needs to assume a Remez type inequality. On the other hand, we can obtain Kahane-Khintchine inequalities and deviation inequalities without the Remez type inequality. We state our results on deviation inequalities only in special cases.
(1) Let f be a positive C 1 symmetric quasi-convex function on Ω satisfying where C > 0 is an absolute constant.
(2) Suppose that Ω is bounded, and let f be a positive C 1 symmetric quasi-convex function on some neighborhood of Ω and set Suppose that 0 < β < +∞ with f −1/β ∈ L 1 (µ). Then for any small enough ε > 0, it holds that where med(f ) ∈ R is the Lévy mean of f , which means that As the final application, we can also obtain the following result on isoperimetry.
Corollary 1.6. Let µ = e −ϕ(x) dx be a probability measure supported on a symmetric convex domain Ω ⊂ R n . Suppose that ϕ is smooth on some neighborhood of Ω and µ satisfies the dilation inequality for K n s (Ω) with κ > 0. Then for any bounded K ∈ K n s (Ω) with smooth boundary and p ∈ (1, 2], we have where p ′ is the conjugate exponent of p, r(K) is the maximal constant c > 0 such that cB n 2 ⊂ K, η is the outer unit normal vector along ∂K and σ K is the surface measure on ∂K.
Corollary 1.6 reminds us of Cheeger's isoperimetric inequality for log-concave probability measures by Kannan-Lovász-Simonovits [19] and Bobkov [3,6], where the first or second moment appears in the isoperimetric constants. Kannan-Lovász-Simonovits also conjectured that the isoperimetric constant of every log-concave probability measure is controlled by the covariance matrix, which is called the KLS conjecture. We refer the reader to [11] for its history and related works and to [21] for the recent developments.
We remark that the dilation inequality (1.8) can give an estimate of the µ-perimeter directly.
Combining this inequality with (1.8), we conclude We can find a similar estimate in [3] for log-concave probability measures. However, (1.19) differs from (1.20) since (1.19) involves not only the geometric structure of K, but also the distribution µ. In particular, we can recover (1.20) from (1.19). In fact, by the definition of R(K), we have |x| ≤ R(K) for x ∈ ∂K, which implies that Hence (1.19) yields that and thus, letting p ↓ 1, we obtain (1.20).
In Section 4, we will show the equivalence between (1.8) and Theorem 2.4, which generalizes Theorem 1.2, and as its corollaries, we will give new classes of measures satisfying the dilation inequality (1.8). More precisely, we will discuss the stability under bounded perturbations and tensor products.
Corollary 1.7. Let µ be a probability measure supported on a symmetric convex domain Ω ⊂ R n with Ω |x| dµ(x) < +∞ and let h be a positive Borel function on Ω such that b −1 ≤ h ≤ b for some b > 1 and Ω h dµ = 1. Let ν be a probability measure on Ω given by dν = h dµ. If µ satisfies the dilation inequality for K n s (Ω) with κ > 0, then ν satisfies the dilation inequality for K n s (Ω) with the constant b −2 κ.
Corollary 1.8. Let µ 1 , µ 2 be probability measures supported on symmetric convex domains Ω 1 ⊂ R and Ω 2 ⊂ R n , respectively, with Ω 1 |x| dµ 1 , Ω 2 |x| dµ 2 < +∞. We suppose that µ 1 , µ 2 satisfy the dilation inequality for

The structure of the rest of this paper is as follows. In Section 2, we introduce the class of functions including sufficiently good symmetric quasi-convex functions and define a certain derivative as a counterpart of the gradient. After that, we show the functional form of the dilation inequality which leads to Theorem 1.2. In Section 3, we give some applications which follow from Theorems 1.2 and 2.4. More precisely, we show the reverse Shannon inequality, logarithmic Sobolev inequality, Kahane-Khintchine inequality, deviation inequality and an isoperimetric estimate. In the final section, we show the dilation inequality from the functional inequality constructed in Section 2, and confirm the equivalence between the dilation inequality and the functional form. As corollaries, we give stability results of the dilation inequality via bounded perturbations and tensorization.

Functional inequality derived from the dilation inequality
Our goal in this section is to give a functional form of the dilation inequality (1.8) and show Theorem 1.2 as its special case.
In what follows, let Ω ⊂ R n be a symmetric convex domain. We say that a function f : Ω → R is quasi-convex if the sublevel set {x ∈ Ω | f (x) < λ} is convex for any λ ∈ R. For instance, all convex functions are quasi-convex. Another example is | • | p on R n for p > 0, which is quasi-convex, but not convex when p ∈ (0, 1). This example also shows that quasi-convexity does not yield convexity. It is also known that a continuous function f on R is quasi-convex if and only if f is either monotone on R or there exists some point on one side of which f is nonincreasing and on the other side nondecreasing (see [1, Theorem 3.1]). In particular, when f is symmetric (which means that f (x) = f (−x) for any x ∈ Ω), then we have ⟨x, ∇f (x)⟩ ≥ 0 for all x ∈ Ω, since f (0) = min x∈Ω f (x) by quasi-convexity and symmetry of f . The reader is referred to [1] for more information on quasi-convexity.
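These properties can be probed numerically. The following sketch (our own illustration; `quasi_convex_on_samples` is a hypothetical helper implementing only a necessary sampled condition, not a proof) checks that |x| p with p = 1/2 passes the quasi-convexity test f(λx + (1−λ)y) ≤ max(f(x), f(y)) on random pairs, while failing midpoint convexity along a ray.

```python
import itertools
import random

def quasi_convex_on_samples(f, points, lambdas):
    """Check f(l*x + (1-l)*y) <= max(f(x), f(y)) on sampled pairs;
    a necessary condition for quasi-convexity (not a proof)."""
    for x, y in itertools.combinations(points, 2):
        for lam in lambdas:
            z = tuple(lam * a + (1 - lam) * b for a, b in zip(x, y))
            if f(z) > max(f(x), f(y)) + 1e-12:
                return False
    return True

p = 0.5
f = lambda x: (abs(x[0]) ** 2 + abs(x[1]) ** 2) ** (p / 2)   # |x|^p on R^2
random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(30)]
lams = [0.1, 0.25, 0.5, 0.75, 0.9]
print(quasi_convex_on_samples(f, pts, lams))   # True: |x|^p is quasi-convex

# |x|^p with p in (0,1) is NOT convex: midpoint convexity fails along a ray.
x, y = (2.0, 0.0), (0.0, 0.0)
mid = (1.0, 0.0)
print(f(mid) <= 0.5 * f(x) + 0.5 * f(y))       # False
```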

Given a symmetric quasi-convex function
Since f is a nonnegative and symmetric quasi-convex function, for any ε ∈ (0, 1) and x ∈ Ω, it holds that f (x) ≥ f ( 1−ε 1+ε x), and thus Φ f is always nonnegative. In particular, when f is continuously differentiable, we see that (2.2) We call it the gauge function of K. If K is a convex body, then the gauge function • K is exactly a norm whose closed unit ball is K. By the definition, we can immediately check that We remark that • K is not continuously differentiable, at least at the origin. An advantage of the definition (2.1) is that we need not assume any regularity of f , and thus we can consider a non-differentiable function on the whole space, such as a norm. We also note that, by the definition, Φ f is Borel measurable when Φ f is finite and f is continuous on Ω.
An important example belonging to QC(Ω, µ) is a norm. Indeed, we can easily check that the gauge function ) when µ has finite first moment, namely Ω |x| dµ(x) < +∞. More generally, we can ensure that QC(Ω, µ) includes sufficiently good locally Lipschitz and symmetric quasi-convex functions. Here a function f : Ω → R is locally Lipschitz if for any x ∈ Ω, there exists some r > 0 such that f is Lipschitz on B(x; r).

Proposition 2.1. Let µ be a probability measure supported on a bounded symmetric convex domain Ω ⊂ R n . Let f be a nonnegative, continuous and symmetric quasi-convex function on some neighborhood of Ω. If f is locally Lipschitz on Ω, then it holds that f ∈ QC(Ω, µ) and

Proof. Since f is locally Lipschitz, for any x ∈ Ω, there exist some ε(x) ∈ (0, 1), r(x) > 0 and Since the closure of Ω is compact, we can take finitely many points where we set ε := min k=1,2,...,N ε(x k ) > 0. In particular, we obtain for any ε ∈ (0, ε) and y ∈ Ω, which ensures (2.5). Hence we obtain f ∈ QC(Ω, µ).
In particular, by the definition, for any δ > 0 and x ∈ Ω, we can take some ε 0 ∈ (0, 1) depending on δ and x such that from which we see that Moreover, when f is a convex function instead of a quasi-convex function in Proposition 2.1, we can specify Φ f .

Proposition 2.2. Let µ be a probability measure supported on a symmetric convex domain We note that when f is convex, the subdifferential of f is always nonempty on Ω, and in particular ∂f (x) = {∇f (x)} if f is differentiable at x ∈ Ω (see [33]).
Remark. The first assertion in Proposition 2.2 implies that inf y∈∂f (x) ⟨x, y⟩ is Borel measurable since Φ f is Borel measurable.
Proof. We fix x ∈ Ω, and first show Φ f (x) = 2 inf y∈∂f (x) ⟨x, y⟩. By the definition of the subdifferential of f , we have for any x ∈ Ω, ε ∈ (0, 1) and y ∈ ∂f (x), which is equivalent to Hence we obtain ⟨x, y⟩. (2.7) where we set x k := 1−ε k 1+ε k x, and take y k ∈ ∂f (x k ) for each k ∈ N. If we can take a subsequence {y k ℓ } ℓ∈N of {y k } k∈N such that y k ℓ = 0 for all ℓ ∈ N, then for any z ∈ Ω and ℓ ∈ N, we have f (z) ≥ f (x k ℓ ). Letting ℓ → +∞, we obtain f (z) ≥ f (x) for any z ∈ Ω, which yields 0 ∈ ∂f (x). Hence we have inf y∈∂f (x) ⟨x, y⟩ = 0. On the other hand, we also see that f (x k ℓ ) = f (0) since f is symmetric and convex. Thus it follows that f (tx) = f (0) for any t ∈ [0, 1], which implies Φ f (x) = 0. Hence we have Φ f (x) = 0 = 2 inf y∈∂f (x) ⟨x, y⟩.
Therefore we may suppose that y k ≠ 0 for all k ∈ N. In addition, without loss of generality, we may suppose that {x k } k∈N ⊂ B(x; r/2) and B(x; r) ⊂ Ω for some r > 0, where B(x; r) := {w ∈ R n | |x − w| < r}. By the definition of the subdifferential, it holds that Here we remark that |x − z k | ≤ r/2 + |x k − x| < r, and thus z k ∈ Ω. Moreover, we have z k , x k ∈ B(x; r). Hence, since B(x; r) ⊂ Ω and f is continuous, we have Hence we can take a subsequence of {y k } k∈N converging to some y ∈ R n . Without loss of generality, we may suppose that lim k→+∞ y k = y. Letting k → +∞ in (2.8), we have which implies that y ∈ ∂f (x). Moreover, it follows from (2.8) that Letting k → +∞, we obtain Φ f (x) = 2⟨x, y⟩. This is the desired assertion. Finally, (2.7) and Ω inf y∈∂f (x) ⟨x, y⟩ dµ(x) < +∞ imply that f ∈ QC(Ω, µ).
To show our main result, we first give the following co-area formula associated with the dilation area, which appeared in [32] in a weaker form.

Proposition 2.3. Let µ be a probability measure supported on a symmetric convex domain Ω ⊂ R n and let p > 0. Then for any nonnegative function f with f p ∈ QC(Ω, µ), we have (2.9) Moreover, for any positive function f with f p ∈ QC(Ω, µ), we also have We remark that the map t ↦ µ({x ∈ Ω | f (x) < t}) is monotone in t and µ has finite mass.
Proof. Since f p ∈ QC(Ω, µ) and p > 0, f is a nonnegative, continuous and symmetric quasi-convex function, and thus {x ∈ R n | f (x) < λ} is a symmetric open convex set for any λ > 0, from which it holds for any λ > 0. Hence it follows from Fatou's lemma and Moreover, by (2.5) and f p ∈ QC(Ω, µ), we can justify where we used Fatou's lemma again. Hence we obtain Since we see that Φ f p = pf p−1 Φ f by the definition of Φ f and continuity of f , we can conclude (2.9). Next, we show (2.10). We remark that a := inf x∈Ω f (x) = f (0) > 0 since f > 0 on Ω and f is a symmetric quasi-convex function by p > 0. Moreover, {x ∈ R n | f (x) −1 > t} is a symmetric open convex set for any t > 0 since f is a continuous and symmetric quasi-convex function. As in the above argument, Fatou's lemma yields that Since we see that and since f p ∈ QC(Ω, µ), we can apply Fatou's lemma to see that Finally, using Φ f p = pf p−1 Φ f , we obtain (2.10).
Let QC p (Ω, µ) for p > 0 be the set of all functions f on Ω such that f p ∈ QC(Ω, µ) ∩ L 1 (µ). Our main theorem in this section is the following.

Theorem 2.4. Let µ be a probability measure supported on a symmetric convex domain Ω ⊂ R n . We assume that µ satisfies the dilation inequality for K n s (Ω) for some κ > 0. Then for any f ∈ QC 1 (Ω, µ), it holds In addition, when f ∈ C 1 (Ω), we obtain (1.12).
Our proof of this claim is almost the same as the proof of [32, Theorem 5.3]. For completeness, we give the proof of Theorem 2.4 here.
Proof. Since f ∈ QC(Ω, µ), we can apply (2.9) with p = 1 to the sublevel sets of f . It follows from (1.8), µ(Ω) = 1 and (2.9) that we have where we defined To see (2.11), without loss of generality, we may assume that Ω f dµ = 1. In fact, we know for any a > 0 and f ∈ QC 1 (Ω, µ) from which we can add the condition Ω f dµ = 1. Now, recall the dual formula of the relative entropy: for any continuous function h : R n → [0, ∞) with where C b (Ω) is the set of all bounded continuous functions on Ω (for instance, see [31,16] and their proofs). Hence, since Ent µ (µ(A) −1 1 A ) = − log µ(A) and Ω µ(A) −1 1 A dµ = 1 for any Borel subset A ⊂ Ω with µ(A) > 0, it holds that where we used Combining this with (2.12), we conclude the desired assertion.
We conclude this section by giving the proof of Theorem 1.2.

By the elementary inequality
for θ ∈ (0, 1) and x ∈ (0, 1), we can obtain which is the desired assertion. Thus we can apply Theorem 2.4 to see that Since f ℓ,m is a bounded continuous function, the lower semi-continuity of the relative entropy (which follows from (2.13)) and the monotone convergence theorem as k → +∞ imply that As we mentioned in Proposition 2.1, we have In particular, we can replace the right hand side above by 2|x||∇f (x)|, and in particular, when f is locally Lipschitz on Ω, then we have Thus, by lim ℓ,m→+∞ f ℓ,m = f and the lower semi-continuity of the relative entropy, we obtain (1.10) and (1.11). When f is C 1 , we have where we used ⟨x, ∇f (x)⟩ ≥ 0 for any x ∈ Ω since f is a symmetric quasi-convex function. Using this formula directly instead of 2|x||∇f (x)|, we can also obtain (1.12).
3 Some applications of Theorem 2.4

Comparisons of the relative entropy, Wasserstein distance and variance
As the first application of Theorem 2.4, we give comparisons of the relative entropy, Wasserstein distance and the variance in the case of the Gaussian measure. We denote the standard Gaussian measure on R n by dγ n = (2π) −n/2 e − 1 2 |x| 2 dx. To state our results, we introduce the L 2 -Wasserstein distance, which appears in optimal transport theory.
Let µ, ν be probability measures on R n with finite second moment. Then the L 2 -Wasserstein distance of µ and ν is given by , where Π(µ, ν) is the set of all couplings π between µ and ν, namely π is a probability measure on It is known that W 2 is a distance function on the set of all probability measures on R n with finite second moment. We refer the reader to [33,34] for optimal transport theory and its related topics.
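In one dimension the monotone (quantile) coupling is optimal, so W 2 between two equal-size empirical measures reduces to sorting. The sketch below (our own illustration, not part of the paper) uses this fact to verify that translating a measure by c moves it exactly c in W 2 .

```python
import math

def w2_empirical_1d(xs, ys):
    """L2-Wasserstein distance between two equal-size empirical measures on R.
    In 1-D the monotone (sorted) coupling is optimal."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs))

# Translating a measure by c shifts it by exactly c in W2.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x + 0.5 for x in xs]
print(w2_empirical_1d(xs, ys))   # 0.5
```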
Then we have where dν := f dγ n .
We remark that (3.2) in particular implies that every C 1 symmetric quasi-convex function f : R n → [0, ∞) as in Proposition 3.1 satisfies and equality holds if and only if f ≡ 1 on Ω. Moreover, when the deficit of (3.4) is small such that for small enough ε > 0, then (3.2) and (3.3) imply that f is close to the constant 1 in both senses of the relative entropy and the L 2 -Wasserstein distance.
We also note the trivial upper bound of W 2 (γ n , ν) such as Thus (3.3) strengthens this trivial bound.
Proof. We first note that the standard Gaussian measure satisfies (1.8) for K n s (R n ) with κ = 2 (see Appendix). Hence by Theorem 2.4 (and Theorem 1.2), we have Moreover, by (3.1), integrating by parts yields that Combining these facts, we obtain (3.2).
The second assertion (3.3) immediately follows from (3.2) and Talagrand's Gaussian transportation inequality (see [33,34]).

Proof of Corollary 1.3. We set f := h/γ n ; then we can check that f satisfies the assumptions in Proposition 3.1. Hence, applying Proposition 3.1 to f , we see that which is the desired assertion.

Cramér-Rao inequality and logarithmic Sobolev inequality
From Theorem 2.4 (and Theorem 1.2), we can obtain the following logarithmic Sobolev type inequality or Cramér-Rao type inequality, which includes Proposition 1.4.

Proposition 3.2. Let µ and Ω be as in Theorem 2.4 and let f ∈ L 1 (µ) be a nonnegative, locally Lipschitz and symmetric quasi-convex function. Then it holds that

Proof. Let f be a function satisfying our assumptions. Then by Theorem 1.2, we see that Then (3.5) follows by combining this with the Cauchy-Schwarz inequality, and (3.6) follows from (3.5) and the arithmetic-geometric mean inequality.
As we described in the introduction, (3.5) is close to the logarithmic Sobolev inequality, and exactly gives We emphasize that the constant in (3.5) is given only by κ and Ω |x| 2 f (x) dµ(x).
The logarithmic Sobolev inequality also appears in [32], where the Poincaré constant of µ is needed.
On the other hand, (3.6) is close to the defective logarithmic Sobolev inequality. Here we say that a probability measure µ on R n satisfies the defective logarithmic Sobolev inequality with constants ρ > 0 and τ ≥ 0 if for any nonnegative locally Lipschitz function f on Ω. We refer the reader to [2] for details of the defective logarithmic Sobolev inequality. In our case, when Ω is bounded, since Ω is symmetric, (3.6) implies that In particular, when Ω is an interval in R, we also obtain the logarithmic Sobolev inequality associated with the Poincaré constant of µ. Here we say that µ satisfies the Poincaré inequality with constant C µ > 0 if for any locally Lipschitz function f on Ω with Ω f dµ = 0.
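As a sanity check on the spectral-gap form of the Poincaré inequality, recall that the standard Gaussian γ 1 has spectral gap 1, so Var(f ) ≤ ∫ |f ′ | 2 dγ 1 (a classical fact, not a result of this paper). The computation below verifies this on two polynomial test functions, using only the exact Gaussian moments E[x 2 ] = 1 and E[x 4 ] = 3.

```python
# Spectral-gap form of the Poincare inequality for gamma_1 (gap = 1):
#   Var(f) <= E[f'(x)^2].
# f(x) = x: equality (first eigenfunction of the Ornstein-Uhlenbeck operator).
var_f, energy_f = 1.0, 1.0
print(var_f <= energy_f)        # True (with equality)

# g(x) = x^2 - 1: Var(g) = E[x^4] - 2 E[x^2] + 1 = 2,  E[g'^2] = 4 E[x^2] = 4.
var_g = 3.0 - 2.0 * 1.0 + 1.0
energy_g = 4.0 * 1.0
print(var_g <= energy_g)        # True, strict
```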
Corollary 3.3. Let µ be a symmetric probability measure on a bounded symmetric open interval I ⊂ R and f : I → R be a locally Lipschitz function. Suppose that µ satisfies the dilation inequality for K 1 s (I) with κ > 0 and the Poincaré inequality with C µ > 0. In addition, we suppose that f is an odd and monotone function with f ∈ L 2 (µ). Then it holds that In particular, when dµ = e −ϕ(x) dx is log-concave, then we have (3.9)

Proof. First we remark that f 2 is a nonnegative, locally Lipschitz and symmetric quasi-convex function. Indeed, since |f | 2 is decreasing on I ∩ (−∞, 0] and increasing on I ∩ [0, ∞) by the monotonicity of f , |f | 2 is quasi-convex. Moreover, |f | 2 is locally Lipschitz and symmetric since f is locally Lipschitz and odd. Applying Proposition 3.2, in particular (3.7), to f 2 , we obtain On the other hand, since f is odd and µ is symmetric, from which we have I f dµ = 0, we can apply the Poincaré inequality to f to see that Therefore we obtain which implies (3.8).
For the second assertion, we employ the result by Bobkov [3], where it is shown that every log-concave probability measure µ = e −ϕ dx on R satisfies Cheeger's isoperimetric inequality with the constant 2e −ϕ(m) , where m ∈ I is the median of µ. In our case, since µ is symmetric, we can take m = 0. Hence, Cheeger's inequality [13] (see also [11, Theorem 14.1.6]) implies that C µ ≥ e −2ϕ(0) . Then (3.9) follows from combining (3.8) with this bound on the Poincaré constant and κ = 2. The latter holds since every symmetric log-concave probability measure satisfies (1.8) with κ = 2 (see Appendix).

Kahane-Khintchine inequalities and deviation inequalities
In this subsection, we consider deviation inequalities as described in Corollary 1.5. To see this, we give the following moment estimate for positive exponents, which is a generalization of the comparison result of moments for log-concave probability measures, first discussed by Borell [9] (see also [29,18,28]).

Proposition 3.4. Let µ and Ω be as in Theorem 2.4 and p for any 1 ≤ p ≤ q < p 0 , where In particular, if f is in C 1 (Ω) and satisfies then we have for any 1 ≤ p ≤ q.

Proof. Set then we see that Since f ∈ QC t (Ω, µ) for any 1 ≤ t < p 0 , it follows from Theorem 2.4 that Here we used the fact that Φ f t (x) = 0 for any t > 0 if f (x) = 0 at x ∈ Ω, which follows from f t ∈ QC(Ω, µ). Hence, integrating the above inequality from p to q with 1 ≤ p ≤ q < p 0 yields the desired assertion (3.10).
(3.12) also follows by the same proof above and by (2.2).
For instance, when µ is log-concave, the gauge function 11), and we can check ( Hence, (3.10) yields for any 1 ≤ p ≤ q since all log-concave probability measures satisfy the dilation inequality with κ = 1. In particular, when µ is symmetric on R, since we can take κ = 2 as we see in the Appendix, we also obtain for any 1 ≤ p ≤ q. It is known that the order of q/p above is optimal (for instance, see [18]). On the other hand, it is known that all log-concave probability measures on R n satisfy for any 1 ≤ p ≤ q, where C > 0 is an absolute constant (see [29,11,18]). We remark that a similar inequality for general functions has already appeared in [32] (see also [5,15]). More precisely, in [32], the Remez function is needed to construct a moment comparison like (3.10). For s ≥ 1, we define u f (s) ≥ 1 as the best constant C ≥ 1 such that We call this function the Remez function of f , and set Then it follows from [32, Corollary 5.7] that for any nonnegative integrable function f with u ′ f (1) < +∞ and for any 1 ≤ p ≤ q. We remark that this holds when f is a continuous and symmetric quasi-convex function. Indeed, by the definition of u f , we have As a corollary of Proposition 3.4, we give a tail estimate of a measure. To see this, we introduce the Orlicz norm • ψα for α ≥ 1. Given any α ≥ 1 and a Borel function f : Ω → R, we set It is known that the Orlicz norm • ψα is also given by L p -norms for p ≥ α (see [11, Lemma 2.4.2]).
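The q/p order in the moment comparison above can be illustrated with the standard Gaussian on R (a symmetric log-concave measure) and f (x) = |x|, whose moments are explicit: E|x| q = 2 q/2 Γ((q + 1)/2)/√π. The sketch below (our own numerical illustration, not from the paper) checks that ∥f ∥ q /∥f ∥ p stays below q/p; in fact the Gaussian norms grow only like √q.

```python
import math

def abs_moment_gaussian(q):
    """(E|x|^q)^(1/q) for x ~ N(0,1): E|x|^q = 2^(q/2) Gamma((q+1)/2) / sqrt(pi)."""
    m = 2 ** (q / 2) * math.gamma((q + 1) / 2) / math.sqrt(math.pi)
    return m ** (1 / q)

# The L^q norm of |x| grows like sqrt(q), so the ratio ||f||_q / ||f||_p
# stays well below q/p for 1 <= p <= q (the linear order is the general bound).
for p, q in [(1, 2), (2, 8), (1, 20)]:
    ratio = abs_moment_gaussian(q) / abs_moment_gaussian(p)
    print(p, q, ratio <= q / p)   # True in each case
```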
Lemma 3.5. Let α ≥ 1 and f : Ω → R be a Borel function. Then Here A ≃ B means that there exist some absolute constants c, C > 0 such that cB ≤ A ≤ CB.
By Proposition 3.4, we obtain the following estimate of some Orlicz norm and the deviation inequality.
Corollary 3.6. Let µ and Ω be as in Theorem 2.4 and let f be a nonnegative function on Ω satisfying (3.11). We set .
If 1 ≤ α < +∞, then it holds that In addition, we have where C > 0 is an absolute constant.
On the other hand, it holds that Since direct calculations yield (1 − t 0 p) −1/p+t 0 ≤ e t 0 for t 0 > 0 and any 0 < p < t 0 −1 , we can obtain the desired assertion.
Guédon [17] (see also [11, Theorem 2.4.9]) showed that every log-concave probability measure and norm x q dµ 1/q , for any −1 < q < 0, where C > 0 is an absolute constant. Hence Proposition 3.7 is a generalization of Guédon's result in some sense. An extension of Guédon's result for general functions is also discussed in [8,15]. We also remark that Guédon's result follows from the small ball estimate, which was shown by Latała [22]. Similarly, we can show the deviation inequality around the origin.
Corollary 3.8. Let µ and Ω be as in Theorem 2.4. Let f be a positive, continuous and symmetric quasi-convex function with Suppose that f also satisfies f p ∈ QC(Ω, µ) and f −p ∈ L 1 (µ) for any 0 < p < 1/β. Then for any small enough ε > 0, it holds that This implies the desired assertion by taking p = 1/β − ε.

µ-perimeter
Our goal in this subsection is to give the estimate of the µ-perimeter of K ∈ K n s (Ω) described in Corollary 1.6.
Proof of Corollary 1.6. Without loss of generality, we may suppose that µ(K) = µ(K̄). We set for ε > 0, , where we used Hölder's inequality. Furthermore, since we have by r(K)B n 2 ⊂ K, Since K is bounded and ϕ is smooth, Fatou's lemma yields that lim inf Moreover, since we have the divergence theorem implies that lim inf Therefore, letting ε ↓ 0 in (3.18), we obtain Since lim ε↓0 f ε = 1 R n \K and µ(K) = µ(K̄), the lower semi-continuity of the relative entropy yields that x, η(x) |x| p ′ e −ϕ(x) dσ K (x) which implies the desired assertion.
4 Revisiting the dilation inequality

Reconstruction
In Section 2, we investigated the functional form of the dilation inequality. In this section, conversely, we will confirm that the dilation inequality can be recovered from the functional inequality (2.11).
For K ∈ K n s (R n ), we define a function N K : R n → [0, ∞) by Then we can easily check that N K is a continuous, symmetric and quasi-convex function on R n .
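For an axis-aligned box the gauge N K (x) = inf{t > 0 : x/t ∈ K} has a closed form, which makes its defining properties easy to check. A minimal sketch (the box K and the helper name are our own illustrative choices, not from the paper):

```python
def gauge_box(x, half_widths):
    """Gauge N_K(x) = inf{t > 0 : x/t in K} for the open box
    K = {y : |y_i| < a_i}; here it equals max_i |x_i| / a_i."""
    return max(abs(xi) / ai for xi, ai in zip(x, half_widths))

K = (1.0, 2.0)                      # box (-1,1) x (-2,2)
print(gauge_box((0.5, 1.0), K))     # 0.5: the point lies in (1/2)K
print(gauge_box((2.0, 0.0), K))     # 2.0: the point needs K scaled by 2

# Positive homogeneity of the gauge: N_K(c x) = c N_K(x) for c > 0.
print(gauge_box((1.0, 2.0), K) == 2 * gauge_box((0.5, 1.0), K))   # True
```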
Then µ satisfies the dilation inequality (1.8) for K n s (Ω) with the constant κ.
To see this, we shall justify (2.5) for f σ . For any ε ∈ (0, 1) and x ∈ R n , it holds where we used 1+ε 1−ε K = K ε . Hence for any ε ∈ (0, σ), we have Next, for x ∈ K σ , we have In particular, for any x ∈ R n , we have Therefore, summarizing our arguments above, we conclude that for any x ∈ R n and ε ∈ (0, σ), which ensures (2.5) for f σ since we can take some constant C > 0 such that • K ≤ C| • | and since we have Ω |x| dµ < +∞. Hence we have checked f σ ∈ QC 1 (Ω, µ). Moreover, we have Hence we can apply (2.11) to f σ to see In the right hand side, since it holds that for any small enough τ > 0, where we used K σ ⊂ K (1+τ )σ and µ(K) = µ(K̄) in the last inequality. Hence we have lim inf for any small enough τ > 0, and thus lim inf On the other hand, since it holds that f σ → 1 R n \K as σ ↓ 0, it follows from the lower semi-continuity of Ent µ that lim inf by µ(K) = µ(K̄). Eventually, we obtain which is the desired assertion.

Applications
As a corollary of Theorem 4.1, we obtain the stability of the dilation inequality under bounded perturbations, which is described in Corollary 1.7. To show this corollary, we employ the following lemma.
Here we set φ(r) := r log r for r > 0.
As another consequence of Theorem 4.1, we can observe the tensorization property in the special case, which is described in Corollary 1.8.We remark that Ω 1 × Ω 2 is also a symmetric convex domain in R n+1 .
Remark. It is natural to expect the tensorization property in higher dimensions. More precisely, if µ 1 and µ 2 are probability measures on R n 1 and R n 2 satisfying dilation inequalities for K n 1 s (R n 1 ) and K n 2 s (R n 2 ), respectively, then does µ 1 ⊗ µ 2 also satisfy the dilation inequality for K n 1 +n 2 s (R n 1 +n 2 )? Corollary 1.8 gives an affirmative partial answer when either n 1 or n 2 is 1, but the question is open when n 1 , n 2 ≥ 2. In our argument, this difficulty comes from quasi-convexity. In fact, let f 1 and f 2 be nonnegative symmetric quasi-convex functions on R n . Then in our proof of Corollary 1.8, we used the fact that f 1 + f 2 is also a symmetric quasi-convex function when n = 1. However, when n ≥ 2, this fails in general. For instance, consider the functions f 1 (x 1 , x 2 ) = |x 1 | 2/3 and f 2 (x 1 , x 2 ) = |x 2 | 2/3 for (x 1 , x 2 ) ∈ R 2 . Then we can check that both functions are symmetric and quasi-convex, but f 1 + f 2 is not quasi-convex on R 2 (the curve {(x 1 , x 2 ) ∈ R 2 | f 1 (x 1 , x 2 ) + f 2 (x 1 , x 2 ) = 1} is the astroid).
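The astroid counterexample can be verified numerically. The snippet below (illustrative only) evaluates f 1 + f 2 at two points of the level set {f 1 + f 2 = 1} and at their midpoint, where the value exceeds 1; hence the sublevel set {f 1 + f 2 ≤ 1} is not convex and the sum is not quasi-convex.

```python
# f1(x) = |x1|^(2/3), f2(x) = |x2|^(2/3): each is quasi-convex on R^2,
# but their sum is not: the level set {f1 + f2 = 1} is the astroid.
f = lambda x1, x2: abs(x1) ** (2 / 3) + abs(x2) ** (2 / 3)

x, y = (1.0, 0.0), (0.0, 1.0)         # both on the level set {f = 1}
mid = (0.5, 0.5)                      # their midpoint
print(f(*x), f(*y))                   # 1.0 1.0
print(f(*mid))                        # 2 * 0.5^(2/3) ~ 1.26, exceeds 1
print(f(*mid) <= max(f(*x), f(*y)))   # False: quasi-convexity fails
```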
In our proof of Corollary 1.8, we also showed the tensorization property of the functional dilation inequality (2.11). If we focus on this tensorization, we can improve the functional version of Corollary 1.8 in a special case.

A Appendix
Here we will investigate the dilation inequality for symmetric log-concave probability measures on R and the standard Gaussian measure γ n on R n .

Proposition A.1. Every symmetric log-concave probability measure on R satisfies the dilation inequality for K 1 s (R) with κ = 2.
Proposition A.2.The standard Gaussian measure γ n satisfies the dilation inequality for K n s (R n ) with κ = 2.
We also remark that κ = 2 is optimal in Proposition A.