Bi-log-concavity: some properties and some remarks towards a multi-dimensional extension

Bi-log-concavity of probability measures is a univariate extension of the notion of log-concavity that has recently been proposed in the statistical literature. Among other things, it has, from a modeling perspective, the nice property of admitting some multimodal distributions while preserving some nice features of log-concave measures. We compute the isoperimetric constant of a bi-log-concave measure, extending a property available for log-concave measures. This implies that bi-log-concave measures have exponentially decreasing tails. Then we show that the convolution of a bi-log-concave measure with a log-concave one is bi-log-concave. Consequently, infinitely differentiable, positive densities are dense in the set of bi-log-concave densities for $L_p$-norms, $p \in [1, +\infty]$. We also derive a necessary and sufficient condition for the convolution of two bi-log-concave measures to be bi-log-concave. We conclude this note by discussing ways of defining a multi-dimensional extension of the notion of bi-log-concavity. We propose an approach based on a variant of the isoperimetric problem, restricted to half-spaces.


Introduction
Bi-log-concavity (of a probability measure on the real line) is a property recently introduced by Dümbgen, Kolesnyk and Wilke ([DKW17]) that aims at bypassing some restrictive aspects of log-concavity while preserving some of its nice features. More precisely, bi-log-concavity amounts to log-concavity of both F and 1 − F, and a simple application of Prékopa's theorem on the stability of log-concavity under marginalization ([Pré73]; see also [SW14] for a discussion of the various proofs of this fundamental theorem) shows that log-concave measures are also bi-log-concave (see [BB05] for a more direct, elementary proof of this latter fact).
From a modeling perspective, bi-log-concavity and log-concavity may be seen as shape constraints. In statistics, when they are available, shape constraints represent an interesting alternative to more classical parametric, semiparametric or non-parametric approaches and constitute an active contemporary line of research ([Wal09, Sam18]). Bi-log-concavity was indeed proposed with the aim of contributing to this research area ([DKW17]). It was used in [DKW17] to construct efficient confidence bands for the cumulative distribution function and some functionals of it. The authors highlight that the set of bi-log-concave measures contains multimodal distributions, while it is well known that log-concave measures are unimodal. Furthermore, Dümbgen et al. [DKW17] establish the following characterization of bi-log-concave distributions. For a distribution function F, denote J(F) ≡ {x ∈ R : 0 < F(x) < 1} and call "non-degenerate" the distribution functions F such that J(F) ≠ ∅.
Theorem 1 (Characterization of bi-log-concavity, [DKW17]) Let F be a non-degenerate distribution function. The following four statements are equivalent:

(i) F is bi-log-concave, i.e. F and 1 − F are log-concave functions, in the sense that their logarithms are concave.

(ii) F is continuous on R and differentiable on J(F) with derivative f = F′ such that, for all x ∈ J(F) and t ∈ R,

1 − (1 − F(x)) exp(−t f(x)/(1 − F(x))) ≤ F(x + t) ≤ F(x) exp(t f(x)/F(x)).

(iii) F is continuous on R and differentiable on J(F) with derivative f = F′ such that the hazard function f/(1 − F) is non-decreasing and the reverse hazard function f/F is non-increasing on J(F).

(iv) F is continuous on R and differentiable on J(F) with bounded and strictly positive derivative f = F′.

Note that if one includes degenerate measures (that is, Dirac masses), it is easily seen that the set of bi-log-concave measures is closed under weak limits.
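As an illustration of criterion (iii) (our own numerical sketch, not part of the paper), one can check the monotonicity of the hazard and reverse hazard rates for the Gaussian mixture 2^{−1}N(−1.2, 1) + 2^{−1}N(1.2, 1), which is bimodal; according to the discussion in [DKW17, Section 2], such symmetric two-component mixtures remain bi-log-concave for means up to roughly ±1.34.

```python
# Numerical check (ours) of Theorem 1 (iii) for the bimodal mixture
# 0.5*N(-1.2, 1) + 0.5*N(1.2, 1): the hazard rate f/(1-F) should be
# non-decreasing and the reverse hazard rate f/F non-increasing.
import math

def phi(x, m):  # N(m, 1) density
    return math.exp(-(x - m) ** 2 / 2) / math.sqrt(2 * math.pi)

def Phi(x, m):  # N(m, 1) cdf
    return 0.5 * (1 + math.erf((x - m) / math.sqrt(2)))

f = lambda x: 0.5 * phi(x, -1.2) + 0.5 * phi(x, 1.2)
F = lambda x: 0.5 * Phi(x, -1.2) + 0.5 * Phi(x, 1.2)

xs = [i / 50.0 for i in range(-250, 251)]   # grid on [-5, 5]
haz = [f(x) / (1 - F(x)) for x in xs]
rev = [f(x) / F(x) for x in xs]
print(all(b >= a - 1e-8 for a, b in zip(haz, haz[1:])))
print(all(b <= a + 1e-8 for a, b in zip(rev, rev[1:])))
```

Such a grid check is of course only a sanity test, not a proof; the grid and the mean ±1.2 are our choices.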
Just as s-concave measures generalize log-concave ones, Laha and Wellner [LW17] proposed the concept of bi-s∗-concavity, which generalizes bi-log-concavity and includes s-concave densities. Some characterizations of bi-s∗-concavity, extending the previous theorem, are derived in [LW17].
On the probabilistic side, even if some characterizations are available, many important questions remain about the properties of bi-log-concave measures. Indeed, log-concave measures satisfy many nice properties (see for instance [Gué12, SW14, Col17] and references therein) and it is natural to ask whether some of these extend to bi-log-concave measures. Answering this question is the primary object of this note.
We show in Section 2 that the isoperimetric constant of a bi-log-concave measure is simply equal to twice the value of its density with respect to the Lebesgue measure (which indeed exists) at its median, thus extending a property available for log-concave measures. We deduce that a bi-log-concave measure has exponential tails, also extending a property valid in the log-concave case.
In Section 3, we show that the convolution of a log-concave measure and a bi-log-concave measure is bi-log-concave. As a consequence, we get that any bi-log-concave measure can be approximated by a sequence of bi-log-concave measures having regular densities. Furthermore, we give a necessary and sufficient condition for the convolution of two bi-log-concave measures to be bi-log-concave.
Finally, we discuss in Section 3.1 possible ways to obtain a multivariate notion of bi-log-concavity. This problem is not a priori obvious, because the definition of bi-log-concavity in one dimension relies on the cumulative distribution function and thus on the total order existing on the real numbers. To this end, we derive a characterization of (symmetric) bi-log-concave measures on R through their isoperimetric profile. Then we propose a multidimensional generalization for symmetric measures by considering their isoperimetric profile restricted to half-spaces. We conclude by discussing a way to strengthen the latter definition in order to ensure stability under convolution with any log-concave measure. The question of providing a nice definition of bi-log-concavity in higher dimension, one that would also impose the existence of some exponential moments, remains open.

Isoperimetry and concentration for bi-log-concave measures
Let F(x) = µ((−∞, x]) be the distribution function of a probability measure µ on the real line. Assume that µ is non-degenerate (in the sense that its distribution function is non-degenerate) and let f be the density of its absolutely continuous part.
Recall the following formula for the isoperimetric constant Is(µ) of µ, due to Bobkov and Houdré [BH97]:

Is(µ) = ess inf_{x ∈ J(F)} f(x)/min{F(x), 1 − F(x)}.

The following theorem extends a well-known fact about the isoperimetric constant of log-concave measures to the case of bi-log-concave measures.
Theorem 2 Let µ be a probability measure with non-degenerate, bi-log-concave distribution function F. Then µ admits a density f = F′ on J(F) and it holds that

Is(µ) = 2 f(m),

where m is the median of µ.
In general, the isoperimetric constant is hard to compute, but in the bi-log-concave case Theorem 2 provides a straightforward formula, extending a formula valid for log-concave measures (see for instance [SW14]).
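As a quick numerical illustration (ours, not from the paper), consider the standard logistic distribution, which is log-concave and hence bi-log-concave. Its density satisfies f = F(1 − F), so the Bobkov–Houdré ratio f(x)/min{F(x), 1 − F(x)} equals max{F(x), 1 − F(x)}, whose infimum 1/2 = 2f(0) is attained at the median m = 0, in agreement with Theorem 2.

```python
# Sanity check (ours): for the standard logistic distribution,
# the infimum of the Bobkov-Houdre ratio f/min(F, 1-F) over a grid
# should equal 2*f(m) = 1/2 at the median m = 0.
import math

def F(x):  # logistic cdf
    return 1.0 / (1.0 + math.exp(-x))

def f(x):  # logistic density, f = F(1-F)
    return F(x) * (1.0 - F(x))

grid = [i / 100.0 for i in range(-1000, 1001)]
ratios = [f(x) / min(F(x), 1.0 - F(x)) for x in grid]
iso_constant = min(ratios)
print(iso_constant, 2 * f(0.0))   # both equal 0.5
```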
In the following, we will also use the notation J(F) = (a, b).

Proof. Note that the median m is indeed unique by Theorem 1 above. Set I_F(x) = f(x)/min{F(x), 1 − F(x)} for x ∈ (a, b). For x ∈ (a, m], I_F(x) = f(x)/F(x), which is non-increasing by point (iii) of Theorem 1. Likewise, for x ∈ [m, b), I_F(x) = f(x)/(1 − F(x)), so I_F is non-decreasing on [m, b). Consequently, the infimum of I_F is attained at m and its value is Is(µ) = I_F(m) = f(m)/min{F(m), 1 − F(m)} = 2f(m).

Corollary 3 Let µ as above be a bi-log-concave measure with median m. Then f(m) > 0 and µ satisfies the following Poincaré inequality: for any square integrable, locally Lipschitz function g,

Var_µ(g) ≤ f(m)^{−2} ∫ (g′)² dµ,   (1)

where Var_µ(g) = ∫ g² dµ − (∫ g dµ)² is the variance of g with respect to µ. Consequently, µ has bounded Ψ₁ Orlicz norm and achieves the following exponential concentration inequality:

α_µ(r) ≤ exp(−f(m) r/3), r > 0,   (2)

where α_µ is the concentration function of µ, defined by α_µ(r) = sup{µ({x : d(x, A) ≥ r}) : A Borel, µ(A) ≥ 1/2}, r > 0. As is well known (see [Led01] for instance), inequality (2) implies that for any 1-Lipschitz function g with median m_g, µ(|g − m_g| ≥ r) ≤ 2 α_µ(r).

Proof. The fact that f(m) > 0 is given by point (iii) of Theorem 1 above. Then Inequality (1) is a consequence of Theorem 2 via Cheeger's inequality for the first eigenvalue of the Laplacian (see for instance Inequality 3.1 in [Led01]), which gives λ₁ ≥ Is(µ)²/4 = f(m)². Inequality (2) is a classical consequence of Inequality (1) as well (see Theorem 3.1 in [Led01]).
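To make the Poincaré inequality concrete (our own illustration, with our choice of test function), take µ the standard logistic distribution, so m = 0 and f(m) = 1/4, and take g(x) = x. Then Var_µ(g) = π²/3 ≈ 3.29, while the right-hand side of the inequality is f(m)^{−2} = 16, so the bound holds with room to spare.

```python
# Numerical illustration (ours) of the Poincare inequality for the
# standard logistic measure with g(x) = x: Var(g) = pi^2/3 <= 16.
import math

def F(x):  # logistic cdf
    return 1.0 / (1.0 + math.exp(-x))

def f(x):  # logistic density
    return F(x) * (1.0 - F(x))

h = 0.001
xs = [i * h for i in range(-20000, 20001)]        # grid on [-20, 20]
mass = sum(f(x) for x in xs) * h                  # ~ 1
var = sum(x * x * f(x) for x in xs) * h           # ~ pi^2/3 (the mean is 0)
bound = (1.0 / f(0.0)) ** 2 * mass                # 16 * mass, since g'(x) = 1
print(var, bound)   # ~ 3.29 <= ~ 16
```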
Note that, following Bobkov [Bob96], for a log-concave probability measure µ on R having a positive density f on J(F), the function p ↦ f(F^{−1}(p)) is concave on (0, 1); this is in fact a characterization of log-concavity among measures on R having a density.

Stability through convolution
Take X and Y two independent random variables with respective distribution functions F_X and F_Y that are bi-log-concave. Hence X and Y have densities, denoted by f_X and f_Y. Then

F_{X+Y}(x) = ∫_R F_X(x − y) f_Y(y) dy   (3)

and

1 − F_{X+Y}(x) = ∫_R (1 − F_X(x − y)) f_Y(y) dy.   (4)

Proposition 5 If X is bi-log-concave, Y is log-concave and X is independent from Y, then X + Y is bi-log-concave.
Proof. By using formulas (3) and (4), this is a direct application of Prékopa's theorem ([Pré73]) on the marginal of a log-concave function.
Corollary 6 Take a (non-degenerate) bi-log-concave measure on R, with density f. Then there exists a sequence of infinitely differentiable bi-log-concave densities, positive on R, that converges to f in L_p(Leb), for any p ∈ [1, +∞].
Corollary 6 is also an extension of an approximation result available in the set of log-concave distributions; see [SW14, Section 5.2].

Proof. It suffices to consider the convolution of f with a sequence of centered Gaussian densities with variances converging to zero. As f is bounded and has exponentially decreasing tails, it belongs to any L_p(Leb), p ∈ [1, +∞]. Then a simple application of classical theorems about convolution in L_p (see for instance [Rud87, p. 148]) allows one to check that the approximations converge to f in any L_p(Leb), p ∈ [1, +∞].

More generally, the following theorem gives a necessary and sufficient condition for the convolution of two bi-log-concave measures to be bi-log-concave.
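The proof of Corollary 6 can be sketched numerically (our own illustration, with our choice of density). For the bi-log-concave mixture f = 0.5N(−1.2, 1) + 0.5N(1.2, 1), the convolution with N(0, s²) is available in closed form, namely 0.5N(−1.2, 1 + s²) + 0.5N(1.2, 1 + s²), and its L¹ distance to f shrinks as s → 0.

```python
# Sketch (ours) of the Gaussian-smoothing approximation in Corollary 6:
# the L1 error of the smoothed density decreases as the bandwidth s -> 0.
import math

def mix(x, var):  # 0.5*N(-1.2, var) + 0.5*N(1.2, var)
    c = 1.0 / math.sqrt(2 * math.pi * var)
    return 0.5 * c * (math.exp(-(x + 1.2) ** 2 / (2 * var))
                      + math.exp(-(x - 1.2) ** 2 / (2 * var)))

h = 0.01
xs = [i * h for i in range(-1500, 1501)]   # grid on [-15, 15]

def l1_err(s):  # ||f * N(0, s^2) - f||_1, using the closed-form convolution
    return sum(abs(mix(x, 1 + s * s) - mix(x, 1.0)) for x in xs) * h

errs = [l1_err(s) for s in (0.5, 0.25, 0.1)]
print(errs[0] > errs[1] > errs[2] > 0)   # True: the error decreases with s
```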
Theorem 7 Take X and Y two independent bi-log-concave random variables with respective densities f_X and f_Y and cumulative distribution functions F_X and F_Y. Denote w(x, y) = f_Y(y) F_X(x − y) and w̄(x, y) = f_Y(y)(1 − F_X(x − y)), and consider, for any x ∈ J(F_{X+Y}), the probability measures on R with densities proportional to w(x, ·) and w̄(x, ·) respectively. Then X + Y is bi-log-concave if and only if conditions (5) and (6) hold for any x ∈ J(F_{X+Y}).

Of course, a simple symmetrization argument shows that conditions (5) and (6) are satisfied if (−log f_Y)″ ≥ 0 pointwise, which means that f_Y is log-concave, in which case we recover Proposition 5 above. But Theorem 7 is more general. Indeed, it is easily checked by direct computations that the convolution of the Gaussian mixture 2^{−1}N(−1.34, 1) + 2^{−1}N(1.34, 1) (which is bi-log-concave but not log-concave, see [DKW17, Section 2]) with itself is bi-log-concave.
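The direct computation mentioned above can be reproduced numerically (our own sketch). The self-convolution of 2^{−1}N(−1.34, 1) + 2^{−1}N(1.34, 1) is the three-component mixture 0.25N(−2.68, 2) + 0.5N(0, 2) + 0.25N(2.68, 2), and criterion (iii) of Theorem 1 can be checked on a grid.

```python
# Numerical check (ours): the self-convolution of the Gaussian mixture
# 0.5*N(-1.34,1) + 0.5*N(1.34,1), i.e. the mixture
# 0.25*N(-2.68,2) + 0.5*N(0,2) + 0.25*N(2.68,2),
# satisfies criterion (iii) of Theorem 1 on a grid.
import math

def phi(x, m, v):  # N(m, v) density
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def Phi(x, m, v):  # N(m, v) cdf
    return 0.5 * (1 + math.erf((x - m) / math.sqrt(2 * v)))

comps = [(0.25, -2.68, 2.0), (0.5, 0.0, 2.0), (0.25, 2.68, 2.0)]
f = lambda x: sum(w * phi(x, m, v) for w, m, v in comps)
F = lambda x: sum(w * Phi(x, m, v) for w, m, v in comps)

xs = [i / 50.0 for i in range(-350, 351)]   # grid on [-7, 7]
haz = [f(x) / (1 - F(x)) for x in xs]       # hazard rate, should increase
rev = [f(x) / F(x) for x in xs]             # reverse hazard, should decrease
print(all(b >= a - 1e-8 for a, b in zip(haz, haz[1:])))
print(all(b <= a + 1e-8 for a, b in zip(rev, rev[1:])))
```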
To prove Theorem 7, we will use the following lemma.
Lemma 8 Take p, q ∈ [1, +∞] such that p^{−1} + q^{−1} = 1 and a measure ν on R with density f = exp(−φ) absolutely continuous and f′ ∈ L_p(ν). Take g ∈ L_q(ν) Lipschitz continuous such that g′ ∈ L_1(ν). Then ∫ g′ dν = ∫ g φ′ dν.

Proof of Lemma 8. This is a simple integration by parts: from the assumptions, gf vanishes at ±∞, so that ∫ g′ f dLeb = −∫ g f′ dLeb = ∫ g φ′ f dLeb.

Proof of Theorem 7. Recall that we have F_{X+Y}(x) = ∫_R w(x, y) dy. Our first goal is to find some conditions ensuring that F_{X+Y} is log-concave. It is sufficient to prove that (log F_{X+Y})″(x) ≤ 0 for any x ∈ J(F_{X+Y}). Denote ρ_X = (log F_X)′. Differentiating twice under the integral sign and applying Lemma 8 to the measure with density proportional to w(x, ·), then gathering the resulting equations, gives condition (5). Likewise, condition (6) arises from the same type of computations when studying the log-concavity of 1 − F_{X+Y}.
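The integration by parts behind Lemma 8 is easy to test numerically (our own illustration): for the standard Gaussian measure ν, where φ′(x) = x, it reduces to Stein's identity E[g′(X)] = E[X g(X)]. With g(x) = x³, both sides equal E[3X²] = E[X⁴] = 3.

```python
# Quadrature check (ours) of the integration-by-parts identity for the
# standard Gaussian: E[g'(X)] = E[X g(X)] with g(x) = x^3, both sides = 3.
import math

def gauss(x):  # standard normal density; here phi(x) = x^2/2 + c, phi'(x) = x
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

h = 0.001
xs = [i * h for i in range(-8000, 8001)]          # grid on [-8, 8]
lhs = sum(3 * x * x * gauss(x) for x in xs) * h   # E[g'(X)] = E[3 X^2]
rhs = sum(x ** 4 * gauss(x) for x in xs) * h      # E[X g(X)] = E[X^4]
print(lhs, rhs)   # both close to 3
```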

Towards a multivariate notion of bi-log-concavity
Let us introduce this section with the following remark. The isoperimetric profile I_µ is defined as follows: for any p ∈ (0, 1),

I_µ(p) = inf{µ⁺(A) : A Borel, µ(A) = p},

where µ⁺(A) = liminf_{h→0⁺} (µ(A^h) − µ(A))/h is the boundary measure of A and A^h = {x : δ(x, A) < h} is its open h-neighborhood. Note that the isoperimetric profile I_µ depends on the distance δ that is considered. Unless explicitly mentioned, we will consider in the following that the distance δ is the Euclidean distance. From inequality (2.1) in [Bob96], for a log-concave measure µ on R, half-lines solve the isoperimetric problem, so that for any p ∈ (0, 1),

I_µ(p) = min{f(F^{−1}(p)), f(F^{−1}(1 − p))}.

If µ is moreover symmetric, then I_µ(p) = f(F^{−1}(p)) for any p ∈ (0, 1).
For a general measure µ, we define the isoperimetric profile restricted to half-spaces I^H_µ: for any p ∈ (0, 1),

I^H_µ(p) = inf{µ⁺(H) : H half-space, µ(H) = p}.

For a measure µ on R having a density f, one has I^H_µ(p) = min{f(F^{−1}(p)), f(F^{−1}(1 − p))}. Furthermore, as previously remarked, I_µ ≡ I^H_µ in the one-dimensional log-concave case. The latter identity is still true in higher dimension for the Gaussian measure when the distance is given by the Euclidean norm (see [Bor75]), and this characterizes Gaussian measures. In general, it also holds that I_µ ≤ I^H_µ pointwise.
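As a concrete instance (our own numerical sketch), for the standard Gaussian measure on R, which is symmetric and log-concave, the profile is I(p) = φ(Φ^{−1}(p)); one can verify on a grid that I is concave and that p ↦ I(p)/p is non-increasing, the two properties used throughout this section.

```python
# Numerical illustration (ours): for the standard Gaussian on R, the
# isoperimetric profile I(p) = phi(Phi^{-1}(p)) is concave and I(p)/p
# is non-increasing on (0, 1).
import math

def Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi(x):  # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi_inv(p):  # simple bisection, enough for an illustration
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ps = [i / 1000.0 for i in range(1, 1000)]
I = [phi(Phi_inv(p)) for p in ps]
ratios = [I_p / p for I_p, p in zip(I, ps)]
second_diff = [I[k - 1] - 2 * I[k] + I[k + 1] for k in range(1, len(I) - 1)]
print(all(d <= 1e-9 for d in second_diff))                      # True: concave
print(all(b <= a + 1e-9 for a, b in zip(ratios, ratios[1:])))   # True
```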
Take the distance δ to be given by the sup-norm, δ(x, y) = ‖x − y‖_∞. Then, for any set A ⊂ R^n, the h-neighborhood of A is A + [−h, h]^n. In this case, Bobkov [Bob96, Theorem 1.1] characterizes the symmetric log-concave measures for which I^H_µ = I_µ. The reverse relation in higher dimension between I_µ and I^H_µ when µ is log-concave is related to the so-called KLS conjecture (see for instance [LV17, LV18]).
A possible extension of the notion of bi-log-concavity is the following.
Definition 9 Let µ be a probability measure on R^d, d ≥ 1. Assume that µ is symmetric around the origin. Then µ is said to be weakly bi-log-concave (with respect to the distance δ) if the function p ↦ I^H_µ(p)/p is non-increasing on (0, 1).
The latter definition extends the definition of bi-log-concavity for symmetric measures on the real line. However, we consider the definition to be "weak" since, as we will see, it seems in fact natural to ask for more. In the following, a symmetric measure is a measure that is symmetric around the origin.
Proposition 10 Symmetric log-concave measures on R^d are weakly bi-log-concave (for the Euclidean distance).
Proof. Take u a unit vector and consider the measure µ_u defined as the projection of µ onto the line containing 0 and directed by u. By stability of log-concavity under marginalization, µ_u is a log-concave measure on R, symmetric around zero. Hence I_{µ_u} is concave and vanishes at 0, and consequently I_{µ_u}(p)/p is non-increasing. Since half-spaces are parameterized by unit vectors together with a point on the line containing zero and directed by the considered unit vector, this readily gives that I^H_µ(p)/p is non-increasing.
One can notice that the latter proof is in fact only based on the stability of log-concavity under one-dimensional marginalization. This naturally leads to the following second definition of bi-log-concavity in higher dimension.
Definition 11 Let µ be a probability measure on R^d, d ≥ 1. Then µ is said to be weakly−∗ bi-log-concave if for every line ℓ ⊂ R^d, the (Euclidean) projection measure µ_ℓ of µ onto the line ℓ is a (one-dimensional) bi-log-concave measure on ℓ (that can possibly be degenerate). More explicitly, for any x ∈ ℓ and any Borel set B ⊂ R,

µ_ℓ(x + uB) = µ({z ∈ R^d : ⟨z − x, u⟩ ∈ B}),

where u is a unit directional vector of the line ℓ.
Note that weakly−∗ bi-log-concave measures are not necessarily symmetric. In the case of symmetric measures, the notion of weak−∗ bi-log-concavity is actually a strengthening of Definition 9.
Proposition 12 A symmetric weakly−∗ bi-log-concave measure on R^d is weakly bi-log-concave.

Proof. By the parameterization of half-spaces, we have the following formula: for any p ∈ (0, 1),

I^H_µ(p) = inf{I^H_{µ_ℓ}(p) : ℓ line such that 0 ∈ ℓ}.

Then the conclusion follows by noticing that, for any line ℓ such that 0 ∈ ℓ, the projection measure µ_ℓ of µ is symmetric and bi-log-concave. Hence, I^H_{µ_ℓ}(p)/p is non-increasing on (0, 1) and so is I^H_µ(p)/p.

The following result states that the notion of weak−∗ bi-log-concavity is stable under convolution with log-concave measures.
Proposition 13 The convolution of a log-concave measure with a weakly− * bi-log-concave one is weakly− * bi-log-concave.
Proof.The formula (X + Y ) • u = X • u + Y • u shows that the projection of the convolution of two measures on a line is the convolution of the projections of measures on this line.This allows to reduce the stability through convolution by a log-concave measure to dimension one and concludes the proof.
As in the log-concave case, it is moreover directly seen that weak and weak−∗ bi-log-concavity are stable under affine transformations of the space.
Actually, in addition to containing log-concave measures and being stable under convolution with a log-concave measure, there are at least two other properties that one would naturally require from a convenient multidimensional concept of bi-log-concavity: the existence of a density with respect to the Lebesgue measure on the convex hull of its support, and the existence of a finite exponential moment for the (Euclidean) norm. We can express this latter remark through the following open problem, which concludes this note.
Open Problem: Find a nice characterization of probability measures on R^d that are weakly−∗ bi-log-concave, that admit a density with respect to the Lebesgue measure on the convex hull of their support, and whose Euclidean norm has exponentially decreasing tails.