Grounded Lipschitz functions on trees are typically flat

A grounded M-Lipschitz function on a rooted d-ary tree is an integer-valued map on the vertices that changes by at most M along edges and attains the value zero on the leaves. We study the behavior of such functions, specifically their typical value at the root v_0 of the tree. We prove that the probability that the value of a uniformly chosen random function at v_0 is more than M + t is doubly-exponentially small in t. We also show a similar bound for continuous (real-valued) grounded Lipschitz functions.


Introduction
This note studies the typical behavior of grounded Lipschitz functions on trees. For an integer M ≥ 1, call an integer-valued function f on the vertices of a graph M-Lipschitz if |f(u) − f(v)| ≤ M for every two adjacent vertices u and v. We consider the rooted d-ary tree of depth k, that is, the tree in which every non-leaf vertex, including the root, has d children. We denote this tree by T(d, k) and its root by v_0. Let L_M(d, k) be the set of all M-Lipschitz functions on T(d, k) which take the value zero on all leaves. For example, L_M(d, 1) has 2M + 1 elements, one for each of the possible values of f(v_0) between −M and M.
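Such small cases are easy to check by brute force. The sketch below (not from the paper; the function name and the level-by-level tree representation are ours) enumerates all grounded M-Lipschitz functions on T(d, k) for tiny parameters and tallies them by root value.

```python
from collections import Counter
from itertools import product

def count_by_root_value(d, k, M):
    """Brute-force census of grounded M-Lipschitz functions on T(d, k).

    Internal vertices are listed level by level; vertex i on level j has
    parent i // d on level j - 1.  Leaves are fixed to 0, so a vertex on
    the last internal level must take a value in [-M, M].
    Returns a Counter mapping root value -> number of such functions.
    """
    level_sizes = [d ** j for j in range(k)]
    offsets = [sum(level_sizes[:j]) for j in range(k)]
    n = sum(level_sizes)
    counts = Counter()
    for vals in product(range(-k * M, k * M + 1), repeat=n):
        ok = all(
            abs(vals[offsets[j] + i] - vals[offsets[j - 1] + i // d]) <= M
            for j in range(1, k)
            for i in range(level_sizes[j])
        ) and all(
            abs(vals[offsets[k - 1] + i]) <= M for i in range(level_sizes[k - 1])
        )
        if ok:
            counts[vals[0]] += 1
    return counts

# |L_M(d, 1)| = 2M + 1, one function per root value in {-M, ..., M}:
print(sum(count_by_root_value(2, 1, 3).values()))  # -> 7
```

For depth 1 this reproduces the 2M + 1 count above; for deeper trees it agrees with the recursive computation used later in the paper.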
The behavior of M-Lipschitz functions on trees and on regular expander graphs was studied in [7], where "flatness" of typical M-Lipschitz functions was proved for relatively small values of M (depending on the degree and expansion properties of the graph). For trees, the result of [7] states that if M satisfies condition (1.1) for some small absolute constant c, then a typical element of L_M(d, k) is very flat, in the sense that on all but a tiny fraction of the vertices it takes values between −M and M. Precisely, it was shown there (see [7, Theorem 1.7]) that under the assumption (1.1), if we let f be a uniformly chosen random function in L_M(d, k), then a corresponding tail bound holds for every integer s ≥ 1 and every vertex v ∈ T(d, k).
In this work we study further the distribution of the value at the root vertex v_0 for a uniformly chosen function in L_M(d, k). Our main result is that this value is very tightly concentrated around 0 for every M ≥ 1 and d ≥ 2, regardless of whether the assumption (1.1) is satisfied.
Moreover, if d ≥ 3, then the constant 9/10 above may be replaced by (3/4)^d. Of course, by symmetry, the theorem implies a corresponding bound on Pr(f(v_0) = −M − t). We prove Theorem 1.1 by induction on k. The argument has three steps. The first step establishes that p(t) := Pr(f(v_0) = t) is unimodal in t with maximum at t = 0. The second step shows that p(t) decays at least exponentially in t/M, i.e., that p(t + M) ≤ 9p(t)/10 for every t ≥ 1. In the third step, inequality (1.2) is derived by induction on t.
As a second result, we conclude that Theorem 1.1 remains valid when we let f be a uniformly chosen continuous (i.e., real-valued) grounded Lipschitz function. Formally, we call a real-valued function f on the vertices of a graph Lipschitz if |f(u) − f(v)| ≤ 1 for every pair u, v of adjacent vertices. Let L_∞(d, k) be the family of all such Lipschitz functions on T(d, k) that take the value zero on all leaves.

Theorem 1.2. Let d ≥ 2 and k ≥ 1. If f is chosen uniformly at random from L_∞(d, k), then for every x > 0, Pr(f(v_0) > 1 + x) satisfies the analogue of (1.2). Moreover, if d ≥ 3, then the constant 9/10 above may be replaced by (3/4)^d.
We also mention briefly a related model, the model of random graph homomorphisms. An integer-valued function f on the vertices of a graph is called a graph homomorphism (or a homomorphism height function) if |f(u) − f(v)| = 1 for every pair u, v of adjacent vertices. The study of the typical properties of a uniformly chosen random graph homomorphism was initiated in [1], and results were subsequently obtained for: tree-like graphs [1], the hypercube [3, 4], and (the nearest-neighbor graph on finite boxes in) the d-dimensional integer lattice Z^d for large d [5]. A lower bound on the typical range of random graph homomorphisms on general graphs was established in [2]. Graph homomorphisms are similar to 1-Lipschitz functions (see the Yadin bijection described in [5] for a precise connection) and a result analogous to our Theorem 1.1 was proved for them in [1], using a similar, though somewhat simpler, method.
We end the introduction with a discussion of the role of the assumption (1.1). In a forthcoming paper [6], we will show that under this assumption, more is true of a uniformly chosen function from L_M(d, k). In fact, at a vertex which is at even distance from the leaves, the function will take the value 0 with high probability, namely, with probability at least 1 − 2 exp(−cd/M) for some absolute constant c > 0. That is, the function value is concentrated on a single number. Note that this implies that for a vertex at odd distance from the leaves, with a similarly high probability, the function value at all of its neighbors is zero, and that conditioned on this event, the value at the vertex is uniform on {−M, . . . , M}.
We expect that such a strong concentration of the values of the random function fails when M ≫ d. More precisely, let us fix M and d such that M ≫ d and denote by L_k the distribution of the value at the root for a tree of depth k. We believe that L_k is no longer concentrated on a single value when k is even. Moreover, we suspect that L_k has a limit as k tends to infinity (so that there is asymptotically no distinction between even and odd depths). It would be interesting to establish such a transition phenomenon between the cases M ≪ d and M ≫ d.

From unimodality to doubly exponential decrease
In this section, we prove Theorems 1.1 and 1.2. Fix integers d ≥ 2 and M ≥ 1. For integers t and k ≥ 1, we let G(t, k) denote the number of functions f ∈ L_M(d, k) with f(v_0) = t. Several times in our proofs, we will use the fact that G(t, k) = G(−t, k) for every t and k, which follows by symmetry (replace f with −f).

A recursive formula
Since for every k > 1, the children of the root of T(d, k) can be regarded as roots of isomorphic copies of T(d, k − 1), we have

G(t, k) = (Σ_{s=t−M}^{t+M} G(s, k − 1))^d. (2.1)

Note also that G(t, 1) = 1 if |t| ≤ M and G(t, 1) = 0 otherwise.
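The recursion can be implemented directly. The sketch below is ours, under the reading that G(t, k) counts the functions in L_M(d, k) with root value t, so that G(t, k) = (Σ_{s=t−M}^{t+M} G(s, k − 1))^d for k > 1, with G(t, 1) = 1 exactly when |t| ≤ M.

```python
from functools import lru_cache

def make_G(d, M):
    """Return G(t, k) = #{f in L_M(d, k) : f(root) = t}, computed via
    recursion (2.1): for k > 1, the d subtrees below the root are
    independent copies of T(d, k - 1), and each child value s must
    satisfy |s - t| <= M."""
    @lru_cache(maxsize=None)
    def G(t, k):
        if abs(t) > k * M:   # the value can change by at most M per level
            return 0
        if k == 1:           # all children are leaves with value 0
            return 1
        return sum(G(s, k - 1) for s in range(t - M, t + M + 1)) ** d
    return G

G = make_G(d=2, M=1)
print([G(t, 2) for t in range(-2, 3)])  # -> [1, 4, 9, 4, 1]
```

For d = 2, M = 1, k = 2 this agrees with a brute-force enumeration, and the symmetry G(t, k) = G(−t, k) is visible in the output.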

Unimodality
The following claim establishes unimodality.

Claim 2.1. For every k ≥ 1 and every t ≥ 0, we have G(t, k) ≥ G(t + 1, k).

Proof. We prove the claim by induction on k. If k = 1, the claim holds because G(t, 1) = 1 for 0 ≤ t ≤ M and G(t, 1) = 0 for t > M.

Assume that k > 1 and t ≥ 0. By (2.1), it suffices to show that

G(t − M, k − 1) ≥ G(t + M + 1, k − 1). (2.2)

To see this, we consider two cases. First, if t − M ≥ 0, then (2.2) follows directly from the inductive assumption. Otherwise, we note that t + M + 1 ≥ M − t > 0 and hence, by symmetry and induction, G(t − M, k − 1) = G(M − t, k − 1) ≥ G(t + M + 1, k − 1).
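Claim 2.1 can be spot-checked numerically. The snippet below recomputes G via recursion (2.1) (our reading of it: G(t, k) counts the functions in L_M(d, k) with root value t) and verifies that t ↦ G(t, k) is non-increasing for t ≥ 0 in several small cases.

```python
from functools import lru_cache

def make_G(d, M):
    """G(t, k) via recursion (2.1); G(t, 1) = 1 iff |t| <= M."""
    @lru_cache(maxsize=None)
    def G(t, k):
        if abs(t) > k * M:
            return 0
        if k == 1:
            return 1
        return sum(G(s, k - 1) for s in range(t - M, t + M + 1)) ** d
    return G

def is_unimodal(d, M, k):
    """Check G(t, k) >= G(t + 1, k) for all t >= 0 (Claim 2.1)."""
    G = make_G(d, M)
    return all(G(t, k) >= G(t + 1, k) for t in range(k * M + 1))

print(all(is_unimodal(d, M, k)
          for d in (2, 3) for M in (1, 2, 3) for k in (1, 2, 3, 4)))  # -> True
```

The range of t stops at kM because G(t, k) vanishes beyond that point.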

Exponential decay of G(t, k)
The following lemma establishes exponential decay of G(t, k). We start with the case d ≥ 3; the case d = 2 is more elaborate and we handle it separately later on.

Lemma 2.2. Suppose that d ≥ 3. Then G(t + M, k) ≤ (3/4)^d · G(t, k) for every t ≥ 1 and k ≥ 1.
Proof. Fix some d ≥ 3. We prove the lemma by induction on k. Suppose that t ≥ 1. If k = 1, then G(t + M, k) = 0 and the claimed inequality holds vacuously. Assume that k > 1. To simplify the notation, we let G(s) = G(s, k − 1) for every s ∈ Z. Moreover, for a set S ⊆ Z, we let G(S) = Σ_{s∈S} G(s). If t > M, then by the inductive assumption, G({t, . . . , t + 2M}) ≤ (3/4)^d · G({t − M, . . . , t + M}), and hence, by (2.1), G(t + M, k) ≤ (3/4)^d · G(t, k). Assume that 1 ≤ t ≤ M, let A = G({0, . . . , t − 1}), B = G({1, . . . , M − t}), and C = G({t, . . . , t + M}), and observe that, by (2.1) and symmetry, G(t, k) = (A + B + C)^d and G(t + M, k) = (C + G({t + M + 1, . . . , t + 2M}))^d. A short computation using Claim 2.1 and the inductive assumption then yields G(t + M, k) ≤ (3/4)^d · G(t, k), where the final inequality holds by our assumption that d ≥ 3.
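The conclusion of Lemma 2.2 can be tested exactly with integer arithmetic (comparing 4^d · G(t + M, k) against 3^d · G(t, k) avoids floating point); the recursion for G is again our reading of (2.1).

```python
from functools import lru_cache

def make_G(d, M):
    """G(t, k) via recursion (2.1); G(t, 1) = 1 iff |t| <= M."""
    @lru_cache(maxsize=None)
    def G(t, k):
        if abs(t) > k * M:
            return 0
        if k == 1:
            return 1
        return sum(G(s, k - 1) for s in range(t - M, t + M + 1)) ** d
    return G

def decay_holds(d, M, k):
    """Check G(t + M, k) <= (3/4)^d * G(t, k) for all t >= 1 (Lemma 2.2)."""
    G = make_G(d, M)
    return all(4 ** d * G(t + M, k) <= 3 ** d * G(t, k)
               for t in range(1, k * M + 1))

print(all(decay_holds(d, M, k)
          for d in (3, 4) for M in (1, 2) for k in (1, 2, 3)))  # -> True
```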
Remark 2.3. It follows from the proof of Lemma 2.2 that even when d = 2, the statement of the lemma still holds as long as we replace the constant (3/4)^d with some constant α < 1 that satisfies inequality (2.4). Unfortunately, the smallest solution to (2.4) tends to 1 as M → ∞. We thus need a more careful analysis to handle the case d = 2. Our proof in the case d = 2 shall require a mild lower bound on M. We therefore note that if M ≤ 10 and α = 9/10, then (2.4) is satisfied.
Lemma 2.4. Suppose that d = 2 and M ≥ 11. Then G(t + M, k) ≤ (9/10) · G(t, k) for every t ≥ 1 and k ≥ 1.

Proof. We are going to prove a stronger statement, (2.5), by induction on k. If k = 1, then (2.5) holds vacuously, as G(t + M, k) = 0 for every t ≥ 1. Assume that k > 1 and fix some t ≥ 1. To simplify notation, for every s ∈ Z, we let G(s) = G(s, k − 1) and, for a set S ⊆ Z, we let G(S) = Σ_{s∈S} G(s). If t > M, then by the inductive assumption and (2.1), we have G(t + M, k) ≤ (9/10) · G(t, k). Assume then that 1 ≤ t ≤ M; by (2.1) and symmetry, G(t, k) and G(t + M, k) can be written in terms of sums of this form. We split the proof into two cases, depending on the value of t.
Case 1: t ≥ m. Identity (2.1), the inductive assumption, and Claim 2.1 imply an upper bound on G(t + M, k) (recall (2.3)). It hence follows that G(t + M, k) ≤ (9/10) · G(t, k), where in the last inequality we used the assumption that M ≥ 11.
Case 2: t < m. Identity (2.1), the inductive assumption, and Claim 2.1 imply an analogous upper bound on G(t + M, k) (recall (2.3)). We now further split into two cases, depending on whether or not inequality (2.8) is satisfied. If (2.8) holds, then, by (2.6), G(t + M, k) ≤ (9/10) · G(t, k), where in the last inequality we again used the assumption that M ≥ 11. If (2.8) does not hold, then we let D = G({t, . . . , m − 1}) and E = G({1, . . . , M}).
Identity (2.1), Claim 2.1, and the converse of (2.8) imply an upper bound on G(t + M, k) in terms of D and E. On the other hand, again by symmetry and Claim 2.1, we obtain a matching lower bound on G(t, k), and the desired inequality follows.

The full bound
The exponential decay established in Lemmas 2.2 and 2.4 easily implies our main theorems.
Corollary 2.5. Let α = 9/10; if d ≥ 3, one may instead take α = (3/4)^d. Then for every k ≥ 1 and t ≥ 1,

G(t + M, k) ≤ α^{d^{⌊(t−1)/M⌋}} · G(t, k). (2.9)

Proof. We prove the statement by induction on k. If k = 1, then G(t + M, k) = 0. Assume that k > 1. When 1 ≤ t ≤ M, we have ⌊(t − 1)/M⌋ = 0, and (2.9) follows directly from Lemmas 2.2 and 2.4. If t ≥ M + 1, then applying the inductive assumption to each summand in (2.1), we have G(t + M, k) ≤ (α^{d^{⌊(t−1)/M⌋−1}})^d · G(t, k) = α^{d^{⌊(t−1)/M⌋}} · G(t, k).

Proof of Theorem 1.2. Let f be a uniformly chosen random element of L_∞(d, k) and let x > 0. The claimed bound on the probability that f(v_0) exceeds 1 + x follows fairly easily from Theorem 1.1.
To see this, let, for every positive integer M, f_M be a uniformly chosen random element of L_M(d, k).
A moment of thought reveals that the sequence f_M/M converges to f in distribution. Indeed, letting V be the set of internal (non-leaf) vertices of T(d, k), one may naturally view L_∞(d, k) as a convex polytope P ⊆ R^V. Let µ and µ_M be the distributions of f and f_M/M, respectively. Observe that µ = λ/vol(P), where λ is the |V|-dimensional Lebesgue measure, and that µ_M is the uniform measure on the (finite) set P ∩ ((1/M)Z)^V. Since P is compact (as clearly P ⊆ [−k, k]^V), every continuous function g : P → R is uniformly continuous and therefore lim_{M→∞} ∫_P g dµ_M = ∫_P g dµ. Now, Theorem 1.1 implies that for M ≥ 1/x, letting α be as in the statement of Corollary 2.5, the probability that f_M(v_0)/M exceeds 1 + x satisfies the claimed bound, where in the last inequality of the computation we used the fact that α^{d^r}/2 ≤ (9/10)^{2^{r−1}}; the theorem follows.
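The discretization in this proof is easy to experiment with: using recursion (2.1) (again under our reading of the definition of G), one can compute Pr(f_M(v_0) > (1 + x)M) exactly as a ratio of integer counts and watch its behavior as M grows. A sketch:

```python
from fractions import Fraction
from functools import lru_cache

def make_G(d, M):
    """G(t, k) via recursion (2.1); G(t, 1) = 1 iff |t| <= M."""
    @lru_cache(maxsize=None)
    def G(t, k):
        if abs(t) > k * M:
            return 0
        if k == 1:
            return 1
        return sum(G(s, k - 1) for s in range(t - M, t + M + 1)) ** d
    return G

def tail_probability(d, k, M, x):
    """Exact Pr(f_M(v_0) > (1 + x) * M) for uniform f_M in L_M(d, k)."""
    G = make_G(d, M)
    total = sum(G(t, k) for t in range(-k * M, k * M + 1))
    tail = sum(G(t, k) for t in range(-k * M, k * M + 1) if t > (1 + x) * M)
    return Fraction(tail, total)

for M in (2, 4, 8):
    print(M, float(tail_probability(2, 2, M, 0.5)))
```

Using Fraction keeps the probabilities exact; the floats are only for display.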

Proof of Theorem 1.1.
The first part of the theorem follows immediately from the corollary above. The upper bound on Pr(f(v_0) = 0) follows from the inequality G(M, k) ≥ 2^{−d} · G(0, k), which we prove below, together with symmetry and Claim 2.1. If k = 1, we have G(M, k) = G(0, k) = 1. If k > 1, identity (2.1) and symmetry imply that

G(0, k) = (G(0, k − 1) + 2 · Σ_{s=1}^{M} G(s, k − 1))^d ≤ 2^d · (Σ_{s=0}^{M} G(s, k − 1))^d ≤ 2^d · G(M, k),

since, again by (2.1), G(M, k) = (Σ_{s=0}^{2M} G(s, k − 1))^d.
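The inequality G(M, k) ≥ 2^{−d} · G(0, k) used above is also easy to confirm numerically for small parameters (the recursion for G is once more our reading of (2.1)):

```python
from functools import lru_cache

def make_G(d, M):
    """G(t, k) via recursion (2.1); G(t, 1) = 1 iff |t| <= M."""
    @lru_cache(maxsize=None)
    def G(t, k):
        if abs(t) > k * M:
            return 0
        if k == 1:
            return 1
        return sum(G(s, k - 1) for s in range(t - M, t + M + 1)) ** d
    return G

# Check 2^d * G(M, k) >= G(0, k), i.e. G(M, k) >= 2^{-d} * G(0, k).
checks = [
    2 ** d * make_G(d, M)(M, k) >= make_G(d, M)(0, k)
    for d in (2, 3)
    for M in (1, 2, 3)
    for k in (1, 2, 3)
]
print(all(checks))  # -> True
```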