Resistance growth of branching random networks

Consider a rooted infinite Galton-Watson tree with mean offspring number $m>1$, and a collection of i.i.d. positive random variables $\xi_e$ indexed by the edges of the tree. We assign the resistance $m^d \xi_e$ to each edge $e$ at distance $d$ from the root. In this random electric network, we study the asymptotic behavior of the effective resistance and conductance between the root and the vertices at depth $n$. Our results generalize earlier work of Addario-Berry, Broutin and Lugosi on the binary tree to random branching networks.


Introduction
An electric network is an undirected, locally finite, connected graph G = (V, E) with a countable set of vertices V and a set of edges E, endowed with nonnegative numbers {r(e), e ∈ E}, called resistances, associated to the edges of G. The reciprocal c(e) = 1/r(e) is called the conductance of the edge e. It is well known that the electrical properties of the network (G, {r(e)}) are closely related to the nearest-neighbor random walk on G, whose transition probabilities from a vertex are proportional to the conductances of the incident edges. See, for instance, the book of Lyons and Peres [11] for a detailed exposition of this connection.
To study random walks in certain random environments, it is natural to consider a random electric network obtained by choosing the resistances independent and identically distributed. For example, the infinite cluster of bond percolation on Z^d can be seen as a random electric network in which each open edge has unit resistance and each closed edge has infinite resistance. Grimmett, Kesten and Zhang [7] proved that when d ≥ 3, the effective resistance of this network between a fixed point and infinity is a.s. finite, so the simple random walk on the infinite percolation cluster is a.s. transient. In [3], Benjamini and Rossignol considered a different model on the cubic lattice Z^d, where the resistance of each edge is an independent copy of a Bernoulli random variable. They showed that the point-to-point effective resistance has sub-mean variance in Z^2, whereas the mean and the variance are of the same order when d ≥ 3. The case of a complete graph on n vertices has also been studied by Grimmett and Kesten [6]. For a particular class of resistance distributions on the edges (see Theorem 3 in [6]), as n → ∞, the limit distribution of the random effective resistance between two specified vertices was identified as the sum of two i.i.d. random variables, each with the distribution of the effective resistance between the root and infinity in a Galton-Watson tree with a supercritical Poisson offspring distribution.
In this paper, we investigate the effective resistance and conductance in a supercritical Galton-Watson tree T rooted at ∅. Let p = (p_k)_{k≥0} be the offspring distribution of T, with finite mean m > 1. We assume p_0 = 0 to avoid conditioning on survival. Formally, every vertex of T can be represented as a finite word written with positive integers. The depth |x| of a vertex x in T is the number of edges on the unique non-self-intersecting path from the root ∅ to x, which also equals the length of the word representing x. Let T_n := {x ∈ T : |x| = n} denote the n-th level of T. We write ←x for the parent of x when x ≠ ∅. For each edge e = {←x, x} of T, we define its depth d(e) := |x|. Let ν be the number of children of the root, whose expected value is m. For 1 ≤ i ≤ ν, the edge {∅, i} between the root ∅ and its child i has depth 1. If x and y are vertices of T, we write x ≤ y if x is on the non-self-intersecting path connecting ∅ and y; in this case, we say that y is a descendant of x. We define T_n[x] := {y ∈ T_n : x ≤ y} as the set of vertices at depth n that are descendants of x.
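To fix ideas, the word notation above can be sketched in a few lines of Python; the helper names `depth`, `parent` and `is_ancestor` are ours, not the paper's.

```python
# Vertices of the tree as finite words of positive integers:
# the root is the empty word (), and the j-th child of x is x + (j,).

def depth(x):
    """|x|: the number of edges from the root to x."""
    return len(x)

def parent(x):
    """The parent of a non-root vertex x."""
    assert x != (), "the root has no parent"
    return x[:-1]

def is_ancestor(x, y):
    """x <= y: x lies on the path from the root to y."""
    return y[:len(x)] == x

root = ()
v = (2, 1, 3)     # third child of the first child of the second child of the root
assert depth(v) == 3
assert parent(v) == (2, 1)
assert is_ancestor((2,), v) and not is_ancestor((1,), v)
```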
If the resistance of an edge at depth d equals λ^d for a deterministic λ > 0, Lyons [8] showed that the effective resistance between the root and infinity in T is a.s. infinite if λ > m and a.s. finite if λ < m. The corresponding λ-biased random walk on T is thus recurrent if λ > m, and transient if λ < m. For the critical value λ = m, we know from a subsequent work of Lyons [9] that the network still has infinite effective resistance between the root and infinity. More precisely, the critical λ-biased random walk is null recurrent provided Σ_k (k log k) p_k < ∞.
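As a sanity check of this dichotomy, in the deterministic case the effective resistance is explicit: on the b-ary tree with resistance λ^d on every edge at depth d, shorting each level (all vertices at the same depth play symmetric roles) gives R_n = Σ_{d=1}^n (λ/b)^d, which stays bounded iff λ < b and diverges iff λ ≥ b. The Python sketch below (helper names are our own) confirms this formula against the series-parallel recurrence C_{n+1} = b C_n / (λ(1 + C_n)), C_1 = b/λ.

```python
# R_n by shorting each level: b^d parallel edges of resistance lambda^d at
# depth d contribute lambda^d / b^d in series.
def resistance_by_shorting(b, lam, n):
    return sum((lam / b) ** d for d in range(1, n + 1))

# R_n = 1/C_n via the series-parallel recurrence on the b-ary tree.
def resistance_by_recurrence(b, lam, n):
    c = b / lam                       # C_1: b parallel edges of conductance 1/lambda
    for _ in range(n - 1):
        c = b * c / (lam * (1 + c))   # edge in series with a collapsed subtree
    return 1.0 / c

b = 2
for lam in (1.5, 2.0, 3.0):
    for n in (1, 5, 20):
        r1 = resistance_by_shorting(b, lam, n)
        r2 = resistance_by_recurrence(b, lam, n)
        assert abs(r1 - r2) < 1e-8 * max(1.0, r1)

# lambda < b: R_n stays bounded (transience); lambda = b: R_n = n (recurrence).
assert resistance_by_shorting(2, 1.5, 200) < 3.0
assert abs(resistance_by_shorting(2, 2.0, 200) - 200.0) < 1e-9
```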
When the edges of T have random resistances, we are mainly interested in the analogous case of critical exponential weighting: to each edge e at depth d(e), we assign the resistance r(e) := m^{d(e)} ξ(e), (1.1) where, conditionally on T, {ξ(e)} are i.i.d. copies of a nonnegative random variable ξ. We call (T, {r(e)}) a branching random network with offspring distribution p and electric resistance ξ. For convenience, we assume that (T, {r(e)}) and ξ are independent and defined under the same probability measure P.
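The definition (1.1) can be turned into a short simulation. The following Python sketch samples the network down to depth n; all names are ours, and the particular offspring law (p_1 = p_2 = p_3 = 1/3, so p_0 = 0 and m = 2) and ξ law are hypothetical choices made only for illustration.

```python
import random

def sample_network(n, offspring, mean_m, resistance_xi, rng):
    """Return a dict: vertex (tuple word) -> resistance of the edge to its parent,
    with r(e) = mean_m ** d(e) * xi(e) as in (1.1)."""
    edges = {}
    level = [()]                     # start from the root, the empty word
    for d in range(1, n + 1):
        next_level = []
        for x in level:
            for j in range(1, offspring(rng) + 1):
                child = x + (j,)
                edges[child] = (mean_m ** d) * resistance_xi(rng)
                next_level.append(child)
        level = next_level
    return edges

rng = random.Random(0)
net = sample_network(4, lambda r: r.randint(1, 3), 2.0,
                     lambda r: r.uniform(0.5, 1.5), rng)
assert all(res > 0 for res in net.values())   # resistances are positive
assert max(len(x) for x in net) == 4          # p_0 = 0: every level is reached
```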
Let R_n (resp. C_n) be the effective resistance (resp. effective conductance) between the root ∅ and the vertices at depth n in (T, {r(e)}). When T is a deterministic binary tree, Addario-Berry, Broutin and Lugosi [1] showed that as n → ∞, provided ξ is bounded away from both zero and infinity. Their arguments are based on the concentration of C_n and R_n when the underlying tree is regular. The Efron-Stein inequality is the main tool used to deduce the upper bounds on the variance. A sub-Gaussian tail bound is also established for R_n. As observed in the concluding remarks of [1], if the tree T is random, C_n and R_n are no longer concentrated. For any nonnegative random variable X, we set where W := lim_{n→∞} m^{-n} #T_n.
We write W_n := m^{-n} #T_n. The convergence W_n → W holds almost surely and in the L^2 sense. The limit W is almost surely strictly positive. Similarly, for each vertex x ∈ T, the random variable W^{(x)} := lim_{j→∞} m^{-j} #T_{|x|+j}[x] has the same distribution as W. Using the tree notation |x| = n to denote a vertex x at depth n, we have W = m^{-n} Σ_{|x|=n} W^{(x)}. Theorem 1.1 answers some of the questions raised at the end of [1]. When the offspring number ν is not deterministic, it implies that the limit distribution of {C_n} is absolutely continuous with respect to the Lebesgue measure, which is a "scaled analogue" of Question 4.1 in Lyons, Pemantle and Peres [10]. For the absolute continuity of W, see for instance Theorem 10.4 in Chapter 1 of [2].
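For readers who want to see the martingale W_n = m^{-n} #T_n in action, here is a small Monte Carlo sketch; the offspring law p_1 = p_3 = 1/2 (so m = 2) is a hypothetical choice of ours. It checks that W_n is strictly positive, as forced by p_0 = 0, and that its empirical mean is close to E[W_n] = 1.

```python
import random

def last_level_size(n, rng):
    """#T_n for a Galton-Watson tree with offspring uniform on {1, 3}."""
    z = 1
    for _ in range(n):
        z = sum(rng.choice((1, 3)) for _ in range(z))
    return z

rng = random.Random(1)
m, n = 2.0, 8
w_last = [last_level_size(n, rng) / m ** n for _ in range(1000)]

assert min(w_last) > 0                 # p_0 = 0, so W_n > 0 almost surely
mean = sum(w_last) / len(w_last)
assert abs(mean - 1.0) < 0.15          # E[W_n] = 1 for every n
```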
For our next result, let us define (1.4) Notice that, by Theorems 22 and 23 in Dubuc [5], If p_1 m ≥ 1, then by Fatou's lemma we deduce from (1.2) and (1.5) that See also the remark at the end of Section 3.
To state a more precise asymptotic expansion for E[C_n], we define The constant c_0 appearing in the expansion above will be defined at the end of Section 4, but its explicit value is unknown to us.
To further describe the rate of convergence in (1.2), we write ξ_x := ξ({←x, x}) for every vertex x ≠ ∅. Remark that, conditionally on the first ℓ levels of the tree T, the random variables W^{(x)}, |x| = ℓ, are i.i.d. and independent of ξ_x, |x| = ℓ. Notice that and, with the same constant c_0 as in Theorem 1.3, where

The rest of the paper is organized as follows. In the next section, we recall Thomson's principle for the effective resistance and derive the recurrence relation for C_n. In Section 3, we collect some estimates on the moments of C_n. The convergence (1.5) and Theorem 1.3 will be shown in Section 4 by analyzing the recurrence equations for the moments of C_n. Similar arguments were already used in the proof of Theorem 5 in [1]. By second moment calculations, we establish Theorems 1.1 and 1.4 in Section 5, and, by proving the uniform integrability of (n^{-1} R_n)_{n≥1}, we complete the proof of Theorem 1.2 in Section 6. Finally, in Section 7 we briefly discuss the case where the scaling is changed by assigning to each edge e in T the resistance λ^{d(e)} ξ(e) with λ > m.

Preliminaries
Consider a general network G = (V, E) with resistances {r(e)}. For x, y ∈ V, we write x ∼ y to indicate that {x, y} belongs to E. To each edge e = {x, y}, one may associate two directed edges →xy and →yx. We shall denote by θ(→xy) the amount of flow along the directed edge →xy. Let A and Z be two disjoint non-empty subsets of V: A represents the source of the network and Z the sink. A flow θ from A to Z with a given strength satisfies Kirchhoff's node law, div θ(x) = 0 for all x ∉ A ∪ Z, and the requirement that the net flow out of A equals the strength. The effective resistance between A and Z can be defined as where the infimum is taken over all flows θ from A to Z with unit strength. The infimum is attained by the so-called unit current flow, which satisfies, in addition to the node law, Kirchhoff's cycle law. This flow-based characterization of the effective resistance is called Thomson's principle. The effective conductance is the reciprocal of the effective resistance.

Conditionally on the branching random network (T, {r(e)}), let X be the associated random walk on the tree T. Let ω(x, y), x ∼ y, denote the transition probabilities of X, and let π(x), x ∈ T, denote the reversible measure. Writing the conductances c(e) = 1/r(e), we have We suppose that the random walk X starts from the vertex x at time 0 under the probability measure P_{x,ω}. As a probabilistic interpretation, the effective conductance between the root and the level set {x ∈ T : |x| = n} satisfies Let C_{n+1,i} denote the effective conductance between the vertex i and T_{n+1}[i]. We also set η_i := ξ({∅, i})^{-1}, 1 ≤ i ≤ ν, which are i.i.d. and independent of ν. Observe that, conditionally on ν, (C_{n+1,i})_{1≤i≤ν} are i.i.d., independent of η_i, and distributed as C_n/m. Using the series and parallel laws of electric networks, we obtain the recurrence relation that for n ≥ 1, where for 1 ≤ i ≤ ν. Equivalently, (2.2) can also be written as
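The series-parallel computation behind this recurrence can be written out directly. In the sketch below (our own representation: a vertex is a word, and the tree maps each vertex to its children together with the resistances of the connecting edges), each child subtree is collapsed to a single conductance, put in series with the edge leading to it, and the children are then combined in parallel. The two assertions are checked by hand.

```python
def conductance_to_level(tree, x, n):
    """Effective conductance between x and the vertices at depth n below it.
    tree: vertex -> list of (child, edge_resistance); x is at depth len(x)."""
    if len(x) == n:
        return float("inf")          # x itself belongs to the target level
    total = 0.0
    for child, r in tree.get(x, []):
        c_sub = conductance_to_level(tree, child, n)
        # series law: resistances add, so the conductance through this child
        # is 1/(r + 1/c_sub) = c_sub/(1 + r*c_sub); parallel law: sum over children
        total += 1.0 / r if c_sub == float("inf") else c_sub / (1.0 + r * c_sub)
    return total

# Hand-checked example: C_1 = 1/1 + 1/2 = 3/2 and C_2 = 1/(1+4) + 1/(2+1) = 8/15.
tree = {(): [((1,), 1.0), ((2,), 2.0)],
        (1,): [((1, 1), 4.0)],
        (2,): [((2, 1), 2.0), ((2, 2), 2.0)]}
assert abs(conductance_to_level(tree, (), 1) - 1.5) < 1e-12
assert abs(conductance_to_level(tree, (), 2) - 8.0 / 15.0) < 1e-12
```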

Bounds on the expected conductance
Let η denote the reciprocal ξ^{-1}.
By concavity of the function x ↦ xy/(x+y), for fixed y > 0, as n → ∞.
Proof. Starting from (2.2), we obtain, by developing the square and using the independence after conditioning on ν. Together with Lemma 3.1, it follows that Assuming the relevant third moment is finite, by developing the third power and using the independence, Thus, for all n ≥ 1.

In the following proof, we will use the uniform flow on T to give an upper bound for R_n = C_n^{-1}. Similar arguments can be found in Lemma 2.2 of Pemantle and Peres [12].

Proof. We define on T the uniform flow Θ_unif of unit strength (with source {∅}) by setting According to Thomson's principle (2.1),

We write A := sup_{k≥1} m^{-k} #T_k, which is square integrable by Doob's L^2 maximal inequality. It follows that Using Proposition 2.3 in [12], a variant of the strong law of large numbers for exponentially growing blocks of identically distributed random variables that are independent within each block, we have Hence, almost surely, Taking expectations and using Fatou's lemma, we obtain The proof is thus completed.
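Thomson's principle makes this upper bound easy to test numerically: the energy of the unit uniform flow always dominates R_n. The sketch below (helper names are ours; the offspring and ξ laws are arbitrary choices with p_0 = 0) compares the exact R_n, obtained by series-parallel collapse, with the energy of the flow that sends mass #T_n[x]/#T_n through the edge into each vertex x.

```python
import random

def build(n, rng):
    """Random tree to depth n: vertex -> [(child, edge_resistance)]."""
    tree, level = {}, [()]
    for d in range(1, n + 1):
        nxt = []
        for x in level:
            kids = []
            for j in range(1, rng.randint(1, 3) + 1):
                c = x + (j,)
                kids.append((c, (2.0 ** d) * rng.uniform(0.5, 1.5)))
                nxt.append(c)
            tree[x] = kids
        level = nxt
    return tree, level

def conductance(tree, x, n):
    if len(x) == n:
        return float("inf")
    tot = 0.0
    for c, r in tree.get(x, []):
        s = conductance(tree, c, n)
        tot += 1.0 / r if s == float("inf") else s / (1.0 + r * s)
    return tot

def descendants_at_n(tree, x, n):
    if len(x) == n:
        return 1
    return sum(descendants_at_n(tree, c, n) for c, _ in tree.get(x, []))

rng = random.Random(7)
n = 5
tree, last = build(n, rng)
tn = len(last)
# Energy of the unit uniform flow: sum over edges of r(e) * theta(e)^2.
energy = sum(r * (descendants_at_n(tree, c, n) / tn) ** 2
             for x in tree for c, r in tree[x])
r_n = 1.0 / conductance(tree, (), n)
assert r_n <= energy + 1e-9     # Thomson: R_n <= energy of any unit flow
```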
Remark. The Nash-Williams inequality (see Section 2.5 in [11]) gives the lower bound
The integrability of W^{-1} is therefore a necessary condition for having

Asymptotic expansion of the expected conductance
Throughout this section, the assumption E[ξ^2 + ξ^{-1} + ν^3] < ∞ is in force. We first establish (1.5) in Theorem 1.2; afterwards we prove Theorem 1.3 under the stronger assumption that For every integer n ≥ 1, we write By Lemma 3.2, we have with ξ and C_n being independent. Then, developing the power of C_{n+1}, we arrive at with the constants a_1, a_2 defined as in (1.3) and (1.6).
Using the identity 1/(1+x) = 1 − x + x^2/(1+x), we obtain Hence, we have Remark that Since x_n ≥ c/n by Lemma 3.3 and y_n = O(n^{-2}), we get x_n/x_{n+1} ≤ 1 + C/n for some positive constant C independent of n. It follows that, for any i < n/2, with another constant C > 0.
Still by Lemma 3.3, we can divide all terms in (4.1) by x_n x_{n+1}, which leads to By induction, (4.2) implies that Using (4.3), we deduce that with the constant c_1 defined in (1.4). Consequently, and which gives the convergence (1.5).
Assuming from now on that E[ξ^3 + ξ^{-1} + ν^4] < ∞, we proceed to find higher-order asymptotic expansions for x_n. Using the identity by Lemma 3.2. We prove in the same manner that Hence, we deduce that Dividing all terms in (4.10) by x_{n+1}^3 gives and we get by induction that Then we have with the constant c_2 defined in (1.7). Dividing all terms in (4.9) by x_{n+1}^2 gives

For every n ≥ 1, define It has been shown that ε_n = O(n^{-1}). Plugging this into (4.13), we see that By (4.11) and (4.12), it follows that with the constant c_3 defined in (1.8). Moreover, in view of (4.7), we derive from Going back to (4.8), we obtain by the definition of ε_n that As Using (4.7), (4.11) and (4.12), we get that with the constant Finally, we have

Almost sure convergence and rate of convergence
To prove Theorems 1.1 and 1.4, let us write For every vertex x ∈ T and j ≥ 1, we also define we have Using the simple equality W = m^{-1} Σ_{i=1}^ν W^{(i)}, we deduce, by induction, that

and y_n/x_{n+1} = O(n^{-1}) by Lemma 3.3. Hence, we derive from the inequality Conditioning on the first k levels of the tree T, Using the fact that Y_n has zero mean and is uniformly bounded in L^2, we can find a constant C > 0 such that (5.1) Meanwhile, It follows that Taking k = C log n for some sufficiently large constant C, we see that Choosing the subsequence n_j = j^2, the Borel-Cantelli lemma gives that Y_{n_j} converges to 0 almost surely. The monotonicity of C_n shows that, for any n_j ≤ n < n_{j+1},

First, observe that taking the subsequence k_n = ⌈4 log n / log m⌉ in (5.1) yields By the Borel-Cantelli lemma, the preceding convergence also holds in the almost sure sense. We claim that

Note that, on the one hand, Using (4.5) and the facts that On the other hand, = 0, which yields (5.3). Therefore, In view of (4.16), we have and the convergence (1.10) follows immediately.

The expected resistance
The following lemma yields the uniform integrability of (R_n/n, n ≥ 1), and completes the proof of Theorem 1.2.

Lemma 6.1. Suppose that p_1 m < 1 and E[ξ^r + ν^{2r}] < ∞ for some r > 1. Then there exists some s > 1 such that sup

Proof. As p_1 m < 1, by Theorems 22 and 23 in Dubuc [5], there is some α > 1 such that In fact, we may take any α ∈ (1, −log p_1 / log m), with the convention that −log p_1 / log m = +∞ when p_1 = 0; the corresponding moment is finite according to Bingham and Doney [4].
Recall that the martingale Fix an arbitrary s ∈ (1, r ∧ α). By convexity, we deduce from (3.1) that It follows that For any a > 0, we claim that there exists a positive constant C = C(a, s) such that, for any k ≥ 1, m^{ks} E[(#T_k)^s e^{−a(#T_k − 1)}] ≤ C. (6.4) Indeed, by distinguishing whether #T_k ≥ k^2 or not, we bound the left-hand side by terms of the form y^s e^{−ay} and m^{ks} k^{2s} P(#T_k < k^2).
Recall that E[W^{2s}] < ∞ because s < r. Going back to the right-hand side of (6.3), we split the integral ∫_0^∞ into two parts, ∫_0^1 and ∫_1^∞. For the part ∫_1^∞ we apply (6.4) with a = φ(1), and for the part ∫_0^1 we dominate E[W^{2s} e^{−uW}] by E[W^{2s}], to arrive at Using again (6.1), we get that sup_{k≥1} I_k < ∞, which yields (6.2) and completes the proof.

General exponential weighting
Given the Galton-Watson tree T and λ > 0, one can perform the λ-exponential weighting of resistances by assigning the resistance λ^{d(e)} ξ(e) to each edge e at depth d(e). As before, conditionally on T, {ξ(e)} are i.i.d. positive random variables. In this random electric network, let C_n(λ) denote the effective conductance between the root and the vertices at depth n. Instead of (2.3), the recurrence equation now reads as where, for 1 ≤ i ≤ ν, the C_n^{(i)}(λ) are i.i.d. copies of C_n(λ), independent of (ξ_i)_{1≤i≤ν}.

Theorem 7.1. Fix λ > m. Assuming that E[ξ + ξ^{-1} + ν^2] < ∞, the limit exists and is strictly positive.
It is easy to see that the limit (7.1) of the rescaled expected conductance is strictly smaller than E[ξ^{-1}]. However, we are unable to compute it explicitly.
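In the degenerate case of a binary tree with ξ ≡ 1 (so m = 2 and E[ξ^{-1}] = 1), everything is explicit: shorting levels gives R_n(λ) = Σ_{d=1}^n (λ/m)^d, and the rescaled conductance (λ/m)^n C_n(λ) converges to 1 − m/λ, which is indeed strictly smaller than 1. The sketch below verifies this numerically; we assume here that the rescaling in (7.1) is by (λ/m)^n, and the helper name is ours.

```python
def rescaled_conductance(lam, m, n):
    """(lambda/m)^n * C_n(lambda) on the deterministic m-ary tree with xi = 1."""
    q = lam / m
    r_n = sum(q ** d for d in range(1, n + 1))   # R_n(lambda) by shorting levels
    return q ** n / r_n

m = 2.0
for lam in (3.0, 4.0, 10.0):
    limit = 1.0 - m / lam
    assert abs(rescaled_conductance(lam, m, 60) - limit) < 1e-6
```

The limit 1 − m/λ comes from the geometric sum: q^n / Σ_{d=1}^n q^d = (q−1)q^{n−1}/(q^n − 1) → (q−1)/q = 1 − m/λ as n → ∞, for q = λ/m > 1.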
The proof of Theorem 7.1 follows essentially the same lines as those of Theorem 1.1 and of (1.5), up to a few minor modifications. We leave the details to the reader.

By (4.3), the almost sure convergence of Y_n readily follows. Together with (4.6), this establishes the almost sure convergence of n C_n in Theorem 1.1.