On large deviations for the cover time of the two-dimensional torus

Let $\mathcal{T}_n$ be the cover time of the two-dimensional discrete torus $\mathbb{Z}^2_n=\mathbb{Z}^2/n\mathbb{Z}^2$. We prove that $\mathbb{P}[\mathcal{T}_n\leq \frac{4}{\pi}\gamma n^2\ln^2 n]=\exp(-n^{2(1-\sqrt{\gamma})+o(1)})$ for $\gamma\in (0,1)$. One of the main methods used in the proofs is the decoupling of the walker's trace into independent excursions by means of soft local times.


Introduction and results
Let $(X_t,\ t=0,1,2,\ldots)$ be a discrete-time simple random walk on the two-dimensional discrete torus $\mathbb{Z}^2_n=\mathbb{Z}^2/n\mathbb{Z}^2$. Define the entrance time to the site $x\in\mathbb{Z}^2_n$ by
$$T_n(x)=\min\{t\ge 0: X_t=x\},$$
and the cover time of the torus by
$$\mathcal{T}_n=\max_{x\in\mathbb{Z}^2_n}T_n(x);$$
that is, $\mathcal{T}_n$ is the first instant of time when all the sites of the torus have been visited by the walk. The analysis of the cover time of the planar random walk was suggested in [17] under the picturesque name of the "white screen problem", and was soon after popularized in the probabilistic community [1, Chapter 7]. We refer to [5] for a substantial survey on cover times, and to [16] for a short account with a focus on exceptional points. Besides being an appealing fundamental question, the study of cover times is of prime interest for the performance evaluation of broadcast procedures in random networks, see e.g. [11].
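Although no simulations appear in this paper, the first-order prediction $\frac{4}{\pi}n^2\ln^2 n$ for the cover time is easy to probe numerically. The following sketch (our illustration, not part of the paper; convergence is only logarithmic in $n$, so small sizes give rough agreement at best) estimates the mean cover time of $\mathbb{Z}^2_n$:

```python
# Monte Carlo estimate of the cover time of the n x n discrete torus by
# simple random walk, compared with the asymptotic value (4/pi) n^2 ln^2 n.
import math
import random

def cover_time(n, rng):
    """Run SRW on the n x n torus until every site is visited; return the time."""
    x, y = 0, 0
    visited = {(0, 0)}
    t = 0
    while len(visited) < n * n:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % n, (y + dy) % n
        t += 1
        visited.add((x, y))
    return t

rng = random.Random(1)
n = 16
samples = [cover_time(n, rng) for _ in range(20)]
prediction = 4 / math.pi * n**2 * math.log(n) ** 2
print(sum(samples) / len(samples), prediction)
```

The empirical mean and the asymptotic prediction are of the same order already for moderate $n$, though the second-order corrections discussed below are far from negligible at such sizes.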
Not only is the two-dimensional model natural, it is also more difficult than its higher-dimensional counterparts. This is because dimension two is critical for the walk, resulting in strong correlations. To illustrate the dimension-based comparison, observe that very fine results are available for $d\ge 3$, see e.g. [2] and references therein, and also [10], where a closely related continuous problem was studied. In contrast, in two dimensions the first-order asymptotics of the cover time was completed only recently, after a series of intermediate steps over a decade of effort. In [6] it was proved that
$$\frac{\mathcal{T}_n}{n^2\ln^2 n}\to\frac{4}{\pi}\quad\text{in probability, as } n\to\infty.\qquad(1.3)$$
Rougher results, without the precise constant, can be obtained using Matthews' method [14]. The result (1.3) was then refined in [8]; in the same paper it was suggested that $\sqrt{\mathcal{T}_n/(2n^2)}$ should be around $\sqrt{2/\pi}\,\ln n-c\ln\ln n$ for a positive constant $c$ (observe that (1.3) means that $\sqrt{\mathcal{T}_n/(2n^2)}=(\sqrt{2/\pi}+o(1))\ln n$). This can be seen as a step towards the conjecture of [4] that $\sqrt{\mathcal{T}_n/(2n^2)}$, centered around its median, should be tight and nondegenerate. Such fine properties should be related to the fine structure of the late points of the walk, i.e., the sites that get covered only "shortly" before $\mathcal{T}_n$. In spite of the very significant progress on this question achieved in [7], much remains to be discovered.

Now, we formulate our result on the deviations from below for the cover time:

Theorem 1.1. Assume that $\gamma\in(0,1)$. Then, for all $\varepsilon>0$, we have
$$\exp\big(-n^{2(1-\sqrt{\gamma})+\varepsilon}\big)\le\mathbb{P}\Big[\mathcal{T}_n\le\frac{4}{\pi}\gamma n^2\ln^2 n\Big]\le\exp\big(-n^{2(1-\sqrt{\gamma})-\varepsilon}\big)\qquad(1.4)$$
for all large enough $n$.
It should be mentioned that in [3] it was proved that it is exponentially unlikely to cover any bounded degree graph in linear (with respect to the number of vertices) number of steps.In this paper, however, we are concerned with times which differ from the cover time only by a constant factor, and so we obtain only stretched exponential decay.
Remark 1.2. In fact, in Section 3.1 we prove a bit more than the upper bound in (1.4). Namely, assume that $\gamma\in(0,1)$, fix an arbitrary $\alpha\in(\sqrt{\gamma},1)$, and tile the torus $\mathbb{Z}^2_n$ with boxes of size $n^\alpha$. Then there exist $c=c(\alpha,\gamma)>0$, $c'=c'(\alpha,\gamma)>0$ such that, at time $\frac{4}{\pi}\gamma n^2\ln^2 n$, there are at least $cn^{2(1-\alpha)}$ boxes which are not completely covered, with probability at least $1-\exp(-c'n^{2(1-\alpha)})$.

For completeness, we also include the result on the deviations from the other side:

Theorem 1.3. Assume that $\gamma>1$. Then, for all $\varepsilon>0$, we have
$$n^{-2(\gamma-1)-\varepsilon}\le\mathbb{P}\Big[\mathcal{T}_n>\frac{4}{\pi}\gamma n^2\ln^2 n\Big]\le n^{-2(\gamma-1)+\varepsilon}\qquad(1.5)$$
for all large enough $n$.
However, it should be noted that the proof of Theorem 1.3 is not difficult once one has (1.3), although, to the best of our knowledge, it did not appear in the literature explicitly in this form.
To see how the proof of Theorem 1.3 can be obtained, observe first that we have, for all $\beta>0$, $\varepsilon>0$, all large enough $n$ and all $x\in\mathbb{Z}^2_n$,
$$\max_{y\in\mathbb{Z}^2_n}\mathbb{P}_x\Big[T_n(y)>\frac{2\beta}{\pi}n^2\ln^2 n\Big]\le n^{-\beta+\varepsilon},\qquad(1.6)$$
$$\min_{y\in\mathbb{Z}^2_n}\mathbb{P}_x\Big[T_n(y)>\frac{2\beta}{\pi}n^2\ln^2 n\Big]\ge n^{-\beta-\varepsilon}.\qquad(1.7)$$
The estimate (1.6) is Lemma 3.3 of [7]; in fact, it is straightforward to modify the proof of that lemma to obtain (1.7). Now, the second inequality in (1.5) immediately follows from (1.6) and the union bound. As for the first inequality, the strategy for achieving this lower bound can be described in the following way: let the random walk evolve freely almost up to the expected cover time, so that with good probability there are still uncovered sites, and then choose any particular uncovered site and make the walk avoid it till the end. More precisely, observe that, by (1.3), for any fixed $\delta>0$ it holds that
$$\mathbb{P}\Big[\mathcal{T}_n>\frac{4}{\pi}(1-\delta)n^2\ln^2 n\Big]\ge\frac12$$
for all $n$ large enough; that is, at time $\frac{4}{\pi}(1-\delta)n^2\ln^2 n$ there is at least one uncovered site with probability at least $\frac12$. An application of (1.7) with $\beta=2(\gamma-1+\delta)$ then concludes the proof of Theorem 1.3.
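Spelled out, the chain of estimates in this argument is short; the following display (our expansion of the argument above, writing $t_\gamma:=\frac{4}{\pi}\gamma n^2\ln^2 n$) combines (1.3), the Markov property, and the avoidance estimate (1.7):

```latex
% t_gamma := (4/pi) gamma n^2 ln^2 n; delta, epsilon > 0 are arbitrary.
\begin{aligned}
\mathbb{P}\big[\mathcal{T}_n > t_\gamma\big]
 &\ge \mathbb{P}\big[\text{some site is uncovered at time } t_{1-\delta}\big]
      \cdot\min_{y}\mathbb{P}\big[y \text{ stays unvisited for }
      t_\gamma - t_{1-\delta} \text{ more units of time}\big]\\
 &\ge \tfrac12\, n^{-2(\gamma-1+\delta)-\varepsilon},
\end{aligned}
% since t_gamma - t_{1-delta} = (4/pi)(gamma-1+delta) n^2 ln^2 n; as delta and
% epsilon are arbitrary, this yields the first inequality in (1.5).
```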
One can informally interpret (1.6)–(1.7) in the following way: the hitting time of a fixed site has approximately exponential distribution with mean $\frac{2}{\pi}n^2\ln n$. First, the convergence in (1.3) agrees with the intuitive understanding that "hitting times of different sites should be roughly independent", since the maximum of $n^2$ i.i.d. exponential random variables with mean $\frac{2}{\pi}n^2\ln n$ is concentrated around $\frac{4}{\pi}n^2\ln^2 n$. Moreover, the probability for the maximum of such random variables to be larger by a factor $\gamma>1$ than this value is $n^{-2(\gamma-1)+o(1)}$. It is interesting to observe that, while Theorem 1.3 still agrees with this intuition, Theorem 1.1 does not. Indeed, the probability that the maximum of $n^2$ i.i.d. exponential random variables with mean $\frac{2}{\pi}n^2\ln n$ is at most $\frac{4}{\pi}\gamma n^2\ln^2 n$ (where $\gamma\in(0,1)$) is of order $(1-n^{-2\gamma})^{n^2}\simeq\exp(-n^{2(1-\gamma)})$, which is not the actual order of magnitude obtained in Theorem 1.1. Thus, the behavior of the lower tail of the cover time reveals the fine dependence between the hitting times of different sites of the torus.
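To make the comparison explicit, here is the computation for the maximum $M_n$ of $n^2$ i.i.d. exponential random variables with mean $\mu_n=\frac{2}{\pi}n^2\ln n$ (a worked version of the heuristic above, not a statement about the actual cover time):

```latex
% For t = (4/pi) gamma n^2 ln^2 n = 2 gamma mu_n ln n:
\mathbb{P}[M_n\le t] = \big(1-e^{-t/\mu_n}\big)^{n^2}
                     = \big(1-n^{-2\gamma}\big)^{n^2}.
% Upper tail, gamma > 1 (consistent with Theorem 1.3):
\mathbb{P}[M_n> t] \approx n^2\, n^{-2\gamma} = n^{-2(\gamma-1)}.
% Lower tail, gamma < 1 (NOT matching Theorem 1.1):
\mathbb{P}[M_n\le t] = \big(1-n^{-2\gamma}\big)^{n^2}
  \simeq \exp\big(-n^{2(1-\gamma)}\big)
  \ne \exp\big(-n^{2(1-\sqrt{\gamma})+o(1)}\big).
```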
To prove the upper bound in (1.4), we use the method of soft local times initially developed in [15], where it was used to obtain strong decoupling inequalities for the traces left by random interlacements on disjoint sets. This approach allows one to simulate an adapted process on a general space $\Sigma$ using a realization of a Poisson point process on $\Sigma\times\mathbb{R}_+$. Naturally, one can use the same realization of the Poisson process to simulate several different processes on $\Sigma$, thus giving rise to a coupling of these processes. We do this to compare the excursions of the random walk in different regions with independent excursions; that is, in some sense, we decouple the traces of the random walk in different places, which of course makes things simpler.
Let us also comment on the large deviations for the cover time of the torus in dimension $d\ge3$. This question was studied in [10] in the continuous setting, i.e., for Brownian motion. Among other results, the many-dimensional counterparts of Theorem 1.1 (only the upper bound, by $\exp(-n^{d(1-\gamma)+o(1)})$) and of (1.3) were obtained in [10]. We expect no substantial difficulties in obtaining the same results for the random walk using the same methods as in the present paper, except for the lower bound on the deviation probability from below, since the approach of Section 3.2 fails in higher dimensions.
Notational convention: when the starting point of the random walk is fixed, we indicate it in the subscript (as in $\mathbb{P}_x$, $\mathbb{E}_x$); otherwise, the initial distribution of the random walk is taken to be uniform. Positive constants (not depending on $n$, but possibly depending on the quantities, such as $\gamma$ in Theorem 1.1, which are considered to be fixed) are denoted by $c,c',c_1,c_3,c_4$, etc. Also, it is convenient to view the random walks on the torus, simultaneously for all torus sizes $n$, as the random walk on the full lattice $\mathbb{Z}^2$ observed modulo $n\mathbb{Z}^2$.

Soft local times
In this section we describe the method of soft local times [15], which is the key to the upper bound in (1.4).
First, we define the entrance time to a set $A\subset\mathbb{Z}^2_n$ by
$$T_n(A)=\min\{t\ge0: X_t\in A\}.$$
We write $x\sim y$ if $x$ and $y$ are neighbors in the graph $\mathbb{Z}^2_n$. For $A\subset\mathbb{Z}^2_n$, let us define the (inner) boundary of $A$ by
$$\partial A=\{x\in A:\ \text{there exists } y\notin A \text{ such that } x\sim y\}.$$
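As a concrete illustration of this definition (our code, not part of the paper), the inner boundary of a subset of the torus can be computed directly:

```python
# Inner boundary of a subset A of the torus Z^2_n: sites of A having at
# least one nearest neighbor outside A (neighbors taken modulo n).
def inner_boundary(A, n):
    """Return {x in A : some neighbor of x lies outside A}."""
    A = set(A)

    def nbrs(x, y):
        return [((x + 1) % n, y), ((x - 1) % n, y),
                (x, (y + 1) % n), (x, (y - 1) % n)]

    return {p for p in A if any(q not in A for q in nbrs(*p))}

# A 3x3 box inside Z^2_8: its inner boundary is everything but the center.
square = {(i, j) for i in range(2, 5) for j in range(2, 5)}
print(sorted(inner_boundary(square, 8)))
```

Note that the boundary of the full torus is empty, since no site has a neighbor outside the set.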
Next, for $A\subset\mathbb{Z}^2_n$, we define the entrance law to $A$:
$$H_A(x,y)=\mathbb{P}_x\big[X_{T_n(A)}=y\big],\qquad x\in\mathbb{Z}^2_n,\ y\in\partial A.\qquad(2.1)$$
Let us now describe the method of soft local times, which allows us to compare excursions of the random walk with independent excursions. Let
$$A=\bigcup_{j=1}^{k_0}A_j,\qquad A'=\bigcup_{j=1}^{k_0}A'_j,$$
where $A_j\subset A'_j$ for each $j$ and the sets $A'_1,\ldots,A'_{k_0}$ are disjoint, and assume that $\partial A'=\bigcup_{j=1}^{k_0}\partial A'_j$, which implies also that $\partial A=\bigcup_{j=1}^{k_0}\partial A_j$. Now, suppose that we are only interested in the trace left by the random walk on the set $A$. Then (apart from the initial piece of the trajectory, until hitting $\partial A'$ for the first time) it is enough to know the excursions of the random walk between the boundaries of $A$ and $A'$. To define these excursions, consider the following sequence of stopping times:
$$D_0=T_n(\partial A'),\qquad S_i=\min\{t>D_{i-1}: X_t\in\partial A\},\qquad D_i=\min\{t>S_i: X_t\in\partial A'\},\quad i\ge1.$$
We denote by $\Sigma_j$ the space of excursions between $\partial A_j$ and $\partial A'_j$; i.e., an element $Z$ of this space is a finite nearest-neighbor trajectory beginning at a site of $\partial A_j$ and ending on its first visit to $\partial A'_j$. Denote also $\Sigma=\bigcup_{j=1}^{k_0}\Sigma_j$. The method of soft local times, as presented in [15], provides a way of constructing the excursions of the walk $X$ between $\partial A$ and $\partial A'$ using a Poisson point process on $\Sigma\times\mathbb{R}_+$. To keep the presentation clear and visual, we use another (in this case, equivalent) way of describing this approach, through a marked Poisson process on $\partial A\times\mathbb{R}_+$.
Denote by $Z_i=(X_{S_i},\ldots,X_{D_i})$ the $i$th excursion of $X$ between $\partial A$ and $\partial A'$. According to Section 4 of [15], one can simulate the sequence of excursions $(Z_i,\ i=1,2,3,\ldots)$ in the following way, see Figure 1:

• Consider a marked Poisson point process of rate 1 (with respect to the product of the counting measure on $\partial A$ and the Lebesgue measure on $\mathbb{R}_+$) on $\partial A\times\mathbb{R}_+$, with independent marks.

• These marks are excursions of the simple random walk, starting at the corresponding site of $\partial A$ and stopped at the first visit to $\partial A'$.

• At time $D_0$, take the smallest $\xi_1>0$ such that there is exactly one point of the Poisson process on the graph of $\xi_1 H_A(x_0,\cdot)$ and nothing below this graph, where $x_0=X_{D_0}$.

• The mark of this point is our first excursion $Z_1$.
Formally, on each ray $\{y\}\times\mathbb{R}_+$ (where $y\in\partial A$) take an independent Poisson point process of rate 1. Together, these one-dimensional processes can be seen as a random Radon measure
$$\eta=\sum_{\theta\in\Theta}\delta_{(z_\theta,u_\theta)}$$
on the space $\partial A\times\mathbb{R}_+$, where $\Theta$ is a countable index set. The marks $(\Psi_\theta,\ \theta\in\Theta)$ are independent excursions of the simple random walk, starting at $z_\theta$ and stopped at the first visit to $\partial A'$. Then (cf. Propositions 4.1 and 4.3 of [15]) define
$$\xi_1=\inf\big\{s\ge0:\ sH_A(x_0,z_\theta)\ge u_\theta \text{ for some }\theta\in\Theta\big\}$$
and
$$G_1(y)=\xi_1H_A(x_0,y),\qquad y\in\partial A,$$
where $x_0=X_{D_0}$.

Figure 1: The construction of the excursions; the points of the Poisson process are represented by crosses, and the marks are pictured above them. Observe that we take the initial excursion (up to time $D_0$) out of consideration (even if $X_0\in A$).
Denote by $(z_1,u_1)$ the a.s. unique pair in $\{(z_\theta,u_\theta)\}_{\theta\in\Theta}$ with $G_1(z_1)=u_1$, and let $\Psi_1$ be the corresponding excursion. Then it holds that $\Psi_1$ is distributed as $Z_1$, and the point process $\eta-\delta_{(z_1,u_1)}$ is distributed as $\eta$ and independent of $(\xi_1,\Psi_1)$. We can proceed iteratively to define $\xi_m$, $G_m$ and $(z_m,u_m)$: with $x_{m-1}=X_{D_{m-1}}$, and with $\theta_1,\ldots,\theta_{m-1}$ denoting the indices of the previously chosen points, let
$$\xi_m=\inf\big\{s\ge0:\ G_{m-1}(z_\theta)+sH_A(x_{m-1},z_\theta)\ge u_\theta \text{ for some }\theta\in\Theta\setminus\{\theta_1,\ldots,\theta_{m-1}\}\big\}$$
and
$$G_m(y)=G_{m-1}(y)+\xi_mH_A(x_{m-1},y),\qquad y\in\partial A;$$
denote by $(z_m,u_m)$ the a.s. unique pair among the remaining points with $G_m(z_m)=u_m$, and let $\Psi_m$ be the corresponding excursion. Then one can show that $\xi_1,\xi_2,\xi_3,\ldots$ are i.i.d. random variables, exponentially distributed with parameter 1. Also, it holds that the sequence of excursions $(\Psi_1,\ldots,\Psi_m)$ is equal in law to $(Z_1,\ldots,Z_m)$, and these are independent of $\xi_1,\ldots,\xi_m$. Also,
$$\sum_{\theta\in\Theta:\ u_\theta>G_m(z_\theta)}\delta_{(z_\theta,u_\theta)}$$
is distributed as $\eta$ and independent of the above. The function $G_m$ is called the soft local time of the (excursion) process; the reason for this name is explained in Section 1.3 of [15]. According to the above definitions, the soft local time at $y$ up to the $m$th excursion is expressed as
$$G_m(y)=\sum_{i=1}^m\xi_iH_A(X_{D_{i-1}},y).$$

We need to introduce some further notation. Let us write $x\in Z$ when the excursion $Z$ passes through $x\in A$. Consider any probability measure $\widehat H_j(\cdot)$ on $\partial A_j$. Let $\widehat Z^{(j)}_1,\widehat Z^{(j)}_2,\widehat Z^{(j)}_3,\ldots\in\Sigma_j$ be a sequence of independent elements of the excursion space, chosen according to the following procedure: take a starting point $x\in\partial A_j$ with probability $\widehat H_j(x)$, and then run the simple random walk until it hits $\partial A'_j$. Similarly to the previous construction of the excursions of the random walk $X$, we can simulate the sequence $\widehat Z^{(j)}_1,\widehat Z^{(j)}_2,\ldots$ of independent excursions in the same way, and its soft local time at $y$ up to time $m$ equals
$$\widehat G^{(j)}_m(y)=\big(\xi^{(j)}_1+\cdots+\xi^{(j)}_m\big)\widehat H_j(y),$$

Figure 2: The construction of the i.i.d. excursions between $\partial A_j$ and $\partial A'_j$. It is important to observe that the points of the Poisson process may appear in a different order in this construction when compared to the corresponding excursions in Figure 1 (note that we use the same realization of the Poisson process).
where $\xi^{(j)}_1,\xi^{(j)}_2,\xi^{(j)}_3,\ldots$ is another sequence of i.i.d. Exp(1) random variables. For the construction of this sequence of independent excursions, we use the same realization of the marked Poisson point process, thus creating a coupling of the sequence of excursions of $X$ with $k_0$ collections of i.i.d. excursions (see Figure 2). At this point we have to observe that the sequence $(\xi_i,\ i\ge1)$ is not independent of the collection of sequences $(\xi^{(j)}_i,\ i\ge1,\ j=1,\ldots,k_0)$, although this fact does not result in any major complications.
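The iterative construction above is easy to mimic in code. The following minimal sketch (our illustration on an abstract finite state space; all names and parameters are ours, not from [15]) selects, for each prescribed density, the unused Poisson point minimizing $(u-G(z))/g(z)$, which is exactly the definition of the increment $\xi_m$:

```python
# Soft local times on a finite state space: points of a rate-1 Poisson
# process live on each ray {z} x R_+; the m-th sample with density g_m is
# the unused point reached first when raising the graph G by s * g_m.
import random

def poisson_rays(states, height, rng):
    """Rate-1 Poisson points (z, u) on {z} x [0, height] for each state z."""
    pts = []
    for z in states:
        u = rng.expovariate(1.0)
        while u < height:
            pts.append((z, u))
            u += rng.expovariate(1.0)
    return pts

def soft_local_time_samples(points, densities, states):
    """Pick one point per density; return the chosen states and xi increments."""
    G = {z: 0.0 for z in states}      # accumulated soft local time
    unused = set(range(len(points)))
    out = []
    for g in densities:
        # xi is the smallest s with u <= G(z) + s * g(z) for some unused point
        i = min(unused, key=lambda k: (points[k][1] - G[points[k][0]]) / g[points[k][0]])
        z, u = points[i]
        xi = (u - G[z]) / g[z]
        for y in states:              # raise the graph of G by xi * g
            G[y] += xi * g[y]
        unused.remove(i)
        out.append((z, xi))
    return out

rng = random.Random(0)
states = ["a", "b", "c"]
pts = poisson_rays(states, 50.0, rng)
dens = [{"a": 0.5, "b": 0.3, "c": 0.2}] * 30
picks = soft_local_time_samples(pts, dens, states)
print(picks[:3])
```

By the results of [15] recalled above, the chosen states are then distributed according to the prescribed densities, and the increments $\xi$ are i.i.d. Exp(1); running the same realization of `pts` with a different sequence of densities produces the coupled second process of the text.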
Let us denote by $\sigma^{(j)}_i$ the index, in the global sequence of excursions, of the $i$th excursion between $\partial A_j$ and $\partial A'_j$. We also set $\psi_{j,t}=\max\{i: S_{\sigma^{(j)}_i}\le t\}$, and then denote by $\zeta_j(t)=\psi_{j,t}$ the number of excursions between $\partial A_j$ and $\partial A'_j$ up to time $t$ (possibly including the last, incomplete one), and by $\zeta(t)=\sum_{j=1}^{k_0}\zeta_j(t)$ the total number of excursions up to time $t$.
For $j=1,\ldots,k_0$ and $b>a>0$, define the random variables

It should be observed that the analysis of the soft local times is considerably simpler in this paper than in [15]. This is because here the (conditional) entrance measures to $A_j$ are typically very close to each other (as in (2.5) below). This permits us to make sure statements about the comparison of the soft local times of different processes when the realization of the Poisson process in $\partial A_j\times\mathbb{R}_+$ is sufficiently well behaved, as e.g. in (2.6) below.
To prove (ii), fix $k\ge1$ and let $y^{(k)}$ be a maximizer of $G_k(y)/\widehat H_{j_0}(y)$ over $y\in\partial A_{j_0}$ (with the convention $0/0=+\infty$). We then argue that for all $k\ge1$ the ratios $G_k(y)/\widehat H_{j_0}(y)$, $y\in\partial A_{j_0}$, are all comparable. (2.7)

Indeed, by (2.5), the soft local time cannot grow too fast (because otherwise, recall (2.6), we would have more than $(1-v)m$ points of the Poisson process below the graph of $G_k$), and so, by (2.7), $G_k(y)/\widehat H_{j_0}(y)\le(1+v)m$ for all $y\in\partial A_{j_0}$ (see Figure 3), which implies the desired bound.

Figure 3: On the proof of Lemma 2.1. For simplicity, here we assume that $\widehat H_{j_0}\equiv h$ for a positive constant $h$.
Proof of Theorem 1.1

The proof is divided into two parts. First, in Section 3.1, we use the method of soft local times to prove the second inequality in (1.4). Then, in order to prove the first inequality in (1.4), we present a particular strategy for the walk which assures that the torus is covered by time $\frac{4}{\pi}\gamma n^2\ln^2 n$ with a not-too-small probability.

Upper bound
Note that for any fixed $x\in\mathbb{Z}^2_n$ there is a natural bijection between $\mathbb{Z}^2_n$ and $[1,n]^2\subset\mathbb{Z}^2$ such that $x$ is mapped to $(\lceil n/2\rceil,\lceil n/2\rceil)\in\mathbb{Z}^2$. Then, for $y\in\mathbb{Z}^2_n$, define $\|y-x\|$ to be the Euclidean distance between $(\lceil n/2\rceil,\lceil n/2\rceil)$ and the image of $y$, and define $\|y-x\|_1$ and $\|y-x\|_\infty$ to be the corresponding $\ell^1$ and $\ell^\infty$ distances. For $r<n/2$, we then define the discrete ball $B(x,r)\subset\mathbb{Z}^2_n$ as the set of sites which are mapped by this bijection into the Euclidean ball of radius $r$ centered at $(\lceil n/2\rceil,\lceil n/2\rceil)$.

Define excursions between the balls $B(0,r)$ and $B(0,R)$ as in Section 2 (with $A_1=B(0,r)$, $A'_1=B(0,R)$, $k_0=1$). Now, we need to control the time it takes to complete the $j$th excursion; for this, we use Lemma 3.1, which is adapted from Lemma 3.2 of [7]. Next, let us obtain the following consequence of Lemma 2.1:

Lemma 3.2. Let $0<r_n<R_n<n/3$ be such that $r_n\ge n\ln^{-h}n$ for some $h>0$. Then, for any $\varphi\in(0,1)$, there exists $\delta>0$ such that, if $\widehat H$ is a probability measure on $\partial B(0,r_n)$ with
$$\sup_{z\in\partial B(0,R_n)}\max_{y\in\partial B(0,r_n)}\Big|\frac{H_{B(0,r_n)}(z,y)}{\widehat H(y)}-1\Big|\le\delta,$$
then, as $n\to\infty$,
$$\mathbb{P}\big[\text{there exists } y\in B(0,r_n) \text{ such that } y\notin\widehat Z_m \text{ for all } m\le k_0(n)\big]\to1,\qquad(3.3)$$
where $\widehat Z_1,\widehat Z_2,\widehat Z_3,\ldots$ are i.i.d. excursions between $\partial B(0,r_n)$ and $\partial B(0,R_n)$ with entrance measure $\widehat H$, and $k_0(n)=\frac{2\varphi\ln^2R_n}{\ln(R_n/r_n)}$.

Proof. Lemma 2.1 implies that one can choose a small enough $\delta>0$ in such a way that one may couple the independent excursions with the excursion process $Z_1,Z_2,Z_3,\ldots$ of the random walk $X$ on $\mathbb{Z}^2_n$ so that, with probability converging to 1 with $n$, every site visited by $\widehat Z_1,\ldots,\widehat Z_{k_0(n)}$ is also visited by $Z_1,\ldots,Z_{\lceil(1+\delta')k_0(n)\rceil}$, where $\delta'>0$ is such that $(1+\delta')\varphi<1$. Now, choose $b$ such that $(1+\delta')\varphi<b<1$, and observe that Theorem 1.2 of [7] implies that a fixed ball of radius at least $n\ln^{-h}n$ will not be completely covered up to time $\frac{4}{\pi}bn^2\ln^2n$ with probability converging to 1. Together with Lemma 3.1, this implies (3.3), and the proof is complete.
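The choice of $k_0(n)$ in Lemma 3.2 can be motivated by a rough count (this is our back-of-the-envelope heuristic, not a statement from the paper; the expected excursion duration below is only indicative, cf. Lemma 3.1):

```latex
% If one excursion from \partial B(0,r_n) out to \partial B(0,R_n) and back
% takes expected time roughly (2/\pi) n^2 \ln(R_n/r_n), then by time
% (4/\pi)\varphi n^2 \ln^2 n the walk makes about
\frac{(4/\pi)\varphi\, n^2\ln^2 n}{(2/\pi)\, n^2\ln(R_n/r_n)}
  \;=\; \frac{2\varphi\ln^2 n}{\ln(R_n/r_n)}
% excursions; since \ln R_n = (1+o(1))\ln n when r_n \ge n\ln^{-h} n, this is
% comparable to k_0(n) = \frac{2\varphi\ln^2 R_n}{\ln(R_n/r_n)}.
```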
We continue the proof of the upper bound in Theorem 1.1. Fix an arbitrary $\alpha\in(\sqrt\gamma,1)$, and let us denote $s_n=n/\lfloor n^{1-\alpha}\rfloor$ and $k_n=\lfloor n^{1-\alpha}\rfloor^2$, so that $s_n=n^\alpha(1+o(1))$ and $k_n=n^{2(1-\alpha)}(1+o(1))$. Let us tile the (continuous) torus $\mathbb{R}^2_n:=\mathbb{R}^2/n\mathbb{Z}^2$ with $k_n$ squares of side $s_n$. Let us enumerate the squares in some way, and let $x'_1,\ldots,x'_{k_n}$ be the points at the centers of these squares. We then consider some isometric immersion of the torus $\mathbb{Z}^2_n$ into $\mathbb{R}^2_n$, and denote by $x_1,\ldots,x_{k_n}\in\mathbb{Z}^2_n$ the (discrete) sites closest to $x'_1,\ldots,x'_{k_n}\in\mathbb{R}^2_n$. Fix a small enough $b\in(0,1/3)$ (to be specified later), and define $A_j=B(x_j,bs_n)$, $A'_j=B(x_j,s_n/3)$; also, as before, set $A=\bigcup_{j=1}^{k_n}A_j$ and $A'=\bigcup_{j=1}^{k_n}A'_j$. We construct the excursions of the random walk $X$ between $\partial A_j$ and $\partial A'_j$, $j=1,\ldots,k_n$, as in Section 2. Then, fix any site $z_0\notin A'$ and define $\widehat H_j(\cdot)=H_{A_j}(z_0,\cdot)$. We need to show that the entrance measures to $A_j$, $j=1,\ldots,k_n$, are "almost equal to $\widehat H_j$" on the boundary of each ball, if the parameter $b$ is suitably chosen:

Lemma 3.3. For any $\varepsilon>0$, we can choose $b\in(0,1/3)$ in such a way that for all $y\in\partial A'$, $x\in\partial A_j$, $j=1,\ldots,k_n$, we have
$$\Big|\frac{H_{A_j}(y,x)}{\widehat H_j(x)}-1\Big|\le\varepsilon.$$

Proof. This fact follows easily e.g. from Lemma 2.2 of [7]: one can use conditioning on the position of the walk upon hitting $B(x_j,R)$ for a suitably chosen $R$, and then use (2.11) of [7].
As in Section 2, we denote by $\zeta_j$ the number of excursions of $X$ between $\partial A_j$ and $\partial A'_j$ up to time $\frac{4}{\pi}\gamma n^2\ln^2n$, and let $\zeta=\sum_{j=1}^{k_n}\zeta_j$ be the total number of excursions. Fix $\gamma'\in(\gamma,\alpha^2)$ and define the event
$$\Lambda_1=\Big\{\zeta\le\frac{2\gamma'k_n\ln^2n}{|\ln(3b)|}\Big\}.$$

Lemma 3.4. There is $c>0$ such that $\mathbb{P}[\Lambda_1]\ge1-\exp(-cn^{2(1-\alpha)})$.

Proof. It is tempting to write that the total number of excursions should have the same law as the number of excursions between $B(0,bs_n)$ and $B(0,s_n/3)$ in $\mathbb{Z}^2_{s_n}$ (if so, an application of Lemma 3.1 would do the job). In the continuous setting this would work well; unfortunately, however, $s_n$ is not necessarily an integer, which makes the above-mentioned equality in law formally false.
So, we proceed in the following way. First, by the CLT, one obtains a preliminary estimate, (3.6). Then, to find an upper bound on $\max_x\mathbb{E}_xT_n(A)$, we first approximate the random walk by Brownian motion by means of the multidimensional version (Theorem 1 of [9]) of the KMT strong approximation theorem [12], and then use Lemma 2.1 of [6] together with (3.6) to obtain the following fact: for any $\delta\in(0,\gamma'-\gamma)$, one can choose a small enough $b$ in such a way that the required bound holds. The rest of the proof goes exactly in the same way as the proof of Lemma 3.2 (relation (3.19) there) in [7].
Next, fix $\gamma''$ in such a way that $\gamma'<\gamma''<\alpha^2$. If we had at least $\frac{\gamma'}{\gamma''}k_n$ balls among $A_1,\ldots,A_{k_n}$ with the corresponding number of excursions exceeding $\frac{2\gamma''\ln^2n}{|\ln(3b)|}$ in each of them, then the total number of excursions $\zeta$ would be strictly greater than $\frac{2\gamma'k_n\ln^2n}{|\ln(3b)|}$, so the event $\Lambda_1$ would not occur. Thus, on $\Lambda_1$ we have that
$$\#\Big\{j\le k_n:\ \zeta_j\le\frac{2\gamma''\ln^2n}{|\ln(3b)|}\Big\}\ge\Big(1-\frac{\gamma'}{\gamma''}\Big)k_n,\qquad(3.8)$$
i.e., on the event $\Lambda_1$ the number of places where we have not too many excursions is of order $k_n$. Now, choose $v>0$ in such a way that $(1+2v)\gamma'\alpha^{-2}<1$, and assume that $b$ is sufficiently small, so that the hypothesis of Lemma 2.1 holds on $\mathbb{Z}^2_{s_n}$ for $r=bs_n$, $R=s_n/3$ (Lemma 3.3 assures that we can choose such a $b$). Denote the corresponding numbers of excursions by $\ell_1$ and $\ell_2$, and let $\widehat Z^{(j)}_1,\widehat Z^{(j)}_2,\widehat Z^{(j)}_3,\ldots$ be the independent excursions between $\partial A_j$ and $\partial A'_j$ obtained using the coupling of Section 2. Define the events $\Lambda^{(j)}_2=U^{\ell_1}_j$, where $U^{\ell_1}_j$ is the event in (2.6), and
$$\Lambda^{(j)}_3=\big\{\text{there exists } y\in A_j \text{ such that } y\notin\widehat Z^{(j)}_m \text{ for all } m\le\ell_2\big\}.$$
Observe that, by Lemmas 2.1 and 3.2, we have
$$\mathbb{P}\big[\Lambda^{(j)}_2\cap\Lambda^{(j)}_3\big]\to1\quad\text{as } n\to\infty\qquad(3.9)$$
for any $j=1,\ldots,k_n$. Next, choose $\hat\gamma\in\big(\frac{\gamma'}{\gamma''},1\big)$ and define the event
$$\Lambda_4=\Big\{\sum_{j=1}^{k_n}\mathbf{1}\big[\Lambda^{(j)}_2\cap\Lambda^{(j)}_3\big]\ge\hat\gamma k_n+1\Big\};\qquad(3.10)$$
observe that the indicators in the above sum are i.i.d. random variables. By (3.9), for all large enough $n$ it holds that (recall that $k_n=n^{2(1-\alpha)}(1+o(1))$)
$$\mathbb{P}[\Lambda_4]\ge1-\exp(-ck_n).\qquad(3.11)$$
But, taking (3.8) into account, we see that on $\Lambda_1\cap\Lambda_4$, at time $\frac{4}{\pi}\gamma n^2\ln^2n$, there are at least $\big(\hat\gamma-\frac{\gamma'}{\gamma''}\big)k_n$ balls among $A_1,\ldots,A_{k_n}$ which are not completely covered (observe that we have to exclude at most one ball that may have been crossed by the initial excursion $(X_0,\ldots,X_{D_0})$; this is why we put the "$+1$" in (3.10)). This means that $\mathcal{T}_n>\frac{4}{\pi}\gamma n^2\ln^2n$ on $\Lambda_1\cap\Lambda_4$, so the second inequality in (1.4) follows from (3.11) and Lemma 3.4.

Lower bound
In this section, we prove the lower bound in (1.4). For this, we propose a simple strategy for the random walk which covers $\mathbb{Z}^2_n$ before time $\frac{4}{\pi}\gamma n^2\ln^2n$ with a not-too-small probability. We start with an informal discussion to outline the main ideas.

We first divide the torus $\mathbb{Z}^2_n$ into $n^{2(1-\alpha)}$ boxes of size $n^\alpha$, with $\alpha\in(0,\sqrt\gamma)$. Since we want the random walk to cover the torus $\mathbb{Z}^2_n$ before time $t_0=\frac{4}{\pi}\gamma n^2\ln^2n$, the natural strategy is to attempt to cover each box in time at most $r_n=\frac{4}{\pi}\gamma n^{2\alpha}\ln^2n$. For this, we divide the time interval $[0,t_0]$ into the intervals $[(j-1)r_n,jr_n)$, for $j\in\{1,\ldots,n^{2(1-\alpha)}\}$, and during each of them we force the random walk to spend most of the time in the box $B_j$. In order to do this, we control the size of the excursions of the random walk outside $B_j$ and show that, with probability greater than $\exp(-c\ln^{10}n)$, the time spent by the random walk in $B_j$ is almost $r_n$. Then, we show that the trace left by the random walk on $B_j$ is not very different from the trace left on $B_j$ by a random walk on a torus a bit larger than $B_j$, with a not-too-small probability (we invite the reader to look at Figure 5 to get an idea of how this is done). Since $\alpha<\sqrt\gamma$, this allows us to apply (1.3) to conclude that, conditionally on the events mentioned above, with probability greater than a constant $c'>0$, the random walk covers the box $B_j$ during the time interval $[(j-1)r_n,jr_n)$. Finally, choosing $\alpha$ close enough to $\sqrt\gamma$ and applying the Markov property, we see that the total cost of this strategy is at least $\big(c'\exp(-c\ln^{10}n)\big)^{n^{2(1-\alpha)}}$.

Now, let us start the proof. Let $\alpha\in(0,\sqrt\gamma)$ and $N=\lceil n/\lfloor n^\alpha\rfloor\rceil$. We divide the torus $\mathbb{Z}^2_n$ into $N^2$ boxes of size $\lfloor n^\alpha\rfloor$ (i.e., each box contains $\lfloor n^\alpha\rfloor^2$ sites). The "lower left" box is called $B_1$ (in this section the torus $\mathbb{Z}^2_n$ is identified with $[0,n)^2\subset\mathbb{Z}^2$), and the other boxes are positioned and enumerated following the arrows shown in Figure 4, up to the box $B_{N^2}$. Observe that if $n$ is not divisible by $\lfloor n^\alpha\rfloor$, then the boxes $B_{jN}$, $B_{(j-1)N+1}$ in Figure 4 have some area in common, for $j\in\{1,\ldots,N\}$; the same is true for the boxes $B_j$, $B_{N^2-(j-1)}$, $j\in\{1,\ldots,N\}$.
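The arithmetic behind the cost of this strategy is worth recording (our expansion of the informal count above):

```latex
% Each of the n^{2(1-alpha)} boxes is covered during its own time slot with
% probability at least c' e^{-c ln^{10} n}; by the Markov property the slots
% multiply:
\mathbb{P}\Big[\mathcal{T}_n\le\frac{4}{\pi}\gamma n^2\ln^2 n\Big]
  \;\ge\; \big(c'e^{-c\ln^{10}n}\big)^{n^{2(1-\alpha)}}
  \;=\; \exp\big(-n^{2(1-\alpha)+o(1)}\big),
% since ln^{10} n = n^{o(1)}; taking alpha close to sqrt(gamma) then gives
% the first inequality in (1.4).
```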
Let $\eta\in\big(0,\min\{1,\frac12(\frac{\sqrt\gamma}{\alpha}-1)\}\big)$, and for all $i\in\{1,\ldots,N^2\}$ introduce the enlarged boxes $B'_i\supset B_i$ of size $\ell_n$, where $\ell_n=2\lfloor\eta n^\alpha\rfloor+\lfloor n^\alpha\rfloor$. Consider also the torus $\mathbb{Z}^2_{\ell_n}$, and fix a box $B$ of size $\lfloor n^\alpha\rfloor$ "centered" in it. Let
$$\widetilde B=\{x\in\mathbb{Z}^2_{\ell_n}:\ \|y-x\|_\infty\ge\lfloor\eta n^\alpha\rfloor \text{ for all } y\in B\}$$
be the "boundary" of the torus $\mathbb{Z}^2_{\ell_n}$. For all $i\in\mathbb{N}$, we consider a sequence $Y^{(i)}$ (independent of $X$) of i.i.d. random elements, where $H_B(x,\cdot)$ is the entrance law in $B$ for the simple random walk on the torus $\mathbb{Z}^2_{\ell_n}$ starting from $x$, defined similarly to (2.1). Using the natural identification of the boxes $B'_i$ with $\mathbb{Z}^2_{\ell_n}$ and of the boxes $B_i$ with $B$, each random element $Y^{(i)}$ will be viewed as a set of random variables indexed by $\partial B'_i$ and $j\ge1$, taking values in $B_i$.
Set $V_0=0$. For $i\in\{1,\ldots,N^2\}$, we define inductively (see Figure 5) the stopping times $\sigma^{(i)}_j$ and $\tau^{(i)}_j$ of the successive excursions of the walk with respect to the box $B_i$ during the $i$th stage (observe that for $i=1$ the value of $V_{i-1}=V_0$ is set to be equal to $0$; for the next steps, see (3.12) below). Let $\delta>0$ and recall that $r_n=\frac{4}{\pi}\gamma n^{2\alpha}\ln^2n$. We also define the number $J_i$ of these excursions and the corresponding final time $\beta_i$. Finally, we define
$$V_i=\inf\{t\ge\beta_i:\ X_t=w_i\},\qquad(3.12)$$
where $w_i$ is the lower left corner point of the box $B_{i+1}$. By transitivity of the simple random walk on the torus $\mathbb{Z}^2_n$, we have that $\mathbb{P}_x[\mathcal{T}_n\le t]=\mathbb{P}_0[\mathcal{T}_n\le t]$ for all $x\in\mathbb{Z}^2_n$; so, in the rest of the proof, we assume that $x=0$. Define $S^{(i)}$ as the trace left by the excursions of the random walk $X$ during the time intervals $[\sigma^{(i)}_j,\tau^{(i)}_j]$, $0\le j<J_i$, and $[\sigma^{(i)}_{J_i},\beta_i]$. Define the events $M_i$, for $i\in\{1,\ldots,N^2\}$, as follows:

Figure 5: The strategy for covering the box $B_i$. We let the walk evolve freely until it hits the boundary of $B'_i$. Then, we force the walk to go rapidly to a random site of $\partial B_i$ (this corresponds to the gray parts of the trajectory). This random site is chosen according to the entrance law to $B_i$, as if we had the torus $\mathbb{Z}^2_{\ell_n}$ instead of the box $B'_i$. This allows us to dominate the trace left on $B\subset\mathbb{Z}^2_{\ell_n}$ by the random walk on $\mathbb{Z}^2_{\ell_n}$ by the trace of the random walk $X$ on $B_i$.
Here, for each $i\ge1$, $Y^{(i)}=(Y^{(i)}_{j,x},\ x\in\partial B'_i,\ j\ge1)$, and the $Y^{(i)}_{j,x}$ are independent random variables such that $\mathbb{P}[Y^{(i)}_{j,x}=y]=H_B(x,y)$.