Two Particles' Repelling Random Walks on the Complete Graph

Electron. J. Probab. 19 (2014), paper 113.

Abstract. We consider two particles' repelling random walks on complete graphs. In this model, each particle has a higher probability of visiting the vertices that have been seldom visited by the other one. By a dynamical approach we prove that, if the repulsion is strong enough, the two particles' occupation measures asymptotically have small joint support almost surely.


1 Introduction and statement of result
In this paper, we consider a model of multi-particle vertex-repelling random walks, which is analogous to the well-studied reinforced random walks (RRW); see [13] for a general reference on RRW. To the best knowledge of the author, this paper is one of the first (together with [7]) to investigate multi-particle interacting random walks. Our model was proposed by Itai Benjamini around the year 2010 and can be generalized to any graph. We now define the model of two particles' repelling random walks on a complete graph. Denote the two particles by X and Y, and let G = (V, E) be the complete graph with V = {1, . . . , d}. Let X_k, Y_k be the locations of X and Y at time k on V, and let N(X, v, n), N(Y, v, n) be the numbers of visits of X and Y to vertex v by time n. Assume that N(X, v, 0) = N(Y, v, 0) = 1 for any v ∈ V. Let

x_i(n) = N(X, i, n)/(n + d),   y_i(n) = N(Y, i, n)/(n + d),   i ∈ V,   (1.1)

be the empirical occupation measures of X and Y on V by time n. Let F_n (n ∈ N) be the natural filtration generated by {X_k, 0 ≤ k ≤ n} and {Y_k, 0 ≤ k ≤ n}. Then we define the random walks (X_n, Y_n) by

P(X_{n+1} = i | F_n) = (y_i(n) ∨ δ)^{−α} / Σ_{j∈V} (y_j(n) ∨ δ)^{−α}   (1.2)

and

P(Y_{n+1} = i | F_n) = (x_i(n) ∨ δ)^{−α} / Σ_{j∈V} (x_j(n) ∨ δ)^{−α},   (1.3)

where δ, α are some fixed positive numbers (throughout, 1_{·} denotes the indicator function). Write z(n) = (x(n), y(n)). (1.4) Notice that, by the definition of (1.2) and (1.3), the random walks are lazy random walks, i.e., the particles may stay at their current locations.
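To make the dynamics concrete, here is a minimal simulation sketch of the model. We assume the transition law in which X jumps to vertex i with probability proportional to (max{y_i(n), δ})^{−α}, and symmetrically for Y, which realizes the repulsion described above; the function name and parameter values are illustrative only.

```python
import numpy as np

def repelling_walks(d=5, alpha=10.0, delta=0.01, steps=20000, seed=0):
    """Simulate two repelling random walks on the complete graph K_d.

    Each particle jumps to vertex i with probability proportional to
    (max{occupation measure of the OTHER particle at i, delta})**(-alpha).
    """
    rng = np.random.default_rng(seed)
    NX = np.ones(d)  # N(X, v, 0) = 1 for every vertex v
    NY = np.ones(d)
    for _ in range(steps):
        x = NX / NX.sum()          # empirical occupation measures
        y = NY / NY.sum()
        px = np.maximum(y, delta) ** (-alpha)   # X is repelled by Y
        py = np.maximum(x, delta) ** (-alpha)   # Y is repelled by X
        X = rng.choice(d, p=px / px.sum())
        Y = rng.choice(d, p=py / py.sum())
        NX[X] += 1
        NY[Y] += 1
    return NX / NX.sum(), NY / NY.sum()

x, y = repelling_walks()
# With strong repulsion, the overlap sum_i min(x_i, y_i) of the two
# occupation measures should be small (it would be 1 for identical measures).
print(sum(min(a, b) for a, b in zip(x, y)))
```

With a large α the two particles quickly segregate onto nearly disjoint sets of vertices, which is the qualitative content of the main theorem below.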
Here we mention that when min_{i,j∈V} {x_i(n), y_j(n)} > δ, (1.2) and (1.3) are equivalent to

P(X_{n+1} = i | F_n) = N(Y, i, n)^{−α} / Σ_{j∈V} N(Y, j, n)^{−α},   P(Y_{n+1} = i | F_n) = N(X, i, n)^{−α} / Σ_{j∈V} N(X, j, n)^{−α},

which can be viewed as a multi-particle analogue of the classical RRW with nonlinear reinforcement. In the definition of (1.2) and (1.3), we are not able to work with δ = 0 due to a technical difficulty in our proof; see Problem 4.2.
Then we can state our main result.
Our result says that, for a fixed complete graph, when the repulsion is strong enough we can push δ down to zero in the definition of our model, relaxing its restriction on the occupation measures x(n) and y(n), so that the joint support of the particles' occupation measures can be made arbitrarily small. Our result is analogous to the localization results [1, 3, 9, 10, 14, 16] for RRW models. The organization of this paper is as follows. In Section 2, we do some preparatory work for the proof of Theorem 1.1: we introduce the notion of a stochastic approximation algorithm, describe the dynamical approach, apply them to z(n), and conclude that the limit set of z(n) is contained in the chain-recurrent set of a semiflow induced by an ordinary differential equation (ODE). In Section 3, we prove Theorem 1.1. In Section 4, we propose some open problems.
2 Some preparations to prove the main result

2.1 Stochastic approximation algorithm and dynamical approach
A stochastic approximation algorithm is a discrete-time stochastic process whose form can be written as

z(n + 1) = z(n) + γ_n H(z(n), ξ(n)),   (2.1)

where H : R^m × R^k → R^m is a measurable function that characterizes the algorithm, {z(n)}_{n≥0} ⊂ R^m is the sequence of parameters to be recursively updated, {ξ(n)}_{n≥0} ⊂ R^k is a sequence of random variables defined on some probability space, and {γ_n}_{n≥0} is a sequence of "small" nonnegative numbers. Such processes were first introduced in the early 1950s in the works of Robbins and Monro [15] and Kiefer and Wolfowitz [6].
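As a toy illustration of this class of algorithms (a hypothetical scalar example, unrelated to the walks above), the original Robbins–Monro scheme locates a root of h(θ) = E[H(θ, ξ)] from noisy evaluations only:

```python
import numpy as np

rng = np.random.default_rng(1)

def H(theta, xi):
    # Noisy observation of h(theta) = 2 - theta, whose root is theta* = 2.
    return (2.0 - theta) + xi

theta = 0.0
for n in range(1, 20001):
    gamma_n = 1.0 / n        # "small" gains: gamma_n -> 0, sum gamma_n = inf
    theta = theta + gamma_n * H(theta, rng.normal())

print(theta)  # close to the root theta* = 2
```

With γ_n = 1/n and this linear h, the iterate is exactly the running average of the noisy samples 2 + ξ_k, which makes the convergence transparent.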
Observe that z(n) in (1.4) is a stochastic approximation algorithm. Indeed, from (1.1),

x_i(n + 1) − x_i(n) = (1/(n + 1 + d)) (1_{X_{n+1}=i} − x_i(n)),   i ∈ V.   (2.2)

Similarly, a difference equation (2.3) for y_i(n) can be derived. Then z(n) satisfies (2.1) with

γ_n = 1/(n + 1 + d),   (2.4)

and with H read off from (2.2) and its analogue for y(n). The dynamical approach is a method used to analyze stochastic approximation algorithms, introduced by Ljung [11] and Kushner and Clark [8]. The idea is to decouple the stochastic approximation algorithm into its mean part and the remaining so-called "noise" part, and then study the asymptotic behavior of the algorithm in terms of the behavior of the mean component. This method has been widely studied and has inspired many works, such as the book by Kushner and Clark [8], numerous articles by Kushner, and more recently the book by Benveniste, Métivier and Priouret [4].
From the above perspective, our stochastic approximation algorithm z(n) can be decomposed into its mean part and a "noise" part. Before moving on, we need to introduce some notation. Notation 2.1.
, ∀x ∈ ∆. (2.6) Observe that, by (1.2), (1.3) and (2.3),

E(1_{X_{n+1}=i} | F_n) = π_i(y(n))   and   E(1_{Y_{n+1}=i} | F_n) = π_i(x(n)),   i ∈ V.

Thus, defining {u_n}_{n≥0} ⊂ R^{2d} to be the corresponding martingale-difference noise and F = (F_1, . . . , F_{2d}) to be a vector field on D, we can write

z(n + 1) − z(n) = γ_n (F(z(n)) + u_n).   (2.9)

The above expression is a particular case of a class of stochastic approximation algorithms studied by Benaïm in [2], in which he related the behavior of the algorithm to a weak notion of recurrence for the ODE: that of chain recurrence. His theorem asserts that, under appropriate conditions, the accumulation points of {z(n)}_{n≥0} are contained in the chain-recurrent set of the semiflow generated by the ODE.
In the remainder of this section, we introduce the necessary definitions concerning semiflows, state Benaïm's theorem, and conclude the section by proving that our model satisfies the required conditions of this theorem.
In particular, with every continuous vector field F : R^m → R^m with unique integral curves we can associate a semiflow on R^m by letting Φ_t(x) be the value at time t of the solution of the ODE dx/dt = F(x) with initial condition x(0) = x. Note that our definition of "invariant" is equivalent to the definition of "positively invariant" in some of the literature.
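Numerically, the semiflow Φ_t induced by a vector field can be approximated by Euler integration of the ODE. The sketch below uses a toy field F(x) = −x (not the field of Section 2) and checks the semiflow property Φ_{t+s} = Φ_t ∘ Φ_s up to discretization error:

```python
import numpy as np

def F(x):
    # Toy vector field with unique integral curves: flow toward the origin.
    return -x

def phi(t, x, dt=1e-4):
    # Semiflow Phi_t(x): Euler-integrate dx/dt = F(x) for time t starting at x.
    x = np.asarray(x, dtype=float).copy()
    for _ in range(int(round(t / dt))):
        x = x + dt * F(x)
    return x

x0 = np.array([1.0, -2.0])
a = phi(0.3, phi(0.5, x0))   # Phi_{0.3}(Phi_{0.5}(x0))
b = phi(0.8, x0)             # Phi_{0.8}(x0): agrees with a (semiflow property)
print(a, b)
```

For this linear field the exact flow is Φ_t(x) = e^{−t} x, so the Euler approximation can also be checked against the closed form.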
Definition 2.4 (Equilibrium point). A point x ∈ Γ is called an equilibrium if Φ_t(x) = x for all t ≥ 0. The equilibrium set of Φ is the set of all its equilibrium points.
When Φ is induced by a vector field F, the equilibrium set coincides with the set of points at which F vanishes.
Definition 2.5 (Chain recurrence). Given ρ, T > 0, a (ρ, T)-chain from x to y for Φ is a finite sequence of points x = x_0, x_1, . . . , x_k = y in Γ and times t_1, . . . , t_k ≥ T such that |Φ_{t_i}(x_{i−1}) − x_i| < ρ for i = 1, . . . , k. A point x is (ρ, T)-recurrent if there is a (ρ, T)-chain from x to itself, and x is said to be chain-recurrent if it is (ρ, T)-recurrent for any ρ, T > 0.
Let CR (Φ) be the set of chain-recurrent points associated with Φ.Note that CR (Φ) is closed and invariant.
We denote the limit set of a discrete sequence {x(n)}_{n≥0} ⊂ Γ by L({x(n)}_{n≥0}). The sets describing the asymptotic behavior of the orbits of Φ are the omega limit sets. Definition 2.6 (Omega limit set). The omega limit set of w ∈ Γ, denoted by ω(w), is the set of x ∈ Γ such that lim_{k→∞} Φ_{t_k}(w) = x for some sequence t_k > 0 with lim_{k→∞} t_k = ∞.
If Γ is compact, ω(w) is a nonempty, compact, connected and invariant set.

2.2 A limit set theorem
The reason we can characterize the limit set of a random process via the chain-recurrent set of a deterministic semiflow is Theorem 1.2 of [2], which, for our purposes, is stated as follows. Theorem 2.8. Let F : R^m → R^m be a continuous vector field with unique integral curves, and let {z(n)}_{n≥0} be a solution to the recursion

z(n + 1) − z(n) = γ_n (F(z(n)) + u_n),

where {γ_n}_{n≥0} is a decreasing gain sequence (i.e., γ_n decreases to 0 and Σ_n γ_n = ∞) and {u_n}_{n≥0} ⊂ R^m. Assume that (i) {z(n)}_{n≥0} is bounded, and (ii) lim_{n→∞} sup_{k≥n} ‖ Σ_{j=n}^{k} γ_j u_j ‖ = 0. Then L({z(n)}_{n≥0}) is a connected set, chain-recurrent for the semiflow induced by F.
Since {u_n}_{n≥0} is a bounded martingale-difference sequence and Σ_n γ_n² < ∞, the sequence {M_n}_{n≥0}, M_n = Σ_{k=0}^{n} γ_k u_k, is an L²-bounded martingale and hence converges to a finite random vector in R^{2d} almost surely (see e.g. Theorem 5.4.9 of [5]). In particular, it is a Cauchy sequence, and so condition (ii) holds almost surely. Now, in view of Theorem 2.8, we will investigate the chain-recurrent set of the semiflow generated by the following ODE:

du_i(t)/dt = f(v_i(t))^{−α} / Σ_{j∈V} f(v_j(t))^{−α} − u_i(t),
dv_i(t)/dt = f(u_i(t))^{−α} / Σ_{j∈V} f(u_j(t))^{−α} − v_i(t),   i ∈ V,   (2.10)

where f is the function f(u) = u ∨ δ = max{u, δ}. We can rewrite (2.10) in vector form

dΞ(t)/dt = F(Ξ(t)),

where Ξ(t) = (u(t), v(t)) ∈ D.
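The martingale argument above for condition (ii) can be checked numerically on a hypothetical bounded noise sequence: with gains γ_k = 1/k we have Σ_k γ_k² < ∞, so the weighted sums M_n form an L²-bounded martingale and settle down.

```python
import numpy as np

rng = np.random.default_rng(2)

# M_n = sum_{k<=n} gamma_k u_k with independent, bounded, zero-mean u_k and
# square-summable gains gamma_k = 1/k: an L^2-bounded martingale.
N = 200000
gamma = 1.0 / np.arange(1, N + 1)
u = rng.uniform(-1.0, 1.0, size=N)   # bounded noise with E[u_k] = 0
M = np.cumsum(gamma * u)

# The tail of the sequence barely moves, i.e. M_n is (numerically) Cauchy.
tail_oscillation = M[N // 2:].max() - M[N // 2:].min()
print(tail_oscillation)
```

The variance of the increments past time n is of order 1/n, which is why the tail oscillation is tiny compared to the early fluctuations of M_n.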
Before moving to the proof of Theorem 1.1, we will prove a simple fact regarding (2.10).
Proposition 2.9. Any forward trajectory of (2.10) based in D remains in D.
Proof. Suppose (u, v) ∈ ∂D. Without loss of generality, we can assume that there exists some i ∈ V such that u_i = 0. Then by (2.10) we have

du_i/dt = f(v_i)^{−α} / Σ_{j∈V} f(v_j)^{−α} − u_i = f(v_i)^{−α} / Σ_{j∈V} f(v_j)^{−α} > 0.

Hence, F(u, v) points inward whenever (u, v) belongs to the boundary of D, and thus any forward trajectory based in D remains in D. This completes the proof.

3 Proof of the main result
By Theorem 2.8, the limit set of {z(n)}_{n≥0} is contained in the chain-recurrent set, and so the first step in proving Theorem 1.1 is to characterize the chain-recurrent set of our specific semiflow induced by (2.10). Recall U defined in Notation 2.1. We will conclude the proof of Theorem 1.1 by showing that {z(n)}_{n≥0} converges to the isolated unstable equilibrium (U, U) with probability zero.
Notice that the right-hand side of (3.2) depends on t only through u_i(t) and v_i(t). We have the following lemma about (3.2), which confirms that L(u, v) is a Lyapunov function on a large subset of the domain D according to Definition 2.7: (d/dt) L(Φ_t(x))|_{(u,v)} ≤ 0, with equality if and only if (u, v) = (U, U).
Lemma 3.2. When α > d − 2, U is a local minimum of the function g.
Proof. Observe that G(w_1, . . . , w_d) is a homogeneous function, and it has the same value as g(u_1, . . . , u_d) whenever u_i = w_i / Σ_{j∈V} w_j. Without loss of generality, we can assume w_d = min_{i∈V} w_i. Let W = (w, . . . , w) (w > 0); we refer to such points as the diagonal. So, to prove the lemma, it suffices to prove that W is a local minimum of G(w_1, . . . , w_d).
First, by direct calculation, we can check that G(w 1 , . . ., w d ) has zero gradient at W , i.e. ∇G| W = 0. Hence, W is a critical point of G(w 1 , . . ., w d ).
Further, we will prove that G(w_1, . . . , w_d) is convex along all directions other than the diagonal. We calculate H, the Hessian matrix of G(w_1, . . . , w_d) at W, and find H = (2/(d²w²))P. We want to calculate the eigenvalues of P first. Let Q be the matrix defined by P = (α + 1)I + Q, where I is the identity matrix. By direct calculation, we obtain the eigenvalues λ of Q.
Then, shifting Q's eigenvalues by α + 1, we get the eigenvalues λ of P; finally, we derive the eigenvalues λ of H. Thus, when α > d − 2, one of H's eigenvalues is zero and all the others are strictly positive.
It is easy to check that the sum of each row of H is zero, which means H W = 0; that is, the diagonal is an eigenvector associated with H's zero eigenvalue. This proves that G(w_1, . . . , w_d) is convex along all directions other than the diagonal, and hence W is a local minimum of G, which proves the lemma.
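The linear-algebra step used above — a symmetric matrix whose rows sum to zero annihilates the diagonal direction — can be checked numerically. The matrix below is a hypothetical stand-in (the graph Laplacian of K_4), not the actual Hessian of Lemma 3.2; it merely has the same two structural features, zero row sums and a single zero eigenvalue:

```python
import numpy as np

d = 4
H = d * np.eye(d) - np.ones((d, d))   # symmetric, each row sums to zero

W = np.ones(d)                        # the "diagonal" direction
print(H @ W)                          # the zero vector: eigenvalue 0 along W

eigvals = np.linalg.eigvalsh(H)       # eigenvalues in ascending order
print(eigvals)                        # one zero eigenvalue, the rest positive
```

So such a matrix is positive semidefinite with kernel spanned by the diagonal, exactly the configuration exploited in the proof.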
Keeping the notation of Lemma 3.2, we have the following lemma.
Proof. It is equivalent to prove that inequality (3.5) holds for any u = (u_1, . . . , u_d) in the interior of ∆, with equality if and only if u = U. We divide the proof of (3.5) into two cases: (1) u is in a neighborhood of U; (2) u is bounded away from U, i.e., for some fixed 0 < κ < 1, u satisfies min_{i∈V} u_i < κ/d.
To prove case (2), we first use the minimum coordinate of u to bound the right-hand side of (3.5) from above. More precisely, for fixed d and α, we prove that the bound (3.6) holds for any u in the interior of ∆. Without loss of generality, we can assume u_d = min_{i∈V} u_i. Then, letting a_i = min_{i∈V} u_i / u_i = u_d/u_i ∈ (0, 1], (3.6) is equivalent to an inequality (3.7) in the variables a_i ∈ (0, 1] (i = 1, . . . , d − 1). Inequality (3.7) follows from Hölder's inequality, which proves (3.7) and hence (3.6). Notice that when α ≥ log d / log(2 − κ) − 1, a corresponding inequality holds for any u_d ∈ (0, κ/d). Then let α_0(d) be defined accordingly. When α > α_0(d), the above two cases combined imply that (3.5) holds for any u in the interior of ∆, where the last step is obtained by repeating the argument in the previous steps. Thus we have proved (3.11).
Observe that the elementary inequality (3.13) holds. Then it follows from (3.12) and (3.13) that the desired bound holds. We distinguish three cases: (1) min_{i∈V} f(u_i) > δ and min_{i∈V} f(v_i) > δ; (2) min_{i∈V} f(u_i) = δ and min_{i∈V} f(v_i) > 2δ, or the symmetric case min_{i∈V} f(v_i) = δ and min_{i∈V} f(u_i) > 2δ; (3) min_{i∈V} f(u_i) = δ and min_{i∈V} f(v_i) ≤ 2δ, or the symmetric case min_{i∈V} f(v_i) = δ and min_{i∈V} f(u_i) ≤ 2δ.
Let us prove case (1). Observe that min_{i∈V} f(u_i) > δ and min_{i∈V} f(v_i) > δ; by (3.2), it is then equivalent to prove (3.10). By Lemma 3.4, case (1) follows if α(d) > α_0(d) + 1. Notice that in case (1), by Lemma 3.4, when α(d) > α_0(d) + 1, (u, v) = (U, U) is the only point where (d/dt)(L(Φ_t(x)))|_{(u,v)} ≤ 0 holds with equality. Now we prove case (2); we only treat the case min_{i∈V} f(u_i) = δ and min_{i∈V} f(v_i) > 2δ, the other case being symmetric. The last step of the corresponding estimate is by (3.6), which actually holds for any collection of positive numbers. It then follows from the assumptions min_{i∈V} f(u_i) = δ and min_{i∈V} f(v_i) > 2δ that the desired bound holds. In case (3), we only treat the case min_{i∈V} f(u_i) = δ and min_{i∈V} f(v_i) ≤ 2δ. First we can choose α > log₂ d such that (3/2) d^{1/α} < 3. Then, by the definition of D_δ, (3.15) and (3.16) together imply the desired bound, and we have proved the lemma.

3.1 The main lemma
We now come to the main lemma, which characterizes the chain-recurrent set CR(Φ) of our specific semiflow Φ.
Note that M_1 = S_δ, M_2 = D, and (U, U) ∈ M_2 \ M_1 when δ is small enough. By Lemma 3.1 and Proposition 2.9, the M_j (j = 0, 1, 2) are compact invariant sets. Clearly, the lemma will follow once we prove Lemma 3.6, which is an easy application of a theorem due to Pemantle [12] and which, for our purposes, is stated as follows. Theorem 3.7 ([12, Theorem 1]). Let z(n) be a stochastic process satisfying

z(n + 1) − z(n) = γ_n (F(z(n)) + u_n)

with E(u_n | F_n) = 0. Assume that z(n) always remains in a bounded domain D. Let p be any point in the interior of D with F(p) = 0, and let N be a neighborhood of p. Assume that there are constants c_1, c_2 > 0 for which the following conditions are satisfied whenever z(n) ∈ N and n is sufficiently large: (1) p is an unstable critical point. The rest of this section verifies that z(n) in (1.4) satisfies the conditions of Theorem 3.7 with p = (U, U). First, it is easy to check that p is a critical point of the vector field F in (2.8), i.e. F(U, U) = 0. Before proving that p is unstable, we need to introduce a formal definition. Definition 3.8 (attracting/unstable point). Let T be the linear approximation of a vector field F near a critical point p, so that F(p + w) = T(w) + O(|w|²). Then: (a) if all the eigenvalues of T have strictly negative real part, p is called an attracting point;
(b) if some eigenvalue of T has strictly positive real part, p is called an unstable point.
Lemma 3.9. The point p = (U, U) is an unstable critical point of F.
Proof. In a neighborhood of (U, U), F has the Taylor expansion F(p + w) = DF|_p(w) + O(|w|²), where DF|_p is the Jacobian matrix at p and w is a 2d-dimensional vector in a neighborhood of 0. By direct calculation, one obtains the expression of DF|_p and checks that it has an eigenvalue with strictly positive real part, so that p is unstable. It remains to verify condition (2) of Theorem 3.7, which is the statement of the following lemma.
Lemma 3.10. In a small neighborhood of p = (U, U), there exists some constant c_1 > 0 such that E((u_n · θ)_+ | F_n) ≥ c_1 for every unit vector θ ∈ T D.
Proof. For any fixed i, j ∈ V, conditioning on the event that X_{n+1} = i, Y_{n+1} = j, we obtain the expression (3.24) for u_n · θ. Now we prove that for any unit vector θ ∈ T D its maximum coordinate is bounded from below by a positive number; more precisely, there exists some i_0 ∈ V satisfying (3.28). Then, by Σ_{k=d+1}^{2d} θ_k = 0, there also exists some j_0 ∈ V such that θ_{j_0+d} ≥ 0. Because (x(n), y(n)) lives in a small neighborhood of (U, U), π(x(n)) and π(y(n)) are also close to U, and hence both Σ_{m∈V} π_m(y(n)) θ_m and Σ_{s∈V} π_s(x(n)) θ_{s+d} in (3.24) are close to zero. Therefore, (3.24) yields a lower bound on E((u_n · θ)_+ | F_n).
Again by the fact that π(x(n)) and π(y(n)) are close to U, both P(X_{n+1} = i_0 | F_n) and P(Y_{n+1} = j_0 | F_n) are close to 1/d. Then, together with (3.28), it follows that E((u_n · θ)_+ | F_n) is uniformly bounded from below by some positive constant. This completes the proof.
Finally, we can apply Theorem 3.7, obtaining Lemma 3.6.This also completes the proof of Theorem 1.1.
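The phenomenon behind Theorem 3.7 can be illustrated on a hypothetical one-dimensional example, unrelated to our specific field F: for the toy field F(z) = z − z³ the origin is an unstable critical point, and a stochastic approximation with bounded, zero-mean, nondegenerate noise, even started exactly at 0, typically ends up near one of the stable equilibria ±1.

```python
import numpy as np

def run(seed, steps=50000):
    # z(n+1) - z(n) = gamma_n (F(z(n)) + u_n) with F(z) = z - z**3.
    rng = np.random.default_rng(seed)
    z = 0.0                                       # start AT the unstable point
    for n in range(1, steps + 1):
        gamma_n = 1.0 / n
        u_n = 1.0 if rng.random() < 0.5 else -1.0  # bounded noise, E[u_n] = 0
        z = z + gamma_n * ((z - z**3) + u_n)
    return z

finals = [run(s) for s in range(20)]
print(finals)  # most runs sit near +1 or -1, away from the unstable point 0
```

This matches the conclusion P(lim_n z(n) = p) = 0 of Theorem 3.7: the noise keeps kicking the iterate off the unstable point, after which the drift carries it toward a stable equilibrium.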

4 Further problems
This paper is a starting point for understanding the behavior of multi-particle repelling random walks. The general question remains wide open. The dynamical approach should still work, but the corresponding dynamical system becomes more complex and harder to analyze.
Regarding the model we just studied, we propose the following conjecture. Notice that when δ = 0, the vector field F is not well defined on the boundary of D. In particular, F is not continuous at the boundary, and hence Theorem 2.8, and with it the proof of Theorem 1.1, is invalid in this case.

Theorem 1.1. For any fixed positive integer d ≥ 3, there exists some α(d) such that, when α ≥ α(d), for any fixed δ > 0 in the definition of (1.2) and (1.3), the two components x(n) and y(n) of z(n) in (1.4) asymptotically have joint support bounded by 4δ almost surely.
Denote the relative interior of D by D̊ and the boundary of D by ∂D; let T D be the set identified with the tangent space of D at each point.

Finally, we need to combine the two cases. From case (1), for fixed d and α > d − 2, there exists a neighborhood N(U, α) of the uniform distribution such that (3.5) holds for any u ∈ N(U, α). For any u ≠ U in the interior of ∆, g(u_1, . . . , u_d) in Lemma 3.2 is an increasing function of α. This allows us to take a common neighborhood N(U, d) = ∩_{α>d−1} N(U, α), depending only on d, such that (3.5) holds. Take some κ = κ(d) < 1 such that

(3.10), with equality if and only if (u, v) = (U, U).
Proof. First, for any fixed u, v in the interior of ∆, we bound the left-hand side of (3.10) from below by a function of the minimum coordinates of u and v. More precisely, we construct two d-dimensional vectors ū and v̄ from the minimum coordinates of u and v,

ū = (min_{i∈V} u_i, . . . , min_{i∈V} u_i, 1 − (d − 1) min_{i∈V} u_i),
v̄ = (1 − (d − 1) min_{i∈V} v_i, min_{i∈V} v_i, . . . , min_{i∈V} v_i),

and then we prove the corresponding bound for the sum over i = 1, . . . , d.

(2) E((u_n · θ)_+ | F_n) ≥ c_1 for every unit vector θ ∈ T D (see the definition of T D in Notation 2.1); (3) ‖u_n‖ ≤ c_2; where (u_n · θ)_+ = max{u_n · θ, 0} is the positive part of u_n · θ. Assume also that F is smooth enough to apply the stable manifold theorem (at least C²). Then P(lim_{n→∞} z(n) = p) = 0.