Conditional propagation of chaos for mean field systems of interacting neurons

Abstract: We study the stochastic system of interacting neurons introduced in De Masi et al. (2015) and in Fournier and Löcherbach (2016) in a diffusive scaling. The system consists of N neurons, each spiking randomly with rate depending on its membrane potential. At its spiking time, the potential of the spiking neuron is reset to 0 and all other neurons receive an additional amount of potential which is a centred random variable of order 1/√N. In between successive spikes, each neuron's potential follows a deterministic flow. We prove the convergence of the system, as N → ∞, to a limit nonlinear jumping stochastic differential equation driven by a Poisson random measure and an additional Brownian motion W which is created by the central limit theorem. This Brownian motion underlies each particle's motion and induces a common noise factor for all neurons in the limit system. Conditionally on W, the different neurons are independent in the limit system. This is the conditional propagation of chaos property. We prove the well-posedness of the limit equation by adapting the ideas of Graham (1992) to our frame. To prove the convergence in distribution of the finite system to the limit system, we introduce a new martingale problem that is well suited for our framework. The uniqueness of the limit is deduced from the exchangeability of the underlying system.


Introduction
This paper is devoted to the study of the Markov process X^N_t = (X^{N,1}_t, . . . , X^{N,N}_t) taking values in R^N and having generator A^N, defined for any smooth test function ϕ : R^N → R, where x = (x_1, . . . , x_N) and where e_j denotes the j-th unit vector in R^N. In the formula for A^N, α > 0 is a fixed parameter and ν is a centred probability measure on R having a second moment.
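In view of the dynamics described below (deterministic loss of potential at rate α, spikes at rate f, reset to 0, common centred kicks of size u/√N), a sketch of the generator consistent with this description is:

```latex
A^N \varphi(x) \;=\; -\alpha \sum_{i=1}^N x_i\, \partial_i \varphi(x)
\;+\; \sum_{i=1}^N f(x_i) \int_{\mathbb R}
\Big[\varphi\Big(x - x_i e_i + \frac{u}{\sqrt N}\sum_{j\neq i} e_j\Big)
 - \varphi(x)\Big]\, \nu(du).
```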
Informally, the process (X^{N,j})_{1≤j≤N} solves the system (1.1), where the U_j(s) are i.i.d. centred random variables distributed according to ν, and where, for each 1 ≤ j ≤ N, Z^{N,j} is a simple counting process on R_+ having stochastic intensity s → f(X^{N,j}_{s−}).
The particle system (1.1) is a version of the model of interacting neurons considered in [5], inspired by [11], and then further studied in [10] and [4]. The system consists of N interacting neurons. In (1.1), Z^{N,j}_t represents the number of spikes emitted by neuron j in the interval [0, t], and X^{N,j}_t the membrane potential of neuron j at time t. Spiking occurs randomly following a point process of rate f(x) for any neuron whose membrane potential equals x. Each time a neuron emits a spike, the potentials of all other neurons receive an additional amount of potential. In [5], [10] and [4] this amount is of order N^{−1}, leading to classical mean field limits as N → ∞. In contrast, in the present article we study a diffusive scaling where all neurons j ≠ i receive the same random quantity U/√N at the spike times t of neuron i. The random variable U is centred, modeling the fact that the synaptic weights are balanced. Moreover, right after its spike, the potential of the spiking neuron i is reset to 0, interpreted as the resting potential. Finally, in between successive spikes, each neuron loses potential at rate α.
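In view of this description, the dynamics (1.1) can be sketched coordinatewise as follows (a hedged reconstruction; the last term encodes the reset to 0 of neuron i at its own spike times):

```latex
X^{N,i}_t \;=\; X^{N,i}_0 \;-\; \alpha \int_0^t X^{N,i}_s\, ds
\;+\; \frac{1}{\sqrt N} \sum_{j \neq i} \int_0^t U_j(s)\, dZ^{N,j}_s
\;-\; \int_0^t X^{N,i}_{s-}\, dZ^{N,i}_s .
```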
Before introducing the exact limit equation for the system (1.1), let us explain informally how the limit particle system associated to (X^{N,i})_{1≤i≤N} should a priori look. Suppose for the moment that we already know that there exists a process (X̄^1, X̄^2, X̄^3, . . .) ∈ D(R_+, R)^{N*} such that for all K > 0, L(X^{N,1}, . . . , X^{N,K}) converges weakly to L(X̄^1, . . . , X̄^K) in D(R_+, R)^K, as N → ∞. In equation (1.1) the only term that depends on N is the martingale term, which is approximately a normalized sum of the centred jump increments. Because of the scaling in N^{−1/2}, the limit martingale M_t will be a stochastic integral with respect to some Brownian motion, and its variance the limit of σ^2 ∫_0^t N^{−1} Σ_{j=1}^N f(X^{N,j}_s) ds,

where σ² is the variance of any of the U_j(s). Therefore, the limit martingale must be a stochastic integral with respect to a one-dimensional standard Brownian motion W; here μ^N_s denotes the empirical measure of the system (X^{N,j}_s)_{1≤j≤N}.
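The heuristics above can be summarized by the following sketch, writing M^N for the martingale part of (1.1):

```latex
M^{N}_t \approx \frac{1}{\sqrt N} \sum_{j=1}^N \int_0^t U_j(s)\, dZ^{N,j}_s,
\qquad
\langle M^{N}\rangle_t \approx \sigma^2 \int_0^t \mu^N_s(f)\, ds
\;\xrightarrow[N\to\infty]{}\; \sigma^2 \int_0^t \mu_s(f)\, ds,
\qquad\text{suggesting}\qquad
M_t = \sigma \int_0^t \sqrt{\mu_s(f)}\, dW_s .
```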
Since the law of the N-particle system (X^{N,1}, . . . , X^{N,N}) is symmetric, the law of the limit system X̄ = (X̄^1, X̄^2, X̄^3, . . .) must be exchangeable, that is, for all finite permutations σ, we have that L(X̄^{σ(1)}, X̄^{σ(2)}, . . .) = L(X̄). In particular, the theorem of Hewitt-Savage, see [13], implies that the random limit μ_s exists. Supposing that μ^N_s converges, it necessarily converges towards μ_s. Therefore, X̄ should solve the limit system (1.3), where each Z̄^i has intensity t → f(X̄^i_{t−}), and where μ_s is given by (1.2). The above arguments are made rigorous in Sections 3.1 and 3.2 below. Let us briefly discuss the form of the limit equation (1.3). As we have already observed in a different framework in our previous paper [9], the scaling in N^{−1/2} in (1.1) creates a Brownian motion W in the limit system (1.3). We will show that the presence of this Brownian motion entails a conditional propagation of chaos property, that is, the conditional independence of the particles given W. In particular, the limit measure μ_s will be random. This differs from the classical framework, where the scaling is in N^{−1} (see e.g. [6], [8] in the framework of Hawkes processes, and [5], [10] and [4] in the framework of systems of interacting neurons), leading to a deterministic limit measure μ_s and the true propagation of chaos property, implying that the particles of the limit system are independent. This is not the first time that conditional propagation of chaos is studied in the literature; it has already been considered e.g. in [2], [3] and [7]. But in these papers the common noise, represented by a common (possibly infinite dimensional) Brownian motion, is already present at the level of the finite particle system, the mean field interactions act on the drift of each particle, and the scaling is the classical one in N^{−1}.
In contrast, in our model this common Brownian motion is only present in the limit, and it is created by the central limit theorem as a consequence of the joint action of the small jumps of the finite size particle system. Moreover, in our model the interactions survive as a variance term in the limit system, due to the diffusive scaling in N^{−1/2}. Now let us discuss the form of the random measure μ_s appearing in (1.2). The theorem of Hewitt-Savage, [13], implies that the law of (X̄^i_s)_{i≥1} is a mixture directed by the law of μ_s. As has been remarked in [2] and [3], this conditioning reflects the dependencies between the particles.
We will show that the variables X̄^i are conditionally independent given the Brownian motion W. As a consequence, μ_s is necessarily given by the conditional law of the solution given the Brownian motion, that is, P-almost surely, μ_s = P(X̄^1_s ∈ · | W). To prove this, we first show that, under suitable conditions on the parameters of the system, the sequence μ^N is tight. We then follow a classical road and identify every possible limit as a solution of a martingale problem. Since the random limit measure μ will only be the directing measure of the limit system (that is, the conditional law of each coordinate, but not its law), this martingale problem is not a classical one. It is in particular designed to reflect the correlations between the particles and to describe all possible limits of couples of neurons. To identify μ as the conditional law given W, that is, to prove that the only common randomness is the one present in the driving Brownian motion W, we introduce an auxiliary particle system which is a mean field particle version of the limit system, constructed with the same underlying Brownian motion, and we provide an explicit control on the distance between the two systems.
Organisation of the paper. In Section 2, we state the assumptions and formulate the main results on the well-posedness of the limit system, Theorems 2.6 and 2.11. Section 3 is devoted to the proof of the convergence of μ^N := N^{−1} Σ_{j=1}^N δ_{X^{N,j}} (Theorem 2.13).
In particular, we introduce our new martingale problem in Section 3.2 and prove the uniqueness of the limit law in Theorem 3.4. Finally, some of our proofs are gathered in Section 4.

Notation
We use the following notation throughout the paper. If E is a metric space, we denote by P(E) the space of probability measures on E, endowed with the topology of weak convergence.
For any n, p ∈ N*, we denote by C^n_b(R^p) (resp. C^n_b(R^p, R_+)) the set of real-valued functions g (resp. non-negative functions g) defined on R^p which are n times continuously differentiable and such that g^(k) is bounded for each 0 ≤ k ≤ n, and by C^n_c(R^p) the set of functions g ∈ C^n_b(R^p) that have compact support. In addition, in what follows D(R_+, R) denotes the space of càdlàg functions from R_+ to R, endowed with the Skorokhod metric, and C and K denote arbitrary positive constants whose values can change from line to line in an equation. We write C_θ and K_θ if the constants depend on some parameter θ.

The finite system
We consider, for each N ≥ 1, a family of i.i.d. Poisson measures (π^i(ds, dz, du))_{i=1,...,N} on R_+ × R_+ × R having intensity measure ds dz ν(du), where ν is a probability measure on R, as well as an i.i.d. family of initial conditions (X^{N,i}_0)_{i=1,...,N}, distributed according to ν_0 and independent of the Poisson measures; the system (X^{N,i})_{1≤i≤N} then solves (2.1) for t ≥ 0. The coefficients of this system are the exponential loss factor α > 0, the jump rate function f : R → R_+ and the probability measures ν and ν_0.
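To fix ideas, here is a minimal simulation sketch of the finite system (2.1) via an Euler/thinning scheme. All numerical choices are illustrative assumptions, not the paper's: the rate function f below is a bounded sigmoid with inf f > 0 (compatible with Assumption 2.3.1), and Gaussian laws stand in for ν and ν_0.

```python
import numpy as np

def simulate_network(N=100, T=5.0, dt=1e-3, alpha=1.0, sigma=1.0, seed=0,
                     f=lambda x: 0.5 + 1.0 / (1.0 + np.exp(-x))):
    """Euler/thinning sketch of the finite system (2.1).

    Between spikes each potential decays at rate alpha; neuron i spikes
    with rate f(X^{N,i}); at a spike, every neuron receives the same
    centred kick U/sqrt(N) and the spiking neuron is then reset to 0.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(0.0, 1.0, size=N)   # initial potentials, a stand-in for nu_0
    spike_count = 0
    for _ in range(int(T / dt)):
        X = X - alpha * X * dt         # deterministic exponential loss of potential
        fired = np.flatnonzero(rng.random(N) < f(X) * dt)  # thinning of spike rates
        for i in fired:
            U = rng.normal(0.0, sigma)      # centred synaptic weight, stand-in for nu
            X = X + U / np.sqrt(N)          # all neurons receive the same U/sqrt(N)
            X[i] = 0.0                      # reset of the spiking neuron
            spike_count += 1
    return X, spike_count
```

Note that the reset is applied after the common kick, so the spiking neuron indeed restarts exactly from 0, as in the model description.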
In order to guarantee existence and uniqueness of a strong solution of (2.1), we introduce the following hypothesis. In addition, we also need the following condition to obtain a priori bounds on some moments of the process (X^{N,i})_{1≤i≤N}. Assumption 2.2. Assume that ∫_R x dν(x) = 0, ∫_R x² dν(x) < +∞, and ∫_R x² dν_0(x) < +∞.
Under Assumptions 2.1 and 2.2, existence and uniqueness of a strong solution of (2.1) follow from Theorem IV.9.1 of [14], exactly in the same way as in Proposition 6.6 of [9]. We now define precisely the limit system and discuss its properties before proving the convergence of the finite to the limit system.

The limit system
The limit system (X̄^i)_{i≥1} is an exchangeable system given by equation (2.2). In this equation, (W_t)_{t≥0} is a standard one-dimensional Brownian motion, the π^i (i ≥ 1) are independent Poisson random measures on R²_+ having intensity dt · dz that are independent of W, and W = σ{W_t, t ≥ 0}. Moreover, the initial positions X̄^i_0, i ≥ 1, are i.i.d., independent of W and of the Poisson random measures, distributed according to ν_0, which is the same probability measure as in (2.1).
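As a sketch, combining the drift, the common diffusive term suggested by the central limit theorem, and the reset mechanism, equation (2.2) can be written in the form:

```latex
\bar X^i_t \;=\; \bar X^i_0 \;-\; \alpha\int_0^t \bar X^i_s\, ds
\;+\; \sigma \int_0^t \sqrt{\mu_s(f)}\, dW_s
\;-\; \int_0^t\!\!\int_0^\infty \bar X^i_{s-}\,
\mathbf 1_{\{z \le f(\bar X^i_{s-})\}}\, \pi^i(ds,dz),
\qquad
\mu_s = \mathcal L(\bar X^1_s \mid \mathcal W).
```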
The limit equation (2.2) is not obviously well-posed and requires additional conditions. Let us briefly comment on the type of difficulties that one encounters when dealing with (2.2). Roughly speaking, the jump terms demand an L¹ framework, while the diffusive terms demand an L² framework. [12] proposes a unified approach to deal both with jump and with diffusion terms in a nonlinear framework, and we shall rely on these ideas in the sequel. The presence of the random volatility term, which involves a conditional expectation, causes however additional technical difficulties. Finally, another difficulty comes from the fact that the jumps induce non-Lipschitz terms of the form X̄^i_s f(X̄^i_s). For this reason a classical Wasserstein-1 coupling is not appropriate for the jump terms. Therefore we propose a different distance, inspired by the one already used in [10]. To do so, we need to work under the following additional assumption. Assumption 2.3. 1. We suppose that inf f > 0.
2. There exists a function a ∈ C²_b(R, R_+), strictly increasing, such that, for some constant C, two conditions hold, one for all x, y ∈ R and one for all x ∈ R. Any function f satisfying (2.3), where C and ε are some positive constants, satisfies Assumption 2.3 with a suitable choice of a, in which ψ is any smooth non-negative function satisfying ψ(y) = |y| for |y| ≥ 1. If (2.3) holds with ε = 1, then we may simply choose a(x) = arctan(x) + π/2.
Finally, fix some −∞ < a < b < ∞. Then any function f ∈ C¹_b(R, R_+) which is constant below a and above b satisfies Assumption 2.3.
Let us note that this kind of function is interesting from a neuroscience point of view when it is in addition non-decreasing. Indeed, when the potential of a neuron is below a (resp. above b), its spiking rate is minimal (resp. maximal), so that the neuron can be considered as inactive (resp. active).
Under these additional assumptions we obtain the well-posedness of each coordinate of the limit system. 2. If additionally ∫_R x² dν_0(x) < +∞, then there exists a unique strong solution (X̄_t)_{t≥0} of the nonlinear SDE (2.4), which is (F_t)_t-adapted with càdlàg trajectories, satisfying a moment estimate for every t > 0. We now give the proof of Item 1 of the above theorem. The proof of Item 2, which follows a classical Picard iteration, is postponed to Section 4.
Proof of Item 1. of Theorem 2.6. Consider two strong solutions (X̄_t)_{t≥0} and (X̃_t)_{t≥0}, (F_t)_t-adapted, defined on the same probability space, driven by the same Poisson random measure π and the same Brownian motion W, and with X̄_0 = X̃_0. EJP 26 (2021), paper 20.
Using Itô's formula, we can write Z_t := a(X̄_t) − a(X̃_t) as a sum A_t + M_t + ∆_t, where A_t denotes the bounded variation part of the evolution, M_t the martingale part and ∆_t the sum of the three jump terms. Notice that M is a square integrable martingale, since f and a are bounded by Assumption 2.3. We wish to obtain a control on |Z*_t| := sup_{s≤t} |Z_s|. We first take care of the jumps of |Z_t|. Notice first that, since f and a are bounded, these jumps are bounded. Moreover, for a constant C depending on σ², ‖f‖_∞, ‖a‖_∞, ‖a′‖_∞, ‖a″‖_∞ and α, the remaining terms can be bounded as well, and thus,
Putting all these upper bounds together, we conclude that, for a constant C not depending on t, the bounded variation and jump parts are controlled. Finally, we treat the martingale part using the Burkholder-Davis-Gundy inequality, where we have used that |a′(x) − a′(y)| ≤ C|a(x) − a(y)| and that f and a are bounded.
The above upper bounds imply an estimate, with a constant C depending neither on t nor on the initial condition, from which we deduce that, for t_1 sufficiently small, E|Z*_{t_1}| = 0. We can repeat this argument on the intervals [t_1, 2t_1], with initial condition X̄_{t_1}, and iterate it up to any finite T, because t_1 depends only on the coefficients of the system and not on the initial condition. Recalling the definition of |Z*_t| and the fact that the function a is increasing (and hence injective), this implies the assertion. In the sequel, we shall also use an important property of the limit system (2.2), which is the conditional independence of the processes X̄^i (i ≥ 1) given the Brownian motion W. Proposition 2.9. If Assumption 2.3 holds and ∫_R x² dν_0(x) < ∞, then (i) for all N ∈ N* there exists a strong solution (X̄^i)_{1≤i≤N} of (2.2), and pathwise uniqueness holds,
The proof of Proposition 2.9 is postponed to Section 4. Let us finally mention that the random limit measure μ satisfies a nonlinear stochastic PDE in weak form. More precisely: Remark 2.10. Grant Assumption 2.3. Then the measure (μ_t)_{t≥0} = (P(X̄_t ∈ · | W))_{t≥0} satisfies the following nonlinear stochastic PDE in weak form, for any ϕ ∈ C²_b(R) and any t ≥ 0,
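As a sketch, applying Itô's formula to ϕ(X̄^1_t) in the limit equation and conditioning on W suggests the weak form:

```latex
\mu_t(\varphi) \;=\; \mu_0(\varphi)
\;-\; \alpha \int_0^t \mu_s(x\,\varphi')\, ds
\;+\; \frac{\sigma^2}{2} \int_0^t \mu_s(f)\, \mu_s(\varphi'')\, ds
\;+\; \int_0^t \mu_s\big(f\,[\varphi(0)-\varphi]\big)\, ds
\;+\; \sigma \int_0^t \sqrt{\mu_s(f)}\, \mu_s(\varphi')\, dW_s .
```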

Another exchangeable system
We have already stated the existence of a unique strong solution (X̄^i)_{i≥1} of the system (2.2). In the sequel we also need the well-posedness of the following exchangeable system of SDEs, (2.7), where μ is the directing measure of the exchangeable system (Ȳ^i)_{i≥1} and μ_t its projection onto the t-th time coordinate. According to our previous reasoning, any solution of (2.2) is also a solution of (2.7). But the converse is not obvious, because it is not a priori clear whether the Brownian motion is the only common noise of the system (2.7). We prove that this is indeed the case in the next result. Theorem 2.11. Grant Assumption 2.3 and suppose that ∫_R x² dν_0(x) < ∞. Then there exists a unique strong solution (Ȳ^i)_{i≥1} of (2.7). This solution is given by the unique strong solution of (2.2).
The proof of Theorem 2.11 is given in Section 4.

Convergence to the limit system
In order to prove the convergence of the finite particle system to the limit system, we need to assume that the measure ν has a finite third moment.
We are now able to state our main result.
Here, the space D(R_+, R)^{N*} is endowed with the product topology.
Proof. Given the statement of Theorem 2.13, this is an immediate consequence of Proposition 7.20 of [1].

Remark 2.15.
In the statement of Corollary 2.14, we implicitly define X N,i := 0 if i > N.
The following figure presents two simulations of the process X^{N,1} with N = 10 and N = 1000. Recalling that X^{N,1} is interpreted as the membrane potential of a given neuron in a network of N neurons, we can see in the simulations the spiking times of this neuron, which are the times at which the potential jumps to zero.

Proof of Theorem 2.13
This section is dedicated to proving that the sequence (μ^N)_N of empirical measures converges in distribution to μ = L(X̄^1 | W), where (X̄^j)_{j≥1} is the solution of (2.2).
First, we prove that the sequence (μ^N)_N is tight in P(D(R_+, R)). The main step in proving the convergence of (μ^N)_N is then to show that each converging subsequence converges in distribution to the same limit. For this purpose, we introduce a new martingale problem, and we show that every possible limit of μ^N is a solution of this martingale problem. The result then follows from the fact that this martingale problem possesses a unique solution, which is proved using the exchangeability of the associated system of processes.
Proof. First, it is well-known that point (ii) follows from point (i) and the exchangeability of the system. To check (a), consider (S, S′) ∈ A_{δ,T} and write the corresponding estimate, which holds since f is bounded. We proceed similarly to check the second bound. The term sup_{0≤s≤T} |X^{N,1}_s| can be handled using Lemma 4.1.(ii).
Finally (b) is a straightforward consequence of Lemma 4.1.(ii) and Markov's inequality.

Martingale problem
We now introduce a new martingale problem whose solutions are the limits of any converging subsequence of μ^N = N^{−1} Σ_{j=1}^N δ_{X^{N,j}}. In this martingale problem, we are interested in couples of trajectories, in order to capture the correlations between the particles. In particular, this will allow us to show that, in the limit system (2.2), the processes X̄^i (i ≥ 1) share the same Brownian motion, but are driven by Poisson measures π^i (i ≥ 1) which are independent. The reason why we only need to study the correlation between two particles is the exchangeability of the infinite system.
Definition 3.2. We say that Q ∈ P(P(D(R_+, R))) is a solution to the martingale problem (M) if the following holds.
Let (X̄^i)_{i≥1} be the solution of the limit system (2.2) and μ = L(X̄^1 | W). Then Proposition 2.9.(ii) and Lemma 2.12.(a) of [1] imply that μ is the directing measure of (X̄^i)_{i≥1}. Thus the law of (μ, X̄^1, X̄^2) is the probability measure P given in (3.1). Moreover, by Itô's formula, (X̄^1, X̄^2) satisfies the martingale property of Definition 3.2. In other words, L(μ) is a solution of (M).
Let us now characterize any possible solution of (M); this is the first step in proving uniqueness of the solution of (M). Lemma 3.3. Let Q ∈ P(P(D(R_+, R))). Assume that Q is a solution of (M) and that f is bounded. Let (μ, Y) be the canonical variable defined above, and write Y = (Y^1, Y^2). Then, on an extension (Ω̄, (Ḡ_t)_t, P̄) of (Ω, (G_t)_t, P), there exist a standard (Ḡ_t)_t-Brownian motion W and (Ḡ_t)_t-Poisson random measures π^1, π^2 on R_+ × R_+ having Lebesgue intensity, such that W, π^1 and π^2 are independent and the desired representation of Y holds. Proof. Item (ii) of (M) together with Theorem II.2.42 of [15] imply that Y is a semimartingale with characteristics (B, C, ν). We can then use the canonical representation of Y (see Theorem II.2.34 of [15]) with the truncation function h(y) = y for every y, where M^c is a continuous local martingale and M^d a purely discontinuous local martingale. By definition of the characteristics, ⟨M^{c,i}, M^{c,j}⟩_t = C^{i,j}_t. In particular, ⟨M^{c,i}⟩_t = σ² ∫_0^t μ_s(f) ds (i = 1, 2). Consequently, applying Theorem II.7.1' of [14] to the 2-dimensional martingale (M^{c,1}, M^{c,2}), we know that there exists a Brownian motion W representing both components. We now prove the existence of the independent Poisson measures π^1, π^2. We know that the jump measure of Y has ν as its compensator. We rely on Theorem II.7.4 of [14]. Using the notation therein, we introduce Z = R_+ and m the Lebesgue measure on Z. According to Theorem II.7.4 of [14], there exists a Poisson measure π on R_+ × R_+ having intensity dt · dz such that, for all E ∈ B(R²), the representation (3.2) holds. In what follows we show how to construct two independent Poisson random measures π^1 and π^2 from π with the desired representation property, using two disjoint parts of π. For π^1 we use the restriction π|_{R_+ × [0, ‖f‖_∞]}, and for π^2 we use π|_{R_+ × [‖f‖_∞, 2‖f‖_∞]}, so that the Poisson measures π^1 and π^2 will be independent.
To construct π^1 and π^2, we also consider two independent Poisson measures π̃^1, π̃^2 (independent of everything else) on R_+ × [‖f‖_∞, ∞[ having Lebesgue intensity. We then define π^1 in the following way: for any A ∈ B(R_+ × [0, ‖f‖_∞]), π^1(A) = π(A), and for A ∈ B(R_+ × ]‖f‖_∞, ∞[), π^1(A) = π̃^1(A). We define π^2 in a similar way. By definition of Poisson random measures, π^1 and π^2 are independent Poisson measures on R²_+ having Lebesgue intensity, and together with (3.2), we have the desired representation. In the next step we prove that there exists at most one (and thus exactly one) solution of the martingale problem (M), using Lemma 3.3 and Theorem 2.11. The main idea of the proof is to apply Lemma 3.3 to recover the system of SDEs (2.7) and then to rely on Theorem 2.11.
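The splitting of π into independent measures via disjoint strips can be illustrated numerically. In this sketch, the horizon T and the level c, standing in for ‖f‖_∞, are purely illustrative values:

```python
import numpy as np

# Illustrative splitting of a Poisson random measure on R_+ x [0, 2c]
# (Lebesgue intensity) into two pieces carried by disjoint strips;
# T and c (standing in for ||f||_inf) are hypothetical values.
rng = np.random.default_rng(42)
T, c = 50.0, 2.0
n = rng.poisson(T * 2 * c)                 # total number of atoms on [0,T] x [0,2c]
times = rng.uniform(0.0, T, size=n)
marks = rng.uniform(0.0, 2 * c, size=n)
pi1 = times[marks <= c]                    # atoms used to drive pi^1 (marks kept as is)
pi2_times = times[marks > c]               # atoms used to drive pi^2 ...
pi2_marks = marks[marks > c] - c           # ... after shifting marks back into [0, c]
# Restrictions of a Poisson random measure to disjoint sets are independent,
# which is exactly why pi^1 and pi^2 built from disjoint strips are independent.
```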
Proof. Let Q ∈ P(P(D(R + , R))) be a solution of (M) and write Q = L(µ). The proof consists in showing that µ is the distribution of the directing measure of the system (2.7), which is unique by Theorem 2.11.
To begin with, we can assume that μ is the directing measure of some exchangeable system (Ȳ^i)_{i≥1}. Indeed, it is sufficient to work on the canonical space Ω = P(D(R_+, R)) × D(R_+, R)^{N*}, endowed with the probability measure P defined as follows, for all A ∈ B(P(D(R_+, R))) and B_k ∈ B(D(R_+, R)) (k ≥ 1), where at most a finite number of the sets B_k (k ≥ 1) are different from D(R_+, R). Then, denoting by (μ, Ȳ^1, Ȳ^2, ..., Ȳ^k, ...) the canonical random variables on Ω, we know that μ is the directing measure of the exchangeable system (Ȳ^i)_{i≥1}. In particular, for all i ≠ j, L(μ, (Ȳ^i, Ȳ^j)) = P.
We summarize the above step: we have just shown that there exist a Brownian motion W and independent Poisson random measures π^i (i ≥ 1) with Lebesgue intensity, independent of W, representing each Ȳ^i (i ≥ 1). As a consequence, (Ȳ^i)_{i≥1} is a solution of (2.7), and Theorem 2.11 allows us to conclude.
The last missing point needed to prove our main result, Theorem 2.13, is the following. Firstly, we can write a decomposition, and we have that
The number of big jumps of X^{N,1} in ]t − ε, t + ε[ is smaller than a random variable ξ having Poisson distribution with parameter 2ε‖f‖_∞. Hence the big jumps are controlled. The small jumps satisfy a bound in which C does not depend on N nor on ε. This last inequality is in contradiction with (3.3), since P(E) does not depend on ε.
We use the notation π̃^j(dr, dz, du) = π^j(dr, dz, du) − dr dz ν(du), and we set the following definitions, where the two terms in the second line of the expression of Γ^{N,i}_{s,t} can be artificially introduced since ∫_R u dν(u) = 0.
The associated martingales and error terms are given by M N,i s,t + W N,i s,t + ∆ N,i s,t + Γ N,i s,t + R N,i s,t .
Using exchangeability, the boundedness of the ϕ_j, ψ_j (1 ≤ j ≤ k), and the fact that f is bounded and ϕ ∈ C³_b(R²), the Taylor-Lagrange inequality then yields the required estimate. Finally, using that F is bounded and almost surely continuous at μ (see Step 1), we conclude the proof.

Let us end this section with the
Proof of Theorem 2.13. According to Proposition 3.1, the sequence (μ^N)_N is tight. Besides, thanks to Theorem 3.5, any limit Q of a converging subsequence of (L(μ^N))_N is a solution to the martingale problem (M).
By Theorem 3.4, there is a unique such distribution Q, which can be written as Q = L(μ), with μ = L(X̄^1 | W), where (X̄^j)_{j≥1} is the solution of (2.2). This implies the result.

A priori estimates
In this subsection, we prove useful a priori upper bounds on some moments of the solutions of the SDEs (2.1) and (2.4).  Proof.

Step 2: Now we prove (ii).
To conclude the proof, it is now sufficient to notice that the quantity in question is uniformly bounded in N, since f is bounded, and to use point (i) of the lemma.

Lemma 4.2.
Suppose that f is bounded and that ∫_R x² dν_0(x) < ∞. Then any solution satisfies the stated bounds. Proof. We first prove the weaker result (4.1). By Itô's formula, we obtain (4.2). Introducing, for any M > 0, the stopping time τ_M := inf{t > 0 : …}, this implies that the stopping times τ_M tend to infinity as M goes to infinity. Then (4.1) is a consequence of (4.3) and Fatou's lemma. Finally, using the Burkholder-Davis-Gundy inequality to control the martingale part in (4.2), we have, for all t ≥ 0, the desired estimate, and the result follows from (4.1).

Proof of Proposition 2.9
We now give the proof of Proposition 2.9. Each solution admits a representation of the form X̄^k|_{[0,t]} = Φ_t(X̄^k_0, π^k, W); in other words, our process is non-anticipative and depends only on the underlying noise up to time t. Then we can write, for all continuous bounded functions g, h, the corresponding factorization. With the same reasoning, we show that E[g(X̄^i) | W] = ψ_i(W) and E[h(X̄^j) | W] = ψ_j(W). The same arguments prove the mutual independence of X̄^1, . . . , X̄^N conditionally on W.
(iii) Using the representation X̄^k|_{[0,t]} = Φ_t(X̄^k_0, π^k, W), we can write, for any continuous and bounded function g : D([0, t], R) → R, the corresponding empirical average. Using the law of large numbers applied to the sequence of i.i.d. Poisson random measures, and working conditionally on W, we obtain the claimed identity, where we have used (4.5).

Proof of Theorem 2.11
We are finally able to give the Proof of Theorem 2.11.
Step 1: Let us begin by proving that any solution (X i ) i≥1 of (2.2) is solution of (2.7).
By Proposition 2.9.(ii), conditionally on W, the variables X̄^i are i.i.d. Step 2. It is now sufficient to prove that (X̄^i)_{i≥1} is the only solution of (2.7) defined w.r.t. the same Brownian motion, Poisson random measures and initial conditions. For that sake, let us consider any solution (Ȳ^i)_{i≥1} of (2.7), and prove that (X̄^i)_{i≥1} = (Ȳ^i)_{i≥1} almost surely. In the rest of the proof, μ_t denotes only the directing measure of the system (Ȳ^i_t)_{i≥1}. So we want to prove that μ_t(f) := E[f(Ȳ^1_t) | μ] = E[f(Ȳ^1_t) | W] a.s. To begin with, Lemma 2.15 of [1] implies that μ_t(f) is the almost sure limit of N^{−1} Σ_{j=1}^N f(Ȳ^j_t). We now prove that this sequence converges to E[f(Ȳ^1_t) | W]. For this purpose, we introduce an auxiliary system (X̃^{N,i})_{1≤i≤N}, driven by the same Brownian motion W and the same Poisson random measures π^i, with Ȳ^i_0 = X̃^{N,i}_0 (i ≥ 1), replacing the term μ_t(f) by the empirical measure. Notice that (X̄^i)_{i≥1}, (Ȳ^i)_{i≥1} and (X̃^{N,i})_{1≤i≤N} are all defined on the same probability space, driven by the same Brownian motion W and the same Poisson random measures π^i. It is now sufficient to prove the control (4.6), both for (Ȳ^i)_{i≥1} and for (X̄^i)_{i≥1}. Indeed, suppose we have already proven (4.6). Then (4.6) and Assumption 2.3 imply that the first and the second term of the sum above are smaller than C_t N^{−1/2} for some C_t > 0. In addition, by item (ii) of Proposition 2.9, the variables (X̄^j)_{1≤j≤N} are i.i.d. conditionally on W. Consequently, and since f is bounded, the third term is smaller than C_t N^{−1/2}. This implies that, as N → ∞, N^{−1} Σ_{j=1}^N f(Ȳ^j_t) converges in L¹(P) to E[f(X̄^1_t) | W]. On the other hand, we know that this sequence converges almost surely to μ_t(f). Thus,
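The three-term estimate underlying this argument can be sketched, by the triangle inequality, as:

```latex
\Big|\frac1N\sum_{j=1}^N f(\bar Y^j_t) - E\big[f(\bar X^1_t)\mid \mathcal W\big]\Big|
\;\le\; \frac1N\sum_{j=1}^N \big|f(\bar Y^j_t)-f(\tilde X^{N,j}_t)\big|
\;+\; \frac1N\sum_{j=1}^N \big|f(\tilde X^{N,j}_t)-f(\bar X^j_t)\big|
\;+\; \Big|\frac1N\sum_{j=1}^N f(\bar X^j_t) - E\big[f(\bar X^1_t)\mid \mathcal W\big]\Big| .
```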

ds.
Recall that the variables Ȳ^j_s (1 ≤ j ≤ N) are i.i.d. conditionally on μ. Hence we may take the conditional expectation E(·|μ) and use the fact that f is bounded from below to deduce that E⟨M^N⟩_t ≤ C_t N^{−1} and E[B^N_t] ≤ C_t N^{−1}.
Then, applying Itô's formula to a(X̃^{N,1}), we obtain the same equation as (4.7), but without the terms B^N_t and M^N_t. Introducing u(t) := sup_{0≤s≤t} E|a(Ȳ^1_s) − a(X̃^{N,1}_s)|, we can prove, with the same reasoning as in the proof of Theorem 2.6, an estimate in which C and C_t are independent of N. Finally, using the arguments of the proof of Theorem 2.6, this implies (4.6).