Stochastic Homogenization of Reflected Diffusion Processes

We investigate a functional limit theorem (homogenization) for reflected stochastic differential equations on a half-plane with stationary coefficients, where both the effective Brownian motion and the effective local time must be analyzed. We prove that the limiting process is a reflected non-standard Brownian motion. Beyond the result itself, this problem is known as a prototype of a non-translation-invariant problem, for which the usual method of the "environment as seen from the particle" is inefficient.


Statement of the problem
This paper is concerned with homogenization of Reflected Stochastic Differential Equations (RSDE for short) evolving in a random medium, defined as follows (see e.g. [12]).

Definition 1.1 (Random medium). Let (Ω, G, µ) be a probability space and {τ_x; x ∈ R^d} a group of measure-preserving transformations acting ergodically on Ω, that is:
1) ∀A ∈ G, ∀x ∈ R^d, µ(τ_x A) = µ(A);
2) if τ_x A = A for any x ∈ R^d, then µ(A) = 0 or 1;
3) for any measurable function g on (Ω, G, µ), the function (x, ω) ↦ g(τ_x ω) is measurable on (R^d × Ω, B(R^d) ⊗ G).

The expectation with respect to the random medium is denoted by M. In what follows we use bold type to denote a random function g from Ω × R^p into R^n (n ≥ 1 and p ≥ 0).
A random medium is a mathematical tool to define stationary random functions. Indeed, given a function f : Ω → R, we can consider for each fixed ω the function x ∈ R^d ↦ f(τ_x ω). This is a random function (the parameter ω stands for the randomness) and, because of 1) in Definition 1.1, the law of that function is invariant under R^d-translations, that is, the functions f(τ_• ω) and f(τ_{y+•} ω) have the same law for any y ∈ R^d. For that reason, the random function is said to be stationary.
We suppose that we are given a random d × d-matrix-valued function σ : Ω → R^{d×d}, two random vector-valued functions b, γ : Ω → R^d and a d-dimensional Brownian motion B defined on a complete probability space (Ω′, F, P) (the Brownian motion and the random medium are independent). We shall describe the limit in law, as ε goes to 0, of the following RSDE with stationary random coefficients

(1)  dX^ε_t = ε^{-1} b(τ_{X^ε_t/ε} ω) dt + σ(τ_{X^ε_t/ε} ω) dB_t + γ(τ_{X^ε_t/ε} ω) dK^ε_t,

where X^ε, K^ε are (F_t)_t-adapted processes (F_t is the σ-field generated by B up to time t) with the constraint X^ε_t ∈ D̄, where D ⊂ R^d is the half-plane {(x_1, ..., x_d) ∈ R^d; x_1 > 0}, and K^ε is the so-called local time of the process X^ε, namely a continuous nondecreasing process which only increases on the set {t; X^ε_t ∈ ∂D}. The reader is referred to [14] for strong existence and uniqueness results for (1) (see e.g. [23] for weak existence), in particular under the assumptions on the coefficients σ, b and γ listed below. These stochastic processes are involved in the probabilistic representation of second-order partial differential equations in the half-space with Neumann boundary conditions (see [18] for an insight into the topic). In particular, we are interested in homogenization problems for which it is necessary to identify both the homogenized equation and the homogenized boundary conditions.
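To make the dynamics in (1) concrete, the following is a minimal numerical sketch (not taken from the paper) of an Euler scheme for a reflected diffusion on the half-space {x_1 ≥ 0}. As illustrative assumptions, the coefficients b and σ are frozen to constants and the reflection is taken normal (γ = e_1); the one-dimensional Skorokhod map in the first coordinate produces the local-time increments.

```python
import numpy as np

def simulate_reflected_sde(x0, b, sigma, T=1.0, n_steps=1000, seed=0):
    """Euler scheme for dX = b dt + sigma dB + e1 dK on {x_1 >= 0}.

    Reflection is handled by the one-dimensional Skorokhod map in the
    first coordinate: K increases by exactly the amount needed to keep
    X[0] >= 0.  Returns the path X (n_steps+1, d) and the local time K.
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    dt = T / n_steps
    X = np.empty((n_steps + 1, d))
    K = np.zeros(n_steps + 1)
    X[0] = x0
    e1 = np.eye(d)[0]
    for n in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        free = X[n] + b * dt + sigma @ dB   # unconstrained Euler step
        push = max(0.0, -free[0])           # minimal push to stay in D
        X[n + 1] = free + push * e1
        K[n + 1] = K[n] + push              # local time accumulates pushes
    return X, K

X, K = simulate_reflected_sde(np.array([0.5, 0.0]),
                              b=np.array([0.0, 0.0]),
                              sigma=np.eye(2))
```

By construction the path never leaves the half-space and K is nondecreasing, increasing only at steps where the free move would have crossed ∂D.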
Without the reflection term γ(τ_{X^ε_t/ε}ω) dK^ε_t, the issue of determining the limit in (1) is the subject of an extensive literature in the case when the coefficients b, σ are periodic, quasi-periodic and, more recently, evolving in a stationary ergodic random medium. Quoting all references is beyond the scope of this paper. Concerning homogenization of RSDEs, there are only a few works dealing with periodic coefficients (see [1,2,3,22]). As pointed out in [2], homogenizing (1) in a random medium is a well-known problem that has remained open so far. Several difficulties in this framework make the classical machinery of diffusions in random media (i.e. without reflection) fall short of determining the limit in (1). In particular, the reflection term breaks the stationarity properties of the process X^ε, so that the method of the environment as seen from the particle (see [16] for an insight into the topic) is inefficient. Moreover, the lack of compactness of a random medium prevents the use of compactness methods. The main resulting difficulties are the lack of an invariant probability measure (IPM for short) associated with the process X^ε and the study of the boundary ergodic problems. The aim of this paper is precisely to investigate the random case and prove the convergence of the process X^ε towards a reflected Brownian motion. The convergence is established in probability with respect to the random medium and the starting point x.
We should also point out that the problem of determining the limit in (1) could be expressed in terms of reflected random walks in a random environment, and remains quite open as well. In that case, the problem could be stated as follows: suppose we are given, for each z ∈ Z^d satisfying |z| = 1, a random variable c(•, z) : Ω → ]0; +∞[. Define the continuous-time process X with values in the half-lattice L = N × Z^{d-1} as the random walk that, when arriving at a site x ∈ L, waits a random exponential time of parameter 1 and then jumps to a neighboring site y ∈ L with jump rate c(τ_x ω, y − x). Does the rescaled random walk εX_{t/ε²} converge in law towards a reflected Brownian motion? Though we do not treat that case explicitly, our proofs can be adapted to that framework.
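The reflected walk just described can be sketched as follows. This is a hypothetical toy implementation: the stationary ergodic rates c(τ_x ω, z) are mimicked by i.i.d. uniform variables sampled lazily per (site, direction) pair, and jumps that would exit the half-lattice are simply suppressed, which is one of several possible ways to encode reflection at the boundary; none of these choices come from the paper.

```python
import numpy as np

def reflected_rwre_path(n_jumps=500, d=2, seed=1):
    """Embedded jump chain of a reflected walk on L = N x Z^{d-1}.

    The environment is a stand-in: rates are i.i.d. uniform(1, 2)
    variables attached lazily to each (site, direction) pair, instead
    of a genuine stationary ergodic field c(tau_x omega, z).  Jumps
    whose first coordinate would become negative are suppressed.
    """
    rng = np.random.default_rng(seed)
    rates = {}  # lazily sampled environment: (site, direction) -> rate
    dirs = [e for i in range(d) for e in
            (tuple(int(i == j) for j in range(d)),
             tuple(-int(i == j) for j in range(d)))]
    x = tuple([0] * d)
    path = [x]
    for _ in range(n_jumps):
        feas = [z for z in dirs if x[0] + z[0] >= 0]  # stay in N x Z^{d-1}
        w = np.array([rates.setdefault((x, z), rng.uniform(1.0, 2.0))
                      for z in feas])
        z = feas[rng.choice(len(feas), p=w / w.sum())]
        x = tuple(a + b for a, b in zip(x, z))
        path.append(x)
    return np.array(path)

path = reflected_rwre_path()
```

The diffusive rescaling of the question above would then amount to looking at ε·path after of order ε^{-2} jumps.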

Structure of the coefficients
Notations: Throughout the paper, we use the convention of summation over repeated indices (c_i d_i = Σ_{i=1}^d c_i d_i) and we use the superscript * to denote the transpose A* of a given matrix A. If a random function ϕ : Ω → R possesses smooth trajectories, i.e. for any ω ∈ Ω the mapping x ∈ R^d ↦ ϕ(τ_x ω) is smooth with bounded derivatives, we can consider its partial derivatives at 0, denoted by D_i ϕ, that is D_i ϕ(ω) = ∂_{x_i} ϕ(τ_x ω)|_{x=0}. We define a = σσ*. For the sake of simplicity, we assume that ∀ω ∈ Ω the mapping x ∈ R^d ↦ σ(τ_x ω) is bounded and smooth with bounded derivatives of all orders. We further impose that these bounds do not depend on ω.
Now we motivate the structure we impose on the coefficients b and γ. A specific point in the literature of diffusions in random media is that the lack of compactness of a random medium makes it impossible to find an IPM for the involved diffusion process. There is a simple argument to understand why: since the coefficients of the SDE driving the R^d-valued diffusion process are stationary, any invariant measure supported by R^d must be stationary too. So, unless it is trivial, it cannot have finite mass. That difficulty has been overcome by introducing the "environment as seen from the particle" (ESFP for short). It is an Ω-valued Markov process describing the configurations of the environment visited by the diffusion process: briefly, if we denote by X the diffusion process, then the ESFP should match τ_X ω. There is a well-known formal ansatz that says: if we can find a bounded function f : Ω → [0, +∞[ such that, for each ω ∈ Ω, the measure f(τ_x ω)dx is invariant for the diffusion process, then the probability measure f(ω)dµ (up to a renormalization constant) is invariant for the ESFP. So we can trade an invariant measure with infinite mass associated with the diffusion process for an IPM associated with the ESFP.
The remaining problem is to find an invariant measure (of the type f(τ_x ω)dx) for the diffusion process. Generally speaking, there is no way to find it except when it is explicitly known. In the stationary case (without reflection), the most general situation when it is explicitly known is when the generator of the rescaled diffusion process can be rewritten in divergence form as (2), where V : Ω → R is a bounded scalar function and H : Ω → R^{d×d} is a function taking values in the set of antisymmetric matrices. The invariant measure is then given by e^{2V(τ_{x/ε}ω)} dx and the IPM for the ESFP matches e^{2V(ω)} dµ. However, it is common to assume V = H = 0 to simplify the problem, since the general case is in essence very close to that situation. Why is the existence of an IPM so important? Because it entirely determines the asymptotic behaviour of the diffusion process via ergodic theorems. The ESFP is therefore a central point in the literature of diffusions in random media. The case of RSDE in random media does not derogate from this rule, and we are bound to find a framework where the invariant measure is (at least formally) explicitly known. So we assume that the entries of the coefficients b and γ, defined on Ω, are given by (3). With this definition, the generator of the Markov process X^ε can be rewritten in divergence form as (4) (for a sufficiently smooth function f on D̄). If the environment ω is fixed, it is a simple exercise to check that the Lebesgue measure is formally invariant for the process X^ε. If the ESFP exists, the aforementioned ansatz tells us that µ should be an IPM for the ESFP. Unfortunately, we shall see that there is no way of properly defining the ESFP. The previous formal discussion is however helpful to provide a good intuition of the situation and to figure out what the correct framework must be. Furthermore, the framework (3) also has physical motivations. As defined above, the reflection term γ coincides with the so-called conormal field, and the associated PDE problem is said to be of Neumann type. From the physical point of view, the conormal field is the "canonical" assumption that makes the mass conservation law valid, since the relation a_{j1}(τ_{x/ε}ω)∂_{x_j} f = 0 on ∂D means that the flux through the boundary must vanish. Our framework for RSDE is therefore to be seen as a natural generalization of the classical stationary framework.
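The referenced displays (2), (3) and (4) do not appear above; given the surrounding discussion (divergence form with a potential V and an antisymmetric H, conormal reflection, drift of order ε^{-1}), they are presumably of the following standard form. This is a hedged reconstruction, not a verbatim quote:

```latex
% (2): divergence-form generator in the stationary (non-reflected) case,
% with V a bounded scalar function and H antisymmetric:
L f \;=\; \tfrac{1}{2}\, e^{-2V}\, D_i\Big( e^{2V}\,\big(a_{ij} + H_{ij}\big)\, D_j f \Big).

% (3): structure imposed on the coefficients (case V = H = 0):
b_j \;=\; \tfrac{1}{2}\, D_i a_{ij}, \qquad \gamma_j \;=\; a_{j1},
\qquad j = 1, \dots, d.

% (4): the generator of X^\varepsilon then rewrites, for smooth f on \bar D:
L^\varepsilon f(x) \;=\; \tfrac{1}{2}\,
\partial_{x_i}\Big( a_{ij}\big(\tau_{x/\varepsilon}\,\omega\big)\,
\partial_{x_j} f \Big)(x).
```

Expanding (4) recovers the drift ε^{-1} b_j = (2ε)^{-1} D_i a_{ij} appearing in (1), and γ_j = a_{j1} is exactly the conormal field discussed above.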
Remark. It is straightforward to adapt our proofs to treat the more general situation when the generator of the RSDE inside D coincides with (2). In that case, the reflection term is given by γ_j = a_{j1} + H_{j1}.
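The assumption (5) invoked below is presumably the standard uniform ellipticity condition on a = σσ*, which would also account for the constant Λ appearing later in the bounds Ā ≥ ΛI and Γ̄_1 ≥ Λ; a hedged reconstruction:

```latex
% (5): uniform ellipticity (hedged reconstruction)
\exists\, \Lambda > 0, \quad \forall \omega \in \Omega,\ \forall \xi \in \mathbb{R}^d,
\qquad \Lambda\, |\xi|^2 \;\le\; \xi^*\, a(\omega)\, \xi .
```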
That assumption means that the process X^ε diffuses enough, at each point of D̄, in all directions. It is thus a convenient assumption to ensure the ergodic properties of the model. The reader is referred, for instance, to [5,20,21] for various situations going beyond that assumption. We also point out that, in the context of RSDE, the problem of homogenizing (1) without assuming (5) becomes quite challenging, especially when dealing with the boundary phenomena.

Main Result
In what follows, we denote by P^ε_x the law of the process X^ε starting from x ∈ D̄ (keep in mind that this probability measure also depends on ω, though this does not appear in the notation). Let us consider a nonnegative function χ : D̄ → R_+ such that ∫_D χ(x) dx = 1. Such a function defines a probability measure on D̄ denoted by χ(dx) = χ(x)dx. We fix T > 0. Let C denote the space of continuous D̄ × R_+-valued functions on [0, T] equipped with the sup-norm topology. We are now in position to state the main result of the paper, Theorem 1.2: the couple (X^ε, K^ε) converges in law towards the solution (X̄, K̄) of the RSDE (6), with the constraint X̄_t ∈ D̄, where K̄ is the local time associated with X̄. In other words, the convergence holds in probability with respect to the random medium and the starting point: for each bounded continuous function F on C and δ > 0, the µ ⊗ χ-measure of the set where |E^ε_x[F(X^ε, K^ε)] − Ē[F(X̄, K̄)]| ≥ δ tends to 0 as ε → 0. The so-called homogenized (or effective) coefficients Ā and Γ̄ are constant. Moreover, Ā is invertible and obeys a variational formula (see subsection 2.5 for the meaning of the various terms), and Γ̄ is the conormal field associated with Ā, that is, Γ̄_i = Ā_{1i} for i = 1, ..., d.
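The display (6), the limiting RSDE, is not reproduced above; consistent with the constant effective coefficients Ā and Γ̄ just described, it presumably takes the following form (hedged reconstruction):

```latex
% (6): the homogenized RSDE (hedged reconstruction)
\bar X_t \;=\; x \;+\; \bar A^{1/2}\, \bar B_t \;+\; \bar\Gamma\, \bar K_t,
\qquad \bar X_t \in \bar D,
```

where B̄ is a standard d-dimensional Brownian motion and K̄ is the local time of X̄ at ∂D, i.e. a continuous nondecreasing process increasing only on {t; X̄_t ∈ ∂D}.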
Remark and open problem. The reader may wonder whether it would be simpler to consider the case γ_i = δ_{1i}, where δ stands for the Kronecker symbol. In that case, γ coincides with the normal to ∂D. Actually, this situation is much more complicated, since one can easily be convinced that there is no obvious invariant measure associated with X^ε. On the other hand, one may wonder if, given the form of the generator (4) inside D, one can find a larger class of reflection coefficients γ for which the homogenization procedure can be carried through. Actually, a computation based on the Green formula shows that it is possible to consider a bounded antisymmetric matrix-valued function A : Ω → R^{d×d} such that A_{ij} = 0 whenever i = 1 or j = 1, and to set γ_j = a_{j1} + D_i A_{ji}. In that case, the Lebesgue measure is invariant for X^ε. Furthermore, the associated Dirichlet form (see subsection 2.3) satisfies a strong sector condition, in such a way that the construction of the correctors is possible. However, it is not clear whether the localization technique based on the Girsanov transform (see Section 2.1 below) works. So we leave that situation as an open problem.
The non-stationarity of the problem makes the proofs technical. So we have divided the remaining part of the paper into two parts. In order to give a global understanding of the proof of Theorem 1.2, we set out the main steps in Section 2 and gather most of the technical proofs in the Appendix.

Guideline of the proof
As explained in the introduction, what makes the problem of homogenizing RSDE in a random medium known as a difficult one is the lack of stationarity of the model. The first resulting difficulty is that one cannot properly define the ESFP (or a somewhat similar process), because one cannot prove that it is a Markov process. Such a process is important since its IPM encodes what the asymptotic behaviour of the process should be. The reason why the ESFP is not a Markov process is the following. Roughly speaking, it stands for an observer sitting on the particle X^ε_t and looking at the environment τ_{X^ε_t}ω around the particle. For this process to be Markovian, the observer must be able to determine, at a given time t, the future evolution of the particle with the sole knowledge of the environment τ_{X^ε_t}ω. In the case of RSDE, whenever the observer sitting on the particle wants to determine the future evolution of the particle, the knowledge of the environment τ_{X^ε_t}ω is not sufficient: the observer also needs to know whether the particle is located on the boundary ∂D, to determine whether the pushing of the local time K^ε_t will affect the trajectory of the particle. So we are left with the problem of dealing with a process X^ε possessing no IPM.

Localization
To overcome the above difficulty, we shall use a localization technique. Since the process X^ε is not convenient to work with, the main idea is to compare X^ε with a better process that possesses, at least locally, a similar asymptotic behaviour. To be better, it must have an explicitly known IPM. There is a simple way to find such a process: we plug a smooth and deterministic potential V : D̄ → R into (4) and define a new operator L^ε_V acting on C²(D̄), with the same boundary condition γ_i(τ_{x/ε}ω)∂_{x_i} = 0 on ∂D. If we impose the condition

(8)  ∫_D e^{-2V(x)} dx = 1

and fix the environment ω, we shall prove that the RSDE with generator L^ε_V inside D and boundary condition γ_i(τ_{x/ε}ω)∂_{x_i} = 0 on ∂D admits e^{-2V(x)} dx as IPM.
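The operator (7) is presumably the divergence-form perturbation of (4) whose symmetrizing measure is e^{−2V(x)} dx (the convention matching χ(x) = e^{−2V(x)} in Section 2.6); a hedged reconstruction:

```latex
% (7): generator with localizing potential (hedged reconstruction)
L^\varepsilon_V f(x)
\;=\; \tfrac{1}{2}\, e^{2V(x)}\,
\partial_{x_i}\Big( e^{-2V(x)}\, a_{ij}\big(\tau_{x/\varepsilon}\,\omega\big)\,
\partial_{x_j} f \Big)(x)
\;=\; L^\varepsilon f(x)
\;-\; a_{ij}\big(\tau_{x/\varepsilon}\,\omega\big)\,
\partial_{x_i} V(x)\, \partial_{x_j} f(x).
```

The extra drift −a∂_xV pushes the process back towards the region where V is small, which is what produces the finite invariant measure e^{−2V(x)} dx.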
Then we want to find a connection between the process X^ε and the Markov process with generator L^ε_V inside D and boundary condition γ_i(τ_{x/ε}ω)∂_{x_i} = 0 on ∂D. To that purpose, we use the Girsanov transform. More precisely, we fix T > 0 and impose

(9)  V is smooth and ∂_x V is bounded.
Then we define a probability measure P^{ε*}_x on the filtered space (Ω′, F, (F_t)_{0≤t≤T}) through an exponential Girsanov density, under which B* is a Brownian motion and the process X^ε solves the RSDE (10) starting from X^ε_0 = x, where K^ε is the local time of X^ε. It is straightforward to check that, if B* is a Brownian motion, the generator associated with the RSDE (10) coincides with (7) for sufficiently smooth functions. To sum up, with the help of the Girsanov transform, we can compare the law of the process X^ε with that of the RSDE (10). We shall see that most of the estimates necessary to homogenize the process X^ε are valid under P^{ε*}_x. We want to make sure that they remain valid under P^ε_x. To that purpose, the probability measure P^ε_x must be dominated by P^{ε*}_x uniformly with respect to ε. From (9), it is readily seen that the density dP^ε_x/dP^{ε*}_x has a second moment bounded by a constant C that only depends on T, |a|_∞ and sup_D |∂_x V|. Then the Cauchy–Schwarz inequality yields

(11)  ∀ε > 0, for every F_T-measurable set A, P^ε_x(A) ≤ C (P^{ε*}_x(A))^{1/2}.

In conclusion, we summarize our strategy: first we prove that the process X^ε possesses an IPM under the modified law P^{ε*}, then we establish under P^{ε*} all the estimates necessary to homogenize X^ε, and finally we deduce that the estimates remain valid under P^ε thanks to (11). Once that is done, we shall be in a position to homogenize (1).
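The Girsanov step can be sketched as follows (a hedged reconstruction: the exact density display is not reproduced above). To turn the generator (4) into (7), one removes the drift a∂_xV, which suggests:

```latex
% Hedged sketch of the localization step:
\frac{dP^{\varepsilon*}_x}{dP}\Big|_{\mathcal F_T}
\;=\; \exp\Big( -\!\int_0^T \big(\sigma^*\partial_x V\big)_r \cdot dB_r
\;-\; \tfrac12 \int_0^T \big|\big(\sigma^*\partial_x V\big)_r\big|^2\, dr \Big),
\quad \big(\sigma^*\partial_x V\big)_r := \sigma^*\big(\tau_{X^\varepsilon_r/\varepsilon}\,\omega\big)\, \partial_x V(X^\varepsilon_r),
% under which B^*_t = B_t + \int_0^t (\sigma^*\partial_x V)_r\, dr
% is a Brownian motion and X^\varepsilon solves
% (10):
dX^\varepsilon_t \;=\; \varepsilon^{-1} b\big(\tau_{X^\varepsilon_t/\varepsilon}\,\omega\big)\, dt
\;-\; a\big(\tau_{X^\varepsilon_t/\varepsilon}\,\omega\big)\, \partial_x V(X^\varepsilon_t)\, dt
\;+\; \sigma\big(\tau_{X^\varepsilon_t/\varepsilon}\,\omega\big)\, dB^*_t
\;+\; \gamma\big(\tau_{X^\varepsilon_t/\varepsilon}\,\omega\big)\, dK^\varepsilon_t .
```

Since ∂_xV is bounded by (9), the exponential density has moments of all orders on [0, T], which is what yields the domination constant C in (11).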
To fix the ideas, and to see that the class of functions V satisfying (8) and (9) is not empty, we can choose V as in (12), for some renormalization constant c such that ∫_D e^{-2V(x)} dx = 1 and some positive constant A.

Invariant probability measure
As explained above, the main advantage of considering the process X^ε under the modified law P^{ε*}_x is that we can find an IPM. More precisely:

Lemma 2.1. The process X^ε satisfies: 1) relation (13) for each function f ∈ L¹(D̄ × Ω; P*_D) and t ≥ 0; 2) relation (14) for each function f ∈ L¹(∂D × Ω; P*_{∂D}) and t ≥ 0.

The first relation (13) results from the structure of L^ε_V (see (7)), which has been defined so as to make e^{-2V(x)} dx invariant for the process X^ε. Once (13) is established, (14) is derived from the fact that K^ε is the density of occupation time of the process X^ε at the boundary ∂D.
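The invariance relation (13) is presumably of the following form, with P*_D := e^{−2V(x)}dx ⊗ dµ on D̄ × Ω (a hedged reconstruction); (14) should then be the analogous identity for integrals against the local time dK^ε, with P*_{∂D} the corresponding measure on ∂D × Ω:

```latex
% (13), hedged: invariance of P^*_D = e^{-2V(x)}dx \otimes d\mu
\mathbb{M}\!\int_{D} E^{\varepsilon*}_x\Big[ f\big(X^\varepsilon_t,\ \tau_{X^\varepsilon_t/\varepsilon}\,\omega\big)\Big]\, e^{-2V(x)}\, dx
\;=\; \mathbb{M}\!\int_{D} f(x,\omega)\, e^{-2V(x)}\, dx .
```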

Ergodic problems
The next step is to determine the asymptotic behaviour, as ε → 0, of the quantities in (15). The behaviour of each of these quantities is related to the evolution of the process X^ε respectively inside the domain D and near the boundary ∂D. We shall see that both limits can be identified by solving ergodic problems associated with appropriate resolvent families. What concerns the first functional has already been investigated in the literature. The main novelty of the following section is the boundary ergodic problems associated with the second functional.

Ergodic problems associated to the diffusion process inside D
First we have to figure out what happens when the process X^ε evolves inside the domain D. In that case, the pushing of the local time in (1) vanishes. The process X^ε is thus driven by the same effects as in the stationary case (without reflection term). The ergodic properties of the process inside D are therefore the same as in the classical situation. So we just sum up the main results and give references for further details. Notations: For p ∈ [1; ∞], L^p(Ω) denotes the standard space of p-th power integrable functions (essentially bounded functions if p = ∞) on (Ω, G, µ), and |•|_p the corresponding norm. If p = 2, the associated inner product is denoted by (•, •)_2. C^∞_c(D) (resp. C^∞_c(D̄)) denotes the space of smooth functions on D with compact support in D (resp. D̄).
Standard background: The operators on L²(Ω) defined by T_x g(ω) = g(τ_x ω) form a strongly continuous group of unitary maps on L²(Ω). Let (e_1, ..., e_d) stand for the canonical basis of R^d. The group (T_x)_x possesses d generators defined by D_i g = lim_{h→0} h^{-1}(T_{he_i} g − g), for i = 1, ..., d, whenever the limit exists in the L²(Ω)-sense. The operators (D_i)_i are closed and densely defined. Given ϕ ∈ ∩_{i=1}^d Dom(D_i), Dϕ stands for the d-dimensional vector whose entries are D_i ϕ for i = 1, ..., d.
We point out that we distinguish D_i from the usual differential operator ∂_{x_i} acting on differentiable functions f : R^d → R. However, it is straightforward to check that, whenever a function ϕ ∈ Dom(D) possesses differentiable trajectories (i.e. µ-a.s. the mapping x ↦ ϕ(τ_x ω) is differentiable), both notions of derivative coincide along the trajectories. We denote by C the dense subspace of L²(Ω) defined by (16); basically, C stands for the space of smooth functions on the random medium, and we have C ⊂ Dom(D_i) for every i. We associate with the operator L^ε (Eq. (4)) an unbounded operator L acting on C ⊂ L²(Ω). Following [7, Ch. 3, Sect. 3] (see also [19, Sect. 4]), we can consider its Friedrichs extension, still denoted by L, which is a self-adjoint operator on L²(Ω). The domain H of the corresponding Dirichlet form can be described as the closure of C with respect to the associated form norm. Moreover, the resolvent operator U_λ satisfies the maximum principle. The ergodic properties of the operator L are summarized in the following proposition:

Boundary ergodic problems
Second, we have to figure out what happens when the process hits the boundary ∂D. If we want to adapt the arguments of [22], it seems natural to look at an unbounded operator in random medium, H_γ, whose construction is formally the following: given ω ∈ Ω and a smooth function ϕ ∈ C, let us denote by ũ_ω : D̄ → R the solution of the problem (20), where the operator L^ω is defined by (21).

Remark. Choose ε = 1 in (1) and denote by (X¹, K¹) the solution of (1). The operator H_γ is actually the generator of an Ω-valued Markov process Z, defined through the left inverse K^{-1} of the local time K¹. The process Z describes the environment as seen from the particle whenever the process X¹ hits the boundary ∂D.
The main difficulty lies in constructing a unique solution of Problem (20) with suitable growth and integrability properties, because of the lack of compactness of D̄. This point, together with the lack of IPM, is known as one of the major difficulties in homogenizing the Neumann problem in random media. We detail below the construction of H_γ through its resolvent family. In spite of its technical aspect, this construction seems to be the right one, because it exhibits a lack of stationarity along the e_1-direction, which is intrinsic to the problem due to the pushing of the local time K^ε, and preserves the stationarity of the problem along all other directions.
First we give a few notations before tackling the construction of H_γ. In what follows, the notation (x_1, y) stands for a d-dimensional vector, where the first component x_1 belongs to R (possibly R_+ = [0; +∞)) and the second component y belongs to R^{d-1}. To define an unbounded operator, we first need to determine the space it acts on. As explained above, that space must exhibit a lack of stationarity along the e_1-direction and stationarity along all other directions. So the natural space to look at is the product space R_+ × Ω, denoted by Ω_+, equipped with the measure dµ_+ := dx_1 ⊗ dµ, where dx_1 is the Lebesgue measure on R_+. We can then consider the standard spaces L^p(Ω_+) for p ∈ [1; +∞].
Our strategy is to define the Dirichlet form associated with H_γ. To that purpose, we need to define a dense space of test functions on Ω_+ and a symmetric bilinear form acting on the test functions. The space of test functions, denoted by C(Ω_+), is defined in a natural way; among the test functions, we distinguish those that vanish on the boundary. Before tackling the construction of the symmetric bilinear form, we also need to introduce some elementary tools of differential calculus on Ω_+. For any g ∈ C(Ω_+), we introduce a sort of gradient ∂g of g: if g ∈ C(Ω_+) takes the form ρ(x_1)ϕ(ω) for some ρ ∈ C^∞_c([0; +∞)) and ϕ ∈ C, the entries of ∂g combine the usual derivative in the x_1-variable with the operators D_i along the remaining directions. We define on C(Ω_+) the norm N in (23), which is a sort of Sobolev norm on Ω_+, and W¹ as the closure of C(Ω_+) with respect to the norm N (W¹ is thus an analogue of the Sobolev spaces on Ω_+). Obviously, the trace mapping P is continuous (with norm equal to 1) and stands, in a way, for the trace operator on Ω_+. Equip the topological dual space (W¹)′ of W¹ with the dual norm N′; the adjoint P* of P then maps L²(Ω) into (W¹)′. To sum up, we have constructed a space of test functions C(Ω_+), which is dense in W¹ for the norm N, and a trace operator on W¹. We further stress that a function g ∈ W¹ satisfies ∂g = 0 if and only if we have g(x_1, ω) = f(ω) on Ω_+ for some function f ∈ L²(Ω) invariant under the translations {τ_x; x ∈ {0} × R^{d-1}}. For that reason, we introduce the σ-field G* ⊂ G generated by the subsets of Ω that are invariant under the translations {τ_x; x ∈ {0} × R^{d-1}}, and the conditional expectation M_1 with respect to G*.
We now focus on the construction of the symmetric bilinear form and the resolvent family associated with H_γ. With each random function ϕ defined on Ω, we associate a corresponding function on Ω_+; in particular, we can associate with the random matrix a (defined in Section 1) the corresponding matrix-valued function a_+ defined on Ω_+. Then, for any λ > 0, we define on W¹ × W¹ a symmetric bilinear form. From (5), it is readily seen that it is continuous and coercive on W¹ × W¹. From the Lax–Milgram theorem, it thus defines a continuous resolvent family G_λ : (W¹)′ → W¹ satisfying (25). For each λ > 0, we then define the operator R_λ acting on L²(Ω) by composing G_λ with the trace operator and its adjoint.
Given ϕ ∈ L²(Ω), we can plug F = P*ϕ into (25) and, by using (24), obtain a characterization of R_λϕ. The following proposition summarizes the main properties of the operators (R_λ)_{λ>0}, and in particular their ergodic properties.

Proposition 2.4. The family (R_λ)_λ is a strongly continuous resolvent family, and: 1) the operator R_λ is self-adjoint; 2) given ϕ ∈ L²(Ω) and λ > 0, we have:

The remaining part of this section is concerned with the regularity properties of G_λ P*ϕ.

Proposition 2.5. Given ϕ ∈ C, the trajectories of G_λ P*ϕ are smooth. More precisely, we can find N ⊂ Ω satisfying µ(N) = 0 and such that, ∀ω ∈ Ω \ N, the corresponding function belongs to C^∞(D̄). Furthermore, it is a classical solution to the problem:

In particular, the above proposition proves that (R_λ)_λ is the resolvent family associated with the operator H_γ. This family also satisfies the maximum principle:

Proposition 2.6 (Maximum principle). Given ϕ ∈ C and λ > 0, we have:

Ergodic theorems
As already explained, the ergodic problems solved in the previous section lead to ergodic theorems for the process X^ε. The strategy of the proof is the following. First we work under P̄^{ε*} to use the existence of the IPM (see Section 2.2). By adapting a classical scheme, we derive from Propositions 2.3 and 2.4 ergodic theorems under P̄^{ε*}, both for the process X^ε and for the local time K^ε:

Theorem 2.7. For each function f ∈ L¹(Ω) and T > 0, we have:

Finally, we deduce that the above theorems remain valid under P̄^ε thanks to (11).
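Given the IPM of Section 2.2, the statement of Theorem 2.7 is presumably of the following type (a hedged guess; the exact normalization and mode of convergence may differ):

```latex
% Theorem 2.7 (hedged reconstruction): for f \in L^1(\Omega),
\sup_{0 \le t \le T}\Big| \int_0^t f\big(\tau_{X^\varepsilon_r/\varepsilon}\,\omega\big)\, dr
\;-\; t\, \mathbb{M}[f] \Big| \;\xrightarrow[\varepsilon \to 0]{}\; 0
\quad \text{in } \bar P^{\varepsilon*}\text{-probability,}
```

and Theorem 2.9 should provide the analogous statement for integrals against dK^ε, driven by the boundary ergodic theory of Proposition 2.4.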

Construction of the correctors
Even though we have established ergodic theorems, this is not enough to find the limit of equation (1), because of the highly oscillating term ε^{-1} b(τ_{X^ε_t/ε}ω) dt. To get rid of this term, the ideal situation would be to find a stationary solution u_i : Ω → R to the equation (34). Then, by applying the Itô formula to the function u_i, it is readily seen that the contribution of the term ε^{-1} b_i(τ_{X^ε_t/ε}ω) dt formally reduces to a stochastic integral and a functional of the local time, the limits of which can be handled with the ergodic theorems (Theorem 2.9 in particular).
The problem is that the lack of compactness of a random medium prevents us from finding a stationary solution to (34). As already suggested in the literature, a good approach is to add some coercivity to the problem (34) and to define, for i = 1, ..., d and λ > 0, the solution u^i_λ of the resolvent equation (35). If we let λ go to 0 in (35), the solution u^i_λ should provide a good approximation of the solution of (34). Actually, it is hopeless to try to prove the convergence of the family (u^i_λ)_λ in some L^p(Ω)-space because, in great generality, there is no stationary L^p(Ω)-solution to (34). However, we can prove the convergence towards 0 of the term λu^i_λ and the convergence of the gradients Du^i_λ:

Proposition 2.10. There exist functions ζ_i ∈ (L²(Ω))^d, i = 1, ..., d, such that λ|u^i_λ|²_2 → 0 and Du^i_λ → ζ_i in (L²(Ω))^d as λ → 0.

As we shall see in Section 2.6, the above convergence is enough to carry out the homogenization procedure. The functions ζ_i (i ≤ d) are involved in the expression of the coefficients of the homogenized equation (6). For that reason, we give some further qualitative description of these coefficients, where I denotes the d-dimensional identity matrix. Then Ā obeys a variational formula. Moreover, we have Ā ≥ ΛI (in the sense of symmetric nonnegative matrices) and the first component Γ̄_1 of Γ̄ satisfies Γ̄_1 ≥ Λ. Finally, Γ̄ can be characterized as an orthogonal projection. In particular, we have established that the limiting equation (6) is not degenerate, namely that the diffusion coefficient Ā is invertible and that the pushing of the reflection term Γ̄ along the normal to ∂D does not vanish.
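In this symmetric divergence-form setting, the resolvent equation (35) and the effective coefficients admit a standard form; the following is a hedged reconstruction consistent with Proposition 2.10 and the conormal relation Γ̄_i = Ā_{1i}:

```latex
% (35): approximate corrector (hedged reconstruction)
\lambda u^i_\lambda \;-\; L u^i_\lambda \;=\; b_i \quad \text{in } L^2(\Omega),
\qquad i = 1, \dots, d.

% Homogenized coefficients, with \zeta the matrix whose i-th column is
% \zeta_i = \lim_{\lambda \to 0} D u^i_\lambda:
\bar A \;=\; \mathbb{M}\big[(I + \zeta)^*\, a\, (I + \zeta)\big],
\qquad \bar\Gamma_i \;=\; \bar A_{1i}, \quad i = 1, \dots, d.
```

The bound Ā ≥ ΛI then follows directly from the ellipticity of a, since ξ*Āξ = M[|(I + ζ)ξ|²_a] ≥ Λ M[|(I + ζ)ξ|²] and the gradient part has zero mean.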

Homogenization
Homogenizing (1) consists in proving that the couple of processes (X^ε, K^ε)_ε converges as ε → 0 (in the sense of Theorem 1.2) towards the couple (X̄, K̄) solution of the RSDE (6). We also remind the reader that, for the time being, we work with the function χ(x) = e^{-2V(x)}. We shall see thereafter how the general case follows.
First we show that the family (X^ε, K^ε)_ε is compact in an appropriate topological space. Let us introduce the space D([0,T]; R_+) of nonnegative right-continuous functions with left limits on [0,T], equipped with the S-topology of Jakubowski (see Appendix F). The space C([0,T]; D̄) is equipped with the sup-norm topology. We have:

Proposition 2.12. Under the law P̄^ε, the family of processes (X^ε)_ε is tight in C([0,T]; D̄), and the family of processes (K^ε)_ε is tight in D([0,T]; R_+).

The main idea of the above result is originally due to Varadhan and is exposed in [16, Chap. 3] for stationary diffusions in random media. Roughly speaking, it combines exponential estimates for processes symmetric with respect to their IPM with the Garsia–Rodemich–Rumsey inequality. In our context, the pushing of the local time raises further technical difficulties when the process X^ε evolves near the boundary. Briefly, our strategy to prove Proposition 2.12 consists in applying the method of [16, Chap. 3] when the process X^ε evolves far from the boundary, say no closer to ∂D than a fixed distance θ, to obtain a first class of tightness estimates. Obviously, these estimates depend on θ. That dependence takes place in a penalty term related to the constraint of evolving far from the boundary. Then we let θ go to 0. The limit of the penalty term can be expressed in terms of the local time K^ε, in such a way that we get tightness estimates for the whole process X^ε (wherever it evolves). Details are set out in Appendix G.
It then remains to identify each possible weak limit of the family (X^ε, K^ε)_ε. To that purpose, we introduce the corrector u_λ ∈ L²(Ω; R^d), whose entries are given, for j = 1, ..., d, by the solutions u^j_λ of the resolvent equation (35). As explained in Section 2.5, the function u_λ is used to get rid of the highly oscillating term ε^{-1} b(τ_{X^ε_t/ε}ω) dt in (1) by applying the Itô formula. Indeed, since µ-almost surely the mapping x ↦ u_λ(τ_x ω) is smooth enough (see [8, Th. 6.17]), we can apply the Itô formula to the function x ↦ εu_λ(τ_{x/ε}ω). We obtain (40). By summing the relations (40) and (1) and by setting λ = ε², we deduce (41). So we make the term ε^{-1} b(τ_{X^ε_t/ε}ω) dt disappear, at the price of modifying the stochastic integral and the integral with respect to the local time. By using Theorem 2.9, we should be able to identify their respective limits. The corrective terms G_{1,ε} and G_{2,ε} should reduce to 0 as ε → 0. This is the purpose of the following proposition:

Proposition 2.13. For each subsequence of the family (X^ε, K^ε)_ε, we can extract a further subsequence (still indexed by ε > 0) such that: 1) under P̄^ε, the family of processes (X^ε, M^ε, K^ε)_ε converges in law, where the limit K̄ is a right-continuous increasing process.
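Written out, the decomposition (41) obtained by summing (40) and (1) with λ = ε² presumably reads as follows (a hedged reconstruction; Du_λ denotes the matrix with entries D_j u^i_λ):

```latex
X^\varepsilon_t \;=\; x \;+\; G_{1,\varepsilon}(t) \;+\; G_{2,\varepsilon}(t)
\;+\; M^\varepsilon_t \;+\; G_{3,\varepsilon}(t), \quad \text{where}
\\
G_{1,\varepsilon}(t) \;=\; \varepsilon\,\big[u_\lambda(\tau_{x/\varepsilon}\,\omega)
- u_\lambda(\tau_{X^\varepsilon_t/\varepsilon}\,\omega)\big],
\qquad
G_{2,\varepsilon}(t) \;=\; \varepsilon \int_0^t
u_\lambda\big(\tau_{X^\varepsilon_r/\varepsilon}\,\omega\big)\, dr,
\\
M^\varepsilon_t \;=\; \int_0^t \big(I + D u_\lambda\big)\,\sigma\,
\big(\tau_{X^\varepsilon_r/\varepsilon}\,\omega\big)\, dB_r,
\qquad
G_{3,\varepsilon}(t) \;=\; \int_0^t \big(I + D u_\lambda\big)\,\gamma\,
\big(\tau_{X^\varepsilon_r/\varepsilon}\,\omega\big)\, dK^\varepsilon_r .
```

With this bookkeeping, the martingale M^ε should produce the effective Brownian part with covariance Ā, while G_{3,ε} should produce the term Γ̄K̄, matching the annealed limit discussed in Section 2.7.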
Finally, we prove the convergence of (G_{3,ε})_ε with the help of Theorem 2.9. Indeed, Proposition 2.10 ensures the convergence of the family of gradients involved in G_{3,ε}. Since the convergence of each term in (41) is now established, it remains to identify the limiting equation. From Theorem F.2, we can find a countable subset S ⊂ [0,T[ such that the finite-dimensional distributions of the processes (X^ε, M^ε, K^ε)_ε converge along [0,T] \ S. So we can pass to the limit in (41) along s, t ∈ [0,T] \ S (s < t), and this leads to (42). Since (42) is valid for s, t ∈ [0,T] \ S (note that this set is dense and contains T) and since the processes are at least right-continuous, (42) remains valid on the whole interval [0,T]. As a by-product, K̄ is continuous and the convergence of (X^ε, M^ε, K^ε)_ε actually holds in the corresponding space of continuous functions. It remains to prove that K̄ is associated with X̄ in the sense of the Skorokhod problem, that is, to establish that {points of increase of K̄} ⊂ {t; X̄¹_t = 0}, or equivalently ∫_0^T X̄¹_r dK̄_r = 0. This results from the fact that ∀ε > 0, ∫_0^T X^{1,ε}_r dK^ε_r = 0, together with Lemma F.4. Since uniqueness in law holds for the solution (X̄, K̄) of equation (42) (see [23]), we have proved that each converging subsequence of the family (X^ε, K^ε)_ε converges in law in C([0,T]; D̄ × R_+) as ε → 0 towards the same limit, the unique solution (X̄, K̄) of (6). As a consequence, under P̄^ε, the whole sequence (X^ε, K^ε)_ε converges in law towards the couple (X̄, K̄) solution of (6).

Replication method
Let us use the shorthands $C_D$ and $C_+$ to denote the spaces $C([0,T], D)$ and $C([0,T], \mathbb{R}_+)$ respectively. Let $\bar E$ denote the expectation with respect to the law $\bar P$ of the process $(\bar X, \bar K)$ solving the RSDE (6) with initial distribution $\bar P(\bar X_0 \in dx) = e^{-2V(x)}\,dx$. From [23], the law $\bar P$ coincides with the averaged law $\int_D \bar P_x(\cdot)\, e^{-2V(x)}\,dx$, where $\bar P_x$ denotes the law of $(\bar X, \bar K)$ solving (42) and starting from $x \in D$.
We sum up the results obtained previously. We have proved the convergence, as $\varepsilon \to 0$, of $\bar E^\varepsilon[F(X^\varepsilon, K^\varepsilon)]$ towards $\bar E[F(\bar X, \bar K)]$ for each continuous bounded function $F : C_D \times C_+ \to \mathbb{R}$. This convergence result is often called annealed because $\bar E^\varepsilon$ is the averaging of the law $P^\varepsilon_x$ with respect to the probability measure $P^*_D$. In the classical framework of Brownian-motion-driven SDEs in random media (i.e. without the reflection term in (1)), it is plain to see that the annealed convergence of $X^\varepsilon$ towards a Brownian motion implies that, in $P^*_D$-probability, the law $P^\varepsilon_x$ of $X^\varepsilon$ converges towards that of a Brownian motion. To put it simply, we can drop the averaging with respect to $P^*_D$ and obtain a convergence in probability, which is a stronger result. Indeed, the convergence in law towards 0 of the correctors (by analogy, the terms $G^{1,\varepsilon}, G^{2,\varepsilon}$ in (41)) implies their convergence in probability towards 0. Moreover the convergence in $P^*_D$-probability of the law of the martingale term $M^\varepsilon$ in (41) is obvious since we can apply [9] for $P^*_D$-almost every $(x, \omega) \in D \times \Omega$. In our case, the additional term $G^{3,\varepsilon}$ destroys that simplicity: this term converges, under the annealed law $\bar P^\varepsilon$, towards a random variable $\bar\Gamma \bar K$, but there is no obvious way to upgrade annealed convergence to convergence in probability. That is the purpose of the computations below.
Remark and open problem. The above remark also raises the open problem of proving a so-called quenched homogenization result, that is, to prove the convergence of $X^\varepsilon$ towards a reflected Brownian motion for almost every realization $\omega$ of the environment and every starting point $x \in D$. The same arguments as above show that a quenched result should be much more difficult to obtain than in the stationary case [21].

So we have to establish the convergence in $P^*_D$-probability stated in (43). By using a specific feature of Hilbert spaces, namely that weak convergence together with convergence of the norms implies strong convergence, the convergence is established if we can prove the convergence of the norms as well as the weak convergence. Actually we only need to establish (43), because the weak convergence results from Section 2.6 as soon as (43) is established.
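The Hilbert-space fact invoked here follows from the identity $\|x_n - x\|^2 = \|x_n\|^2 - 2\langle x_n, x\rangle + \|x\|^2$: weak convergence makes the cross term converge, so convergence of norms is exactly what is missing for strong convergence. A finite-dimensional toy illustration (the vectors and dimension are assumptions for illustration):

```python
import numpy as np

d = 2_000
x = np.zeros(d); x[0] = 1.0

def xn(n):
    # x_n = x + e_n: converges weakly to x (inner products with any fixed
    # vector tend to 0) but ||x_n||^2 = 2, so no strong convergence.
    v = x.copy(); v[n] += 1.0
    return v

def yn(n):
    # y_n = x + e_n / n: weak convergence plus ||y_n|| -> ||x|| forces
    # strong convergence, by ||y_n - x||^2 = ||y_n||^2 - 2<y_n,x> + ||x||^2.
    v = x.copy(); v[n] += 1.0 / n
    return v

print(np.linalg.norm(xn(1000) - x))   # stays 1: norms do not converge
print(np.linalg.norm(yn(1000) - x))   # -> 0: strong convergence
```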
The following method is called the replication technique because the above quadratic mean can be thought of as the mean over two independent copies of the couple $(X^\varepsilon, K^\varepsilon)$. We consider two independent Brownian motions $(B^1, B^2)$ and solve (1) for each Brownian motion. This provides two independent (with respect to the randomness of the Brownian motion) couples of processes $(X^{\varepsilon,1}, K^{\varepsilon,1})$ and $(X^{\varepsilon,2}, K^{\varepsilon,2})$. Furthermore, we have the corresponding identity, where $E^\varepsilon_{xx}$ denotes the expectation with respect to the law $P^\varepsilon_{xx}$ of the process $(X^{\varepsilon,1}, K^{\varepsilon,1}, X^{\varepsilon,2}, K^{\varepsilon,2})$ when both $X^{\varepsilon,1}$ and $X^{\varepsilon,2}$ start from $x \in D$. Under $M^*_D P^\varepsilon_{xx}$, the results of subsections 2.3, 2.5 and Proposition 2.12 remain valid since the marginal laws of each couple of processes coincide with $\bar P^\varepsilon_x$. So we can repeat the arguments of subsection 2.6 and prove the convergence of the processes towards the solution of (44), where $(\bar B^1, \bar B^2)$ is a standard $2d$-dimensional Brownian motion and $\bar K^1, \bar K^2$ are the local times associated to $\bar X^1, \bar X^2$ respectively. Let $\bar P$ denote the law of $(\bar X^1, \bar K^1, \bar X^2, \bar K^2)$ with initial distribution given by $\bar P(\bar X^1_0 \in dx, \bar X^2_0 \in dy) = \delta_x(dy)\, e^{-2V(x)}\,dx$, and $\bar P_{xx}$ the law of $(\bar X^1, \bar K^1, \bar X^2, \bar K^2)$ solution of (44) where both $\bar X^1$ and $\bar X^2$ start from $x \in D$. To obtain (43), it just remains to remark that, under $\bar P_{xx}$, the couples $(\bar X^1, \bar K^1)$ and $(\bar X^2, \bar K^2)$ are adapted to the filtrations generated respectively by $\bar B^1$ and $\bar B^2$ and are therefore independent.
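The mechanism behind the replication technique can be checked on a toy model: for two copies that are independent conditionally on a common environment, $E[Z^1 Z^2] = E\big[E[Z \mid W]^2\big]$, which is how a quadratic mean is rewritten as a plain expectation over a doubled process. A minimal sketch (the Gaussian model, sample size and seed are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: environment W, and conditionally on W two independent
# "replicas" Z1, Z2 with the same conditional law (here Z = W + noise).
m = 200_000
W = rng.standard_normal(m)
Z1 = W + rng.standard_normal(m)
Z2 = W + rng.standard_normal(m)

# Replication identity: E[Z1 * Z2] = E[ E[Z | W]^2 ], because the two
# copies are independent given the environment.
lhs = np.mean(Z1 * Z2)
rhs = np.mean(W**2)   # here E[Z | W] = W, so E[E[Z|W]^2] = E[W^2]
print(lhs, rhs)
```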

Conclusion
We have proved Theorem 1.2 for any function $\chi$ that can be rewritten as $\chi(x) = e^{-2V(x)}$, where $V : \bar D \to \mathbb{R}$ is defined in (12). It is then plain to see that Theorem 1.2 holds for any nonnegative function $\chi$ not greater than $C e^{-2V(x)}$, for some positive constant $C$ and some function $V$ of the type (12). Theorem 1.2 thus holds for any continuous function $\chi$ with compact support over $\bar D$. Consider now a generic function $\chi : \bar D \to \mathbb{R}_+$ satisfying $\int_D \chi(x)\,dx = 1$ and $\chi' : \bar D \to \mathbb{R}_+$ with compact support in $\bar D$. Let $A^\varepsilon \subset \Omega \times D$ be defined in such a way that Theorem 1.2 holds for $\chi$ by density arguments. The proof is completed.

Green's formula:
We remind the reader of the Green formula (see [15, eq. 6.5]). We consider the following operator acting on $C^2(\bar D)$. Note that the Lebesgue measure on $D$ or $\partial D$ is indifferently denoted by $dx$, since the domain of integration avoids any confusion.

PDE results:
We also state some preliminary PDE results that we shall need in the forthcoming proofs. Proof. First of all, we remind the reader that all the coefficients involved in the operator $L^\varepsilon_V$ belong to $C^\infty_b(\bar D)$. From [13, Th. V.7.4], we can find a unique generalized solution $w'_\varepsilon$. From [13, IV.10], we can prove that $w'_\varepsilon$ is smooth up to the boundary. Then the function $w_\varepsilon$ is a classical solution to the problem (47).
Lemma A.2. The solution $w_\varepsilon$ given by Lemma A.1 admits the following probabilistic representation. Proof. The proof relies on the Itô formula (see for instance [10, Ch. II, Th. 5.1] or [6, Ch. 2, Th. 5.1]), applied to the function $(r, x, y) \mapsto w_\varepsilon(t - r, x)\exp(y)$ and to the triple of processes $\big(r, X^\varepsilon_r, \int_0^r g(X^\varepsilon_u)\,du\big)$. Since this is a quite classical exercise, we leave the details to the reader.

B Proofs of subsection 2.2
Proof of Lemma 2.1. 1) Fix $t > 0$. First we suppose that we are given a deterministic function $f \in C^\infty_c(D)$, where $L^\varepsilon_V$ is defined in (45). Moreover, Lemma A.2 provides the probabilistic representation, and the Green formula (46) then yields (48). It is readily seen that (48) also holds if we only assume that $f$ is a bounded and continuous function over $\bar D$: it suffices to consider a sequence $(f_n)_n \subset C^\infty_c(D)$ converging pointwise towards $f$ over $\bar D$. Since $f$ is bounded, we can assume that the sequence is uniformly bounded with respect to the sup-norm over $\bar D$. Since (48) holds for each $f_n$, it just remains to pass to the limit as $n \to \infty$ and apply the Lebesgue dominated convergence theorem.
We have proved that the measure $e^{-2V(x)}\,dx$ is invariant for the Markov process $X^\varepsilon$ (under $\bar P^\varepsilon_*$). Its semigroup thus uniquely extends to a contraction semigroup on $L^1(\bar D, e^{-2V(x)}\,dx)$.
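This invariance can be illustrated numerically in dimension one. The sketch below (the potential $V(x) = x$, the Euler scheme with reflection by absolute value, and all tolerances are assumptions for illustration) simulates a reflected Langevin diffusion on the half-line and checks that the time average matches the mean of the density proportional to $e^{-2V}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Reflected Langevin diffusion on (0, inf):
#   dX_t = -V'(X_t) dt + dB_t + dK_t,
# with the toy potential V(x) = x; the invariant measure should then be
# proportional to e^{-2V(x)} = e^{-2x}, i.e. the density 2 e^{-2x}.
dt, n = 1e-3, 2_000_000
X = np.empty(n); X[0] = 0.5
noise = np.sqrt(dt) * rng.standard_normal(n - 1)
for i in range(n - 1):
    # Euler step plus reflection at the boundary (a crude local time).
    X[i + 1] = abs(X[i] - 1.0 * dt + noise[i])

# Under the normalized density 2 e^{-2x}, the mean is 1/2.
print(X[n // 10:].mean())
```

The first 10% of the trajectory is discarded as burn-in; the time average then approximates the spatial mean $\int_0^\infty x \cdot 2e^{-2x}\,dx = 1/2$.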
Let us now focus on the second assertion. As previously, it suffices to establish the identity for some bounded continuous function $f : \partial D \to \mathbb{R}$. We can find a bounded continuous function $\bar f : \bar D \to \mathbb{R}$ whose restriction to $\partial D$ coincides with $f$ (choose for instance $\bar f = f \circ p$, where $p : \bar D \to \partial D$ is the orthogonal projection along the first coordinate axis). Recall now that the local time $K^\varepsilon_t$ is the density of occupation time at $\partial D$ (see [4, Prop. 1.19] with $\psi(x) = x_1$, $V_0 = \gamma$ and $a^2(x) = 1$). Hence, by using (48), the conclusion follows.

C Proofs of subsection 2.3
Generator on the random medium associated to the diffusion process inside D

Proof of Proposition 2.3. The first statement is a particular case, for instance, of [19, Lemma 6.2].
To follow the proof in [19], omit the dependency on the parameter $y$, take $H = 0$ and $\Psi = f$. To prove the second statement, choose $\varphi = w_\lambda$ in (19), plug in the resolvent relation, and the result follows. Proof of Lemma 2.2. The proof is quite similar to that of Proposition 2.6 below, so we leave the details to the reader.

Generator on the random medium associated to the reflection term
Proof of Proposition 2.4. The resolvent properties of the family $(R_\lambda)_\lambda$ are readily derived from those of the family $(G_\lambda)_\lambda$.
To establish the strong convergence, it suffices to prove the convergence of the norms. As a weak limit, $\bar\varphi$ satisfies $|\bar\varphi|_2 \le \liminf_{\lambda \to 0} |\lambda R_\lambda \varphi|_2$. Conversely, (49) yields the corresponding bound on the $\limsup$, and the strong convergence follows.
The remaining part of this section is concerned with the regularity properties of the operator $G_\lambda P^*$ (Propositions 2.5 and 2.6) and may be omitted on first reading. Indeed, though they may appear a bit tedious, the proofs are a direct adaptation of existing results for the corresponding operators defined on $D$ (not on $\Omega_+$). However, since we cannot quote proper references, we give the details.
Given $u \in L^2(\Omega_+)$, we shall say that $u$ is weakly differentiable if, for $i = 1, \ldots, d$, we can find some function $\partial_i u \in L^2(\Omega_+)$ such that, for any $g \in C_c(\Omega_+)$, the integration-by-parts identity holds. It is straightforward to check that a function $u \in W^1$ is weakly differentiable. For $k \ge 2$, the space $W^k$ is recursively defined as the set of functions $u \in W^1$ such that $\partial_i u$ is $k-1$ times weakly differentiable for $i = 1, \ldots, d$.
Proof of Proposition C.1. The strategy is based on the well-known method of difference quotients. Our proof, adapted to the context of random media, is based on [8, Sect. 7.11 & Th. 8.8]. The properties of difference quotients in random media are summarized below (see e.g. [19, Sect. 5]): i) for $j = 2, \ldots, d$, $r \in \mathbb{R} \setminus \{0\}$ and $g \in C_c(\Omega_+)$, we define the tangential difference quotient; ii) for each $r \in \mathbb{R} \setminus \{0\}$ and $g \in C_c(\Omega_+)$, we define the normal difference quotient; iii) for any $j = 1, \ldots, d$, $r \in \mathbb{R} \setminus \{0\}$ and $g, h \in C_c(\Omega_+)$, the discrete integration by parts holds, provided that $r$ is small enough to ensure that $\Delta^j_r g$ and $\Delta^j_r h$ belong to $C_c(\Omega_+)$; iv) for any $j = 1, \ldots, d$, $r \in \mathbb{R} \setminus \{0\}$ and $g \in C_c(\Omega_+)$ such that $\Delta^j_r g \in C_c(\Omega_+)$, we have the corresponding bound. Up to the end of the proof, the function $G_\lambda P^*\varphi$ is denoted by $u$. The strategy consists in differentiating the resolvent equation $B_\lambda(u, \cdot) = (\cdot, P^*\varphi)$ to prove that the derivatives of $u$ solve equations of the same type. For $p = 2, \ldots, d$, it raises no difficulty to adapt the method explained in [19, Sect. 5] and prove that the "tangential derivative" $\partial_p u$ belongs to $W^1$ and solves an equation of the same form. In particular, $\partial_{ij} u \in L^2(\Omega_+; \mu_+)$ for $(i,j) \neq (1,1)$. We let the reader check the details.
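The discrete integration by parts in iii) is a purely algebraic identity and can be checked on a toy periodic grid (the grid size, shift and random data are assumptions for illustration; the cyclic shift plays the role of the translations $\tau$):

```python
import numpy as np

rng = np.random.default_rng(3)

# Difference quotient (Delta_r g)(x) = (g(x + r e_j) - g(x)) / r on a
# periodic grid (a toy stand-in for the shifts tau_x of the medium).
def delta(g, r):
    return (np.roll(g, -r) - g) / r

g = rng.standard_normal(64)
h = rng.standard_normal(64)
r = 3

# Discrete integration by parts: sum (Delta_r g) h = - sum g (Delta_{-r} h).
lhs = np.sum(delta(g, r) * h)
rhs = -np.sum(g * delta(h, -r))
print(lhs, rhs)
```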
The main difficulty lies in the "normal derivative" $\partial_1 u$: we have to prove that $\partial_1 u$ is weakly differentiable. Actually, it just remains to prove that there exists a function $\partial^2_{11} u \in L^2(\Omega_+; \mu_+)$ such that (51) holds. To that purpose, we plug a generic function $g \in C_c(\Omega_+)$ into the resolvent equation (28). The boundary terms $(P^*\varphi, g) = (\varphi, g(0, \cdot))_2$ and $\lambda(Pu, Pg)_2$ vanish, and we isolate the term corresponding to $i = 1$ and $j = 1$ (recall that $a^{11} = 1$). Since $\partial_{ij} u \in L^2(\Omega_+; \mu_+)$ for $(i,j) \neq (1,1)$, we deduce that $\big|\int_{\Omega_+} \partial_1 u\, \partial_1 g \, d\mu_+\big| \le C |g|_2$ for some positive constant $C$. So the mapping $g \in C_c(\Omega_+) \mapsto \int_{\Omega_+} \partial_1 u\, \partial_1 g \, d\mu_+$ is $L^2(\Omega_+; \mu_+)$-continuous and there exists a unique function, denoted by $\partial^2_{11} u$, such that (51) holds. As a consequence, $\partial_1 u$ is weakly differentiable, that is, $u \in W^2$. Note that (50) only involves the functions $a$, $\varphi$ and their derivatives, in such a way that we can iterate the argument by differentiating (50), and so on. So the proof can be completed recursively.
Proof of Proposition 2.5. The function $u$ still stands for $G_\lambda P^*\varphi$. From Proposition C.1, we have $u \in \bigcap_{k=1}^\infty W^k$ and it is plain to deduce that, $\mu$ a.s., the trajectories of $u$ are smooth and (52) holds. We let the reader check that point (it is a straightforward adaptation of the fact that an infinitely weakly differentiable function $f : D \to \mathbb{R}$ is smooth). It remains to prove that $\tilde u_\omega$ solves (29). To begin with, we state the following lemma. Lemma C.2. For each function $v \in W^1$, define $\tilde v_\omega : (x_1, y) \in \bar D \mapsto v(x_1, \tau_{(0,y)}\omega)$. Then for every $\varrho \in C^\infty_c(\bar D)$ and $\psi \in C$, the stated identity holds, where the function $\psi * \varrho : \Omega_+ \to \mathbb{R}$ belongs to $W^1$. Then, by using Lemma C.2 and the above relation, we obtain the corresponding decomposition. In this expression, we replace $Lv_\lambda$ by $\lambda v_\lambda - f$, multiply both sides of the equality by $\varepsilon^2$ and isolate the term $f(\tau_{X^\varepsilon_t/\varepsilon}\omega)\,dt$. We obtain (56), involving the quantities $\Delta^{1,\varepsilon,\lambda}$, $\Delta^{2,\varepsilon,\lambda}$, $\Delta^{3,\varepsilon,\lambda}$, $\Delta^{4,\varepsilon,\lambda}$ and $\Delta^{5,\varepsilon,\lambda}$, which we now investigate. Using the Doob inequality and Lemma 2.1, we obtain a bound, for some positive constant $C$ only depending on $T$ and $|\sigma|_\infty$. Hence $\bar E^\varepsilon_*\big[\sup_{0 \le t \le T} |\Delta^{1,\varepsilon,\lambda}_t|^2\big] \to 0$ as $\varepsilon \to 0$, for each fixed $\lambda > 0$. Similarly, by using the boundedness of $a$, $\gamma$, $\partial_x V$, we can prove the analogous convergences. By taking the $\limsup_{\varepsilon \to 0}$ in (56) and by using the convergences of $\Delta^{1,\varepsilon,\lambda}$, $\Delta^{2,\varepsilon,\lambda}$, $\Delta^{4,\varepsilon,\lambda}$, $\Delta^{5,\varepsilon,\lambda}$, it is enough to consider the case $M^1[f] = 0$. Let us define, for any $\lambda > 0$, $u_\lambda = G_\lambda P^* f$ and $f_\lambda = R_\lambda f$, the definitions of which are given in Section 2.3 (boundary ergodic problems). We still use the notation $\tilde u^\lambda_\omega(x) = u_\lambda(x_1, \tau_{(0,y)}\omega)$ for any $x = (x_1, y) \in \bar D$. We remind the reader that the main regularity properties of the function $\tilde u^\lambda_\omega$ are summarized in Proposition 2.5. In particular, $\mu$ a.s., the mapping $x \mapsto \tilde u^\lambda_\omega(x)$ is smooth and we can apply the Itô formula. Hence (57) yields a decomposition, and the next step of the proof is to show that $\Delta^{1,\varepsilon}$, $\Delta^{2,\varepsilon}$, $\Delta^{3,\varepsilon}$ converge to 0 as $\varepsilon$ goes to 0, for each fixed $\lambda > 0$.
Clearly, from Proposition 2.6, we have the required bound. Let us now focus on $\Delta^{2,\varepsilon}_t$. We use the boundedness of $\partial_{x_j} V$, $a_{ij}$ ($1 \le i, j \le d$) and Lemma 2.1; we point out that the function $V$ is given by (12). By using Lemma 2.1 in the right-hand side of the previous inequality, we deduce, for any $\lambda > 0$, a bound whose last term can be made arbitrarily small: from Proposition 2.4, item 3, it suffices to choose $\lambda$ small enough. So we complete the proof. Proof of Theorem 2.9. 1) From (11), we only have to check that (32) holds under $\bar P^\varepsilon_*$. This follows from Theorem 2.7 and the estimate obtained with Lemma 2.1. The same argument holds for (33). Proof of (39). Fix $X \in \mathbb{R}^d$ whose entries are denoted by $(X_i)_{1 \le i \le d}$. Conversely, Lemma E.1 below, together with the relation it provides, yields $M[(X + \zeta X)^* a (D\varphi - \zeta X)] = 0$ for any $\varphi \in C$, so that (39) follows. By the way, (61) proves that $\bar A$ also matches $M[(I + \zeta^*) a]$. Now we prove $\Lambda I \le \bar A$. Fix $X \in \mathbb{R}^d$. From (5) and the Cauchy–Schwarz inequality, we get a lower bound whose last term is equal to 0 (Lemma E.1), and we complete the proof. Note that the above computations also prove $\bar\Gamma_1 = M[(e_1 + \zeta e_1)^* a e_1] = \bar A_{11} \ge \Lambda$.
Lemma E.1. The following relation holds. Proof. Since $b_i = \frac{1}{2} D_k a_{ik}$, the weak form of the resolvent equation (19) associated to $f = b_i$ reads as stated for any $\psi \in H$. By letting $\lambda$ go to 0 and by using (36), we obtain the desired identity. The result follows by linearity.
Lemma E.2. The projection operator $M^1$ satisfies the following elementary properties. Proof. Properties i) and ii) are easily derived from the defining identities, and iii) results from i). Details are left to the reader.

F J-topology
We summarize below the main properties of the Jakubowski topology (J-topology) on the space $D([0,T]; \mathbb{R})$ (the set of functions that are right-continuous with left limits on $[0,T]$) and refer the reader to [11] for further details and proofs. We denote by $V$ the set of functions $v : [0,T] \to \mathbb{R}$ with bounded variation. The J-topology is a sequential topology, defined by specifying which sequences converge. By gathering [11, Th. 3.8] and [11, Th. 3.10], one can state: Theorem F.2. Let $(V^\alpha)_\alpha \subset D([0,T]; \mathbb{R})$ be a family of nondecreasing stochastic processes. Suppose that the family $(V^\alpha(T))_\alpha$ is tight. Then the family $(V^\alpha)_\alpha$ is tight for the J-topology. Moreover, there exist a sequence $(V^n)_n \subset (V^\alpha)_\alpha$, a nondecreasing right-continuous process $V^0$ and a countable subset $C \subset [0,T[$ such that, for every finite sequence $(t_1, \ldots, t_p) \subset [0,T] \setminus C$, the family $(V^n(t_1), \ldots, V^n(t_p))_n$ converges in law towards $(V^0(t_1), \ldots, V^0(t_p))$.
Equip the set $V^+_c([0,T]; \mathbb{R})$ of continuous nondecreasing functions on $[0,T]$ with the J-topology and $C([0,T]; \mathbb{R})$ with the sup-norm topology. We claim the following lemma (Lemma F.3). Proof. This results from Corollary 2.9 in [11] and the Dini theorem.
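The Dini-type argument used here is that nondecreasing functions converging pointwise to a continuous (nondecreasing) limit converge uniformly. A toy numerical check (the specific functions and grid are assumptions for illustration):

```python
import numpy as np

# Dini-type phenomenon behind Lemma F.3: nondecreasing step functions
# converging pointwise to a continuous nondecreasing limit converge
# uniformly (checked here on a grid).
t = np.linspace(0.0, 1.0, 1001)
limit = t**2                       # continuous, nondecreasing

def vn(n):
    # Nondecreasing step approximations of t^2, with sup-error < 1/n.
    return np.floor(n * t**2) / n

sup_err = [np.max(np.abs(vn(n) - limit)) for n in (10, 100, 1000)]
print(sup_err)   # decreasing toward 0
```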
Lemma F.4. The following mapping is continuous. Proof. This results from Lemma F.3 and the continuity of the corresponding mapping when both $C([0,T]; \mathbb{R})$ and $V^+_c([0,T]; \mathbb{R})$ are equipped with the sup-norm topology. The reader may find a proof of the continuity of the latter mapping in the proof of Lemma 3.3 in [17] (remark that, in [17], the S-topology coincides on $C([0,T]; \mathbb{R})$ with the sup-norm topology).
for some constant C that only depends on T, Λ.
We easily deduce the proof of Proposition 2.12: we first work under $\bar P^\varepsilon_*$. Let us investigate the tightness of $X^\varepsilon$. Observe that the tightness of the martingale part follows from the boundedness of $\sigma$ and the Kolmogorov criterion, while the tightness of the remaining terms results from Proposition G.2. So $X^\varepsilon$ is tight under $\bar P^\varepsilon_*$.
We have thus shown that the proof of Proposition 2.12 boils down to establishing (66), so we now focus on the proof of (66). We want to adapt the arguments of [16, Chap. 3]. However, the situation is more complicated due to the pushing of the local time when $X^\varepsilon$ is located on the boundary $\partial D$. Our idea is to eliminate the boundary effects by first considering a truncated drift vanishing near the boundary: fix $\omega \in \Omega$ and a smooth function $\rho \in C^\infty_b(\bar D)$ satisfying $\rho(x) = 0$ whenever $x_1 \le \theta$, for some $\theta > 0$. For any $\varepsilon > 0$ and $j = 1, \ldots, d$, define the "truncated" drift
$$b^\varepsilon_{\rho,j}(x, \omega) = \frac{e^{2V(x)}}{2}\, \partial_{x_i}\big(e^{-2V(x)}\, a_{ij}(\tau_{x/\varepsilon}\omega)\,\rho(x)\big), \tag{68}$$
which belongs to $C^\infty_b(\bar D)$. Our strategy is the following: we derive exponential bounds for the process $\int_0^t b^\varepsilon_{\rho,j}(X^\varepsilon_r, \omega)\,dr$. These estimates will depend on $\rho$. Then we shall prove that we can choose an appropriate sequence $(\rho_n)_n \subset C^\infty_b(\bar D)$ preserving the exponential bounds and such that the sequence $\big(\int_0^t b^\varepsilon_{\rho_n,j}(X^\varepsilon_r, \omega)\,dr\big)_n$ converges as $n \to \infty$ towards the process involved in (66). The exponential bounds are derived from a proper spectral gap of the operator $L^\varepsilon_V \cdot + \kappa b^\varepsilon_{\rho,j}\,\cdot$ with boundary condition $\gamma_i(\tau_{x/\varepsilon}\omega)\,\partial_{x_i}\cdot = 0$ on $\partial D$. The particular truncation chosen in (68) is fundamental to establish such a spectral gap because it preserves the "divergence structure" of the problem. Any other (and maybe more natural) truncation fails to have satisfactory spectral properties.
So we define the corresponding set. Since $|u^{\varepsilon,\kappa}_\omega(0, \cdot)|^2_D = 1$, we complete the proof with the Gronwall lemma. Proposition G.4. For any $\kappa > 0$, $\varepsilon > 0$ and $0 \le s \le t \le T$,
$$\bar E^\varepsilon_*\Big[\exp\Big(\kappa \int_s^t b^\varepsilon_{\rho,j}(X^\varepsilon_r, \omega)\,dr\Big)\Big] \le 2 \exp\big(C\kappa^2 (t-s)\big)$$
for some constant $C$ only depending on $\Lambda$. As explained above, we can replace $\rho$ in Proposition G.4 with an appropriate sequence $(\rho_n)_n \subset C^\infty_b(\bar D)$ so as to make the sequence $\big(\int_0^t b^\varepsilon_{\rho_n,j}(X^\varepsilon_r, \omega)\,dr\big)_n$ converge as $n \to \infty$ towards the process involved in (66). Let us construct such a sequence. For each $n \in \mathbb{N}^*$, consider the piecewise affine function $\rho_n : \bar D \to \mathbb{R}$ defined below. It then remains to pass to the limit as $n \to \infty$ in (73).

For each $n$, $\rho_n(x)$ depends only on $x_1$: it vanishes for $x_1 \in [0, \frac{1}{n}]$, is affine on $[\frac{1}{n}, \frac{2}{n}]$, and equals 1 otherwise. Note that $\rho_n$ is continuous and $\sup_{x \in \bar D} |\rho_n(x)| \le 1$. With the help of a regularization procedure and Lemma 2.1, one can prove that Proposition G.4 remains valid for $\rho_n$ instead of $\rho$, where (72) reads
$$\int_0^t \big(\cdots(\tau_{X^\varepsilon_r/\varepsilon}\omega) - \partial_{x_i}V(X^\varepsilon_r)\, a_{ij}(\tau_{X^\varepsilon_r/\varepsilon}\omega)\big)\rho_n(X^\varepsilon_r)\,dr + \int_0^t a_{ij}(\tau_{X^\varepsilon_r/\varepsilon}\omega)\, n\, 1\!\mathrm{I}_{[\frac{1}{n},\frac{2}{n}]}(X^{1,\varepsilon}_r)\,dr. \tag{72}$$
The latter expression is obtained by expanding (68) with respect to the operator $\partial_{x_i}$. Since $\sup_{x \in \bar D} |\rho_n(x)| = 1$ for each $n$, we deduce that, for all $n \in \mathbb{N}$ and $0 \le s \le t \le T$,
$$\bar E^\varepsilon_*\Big[\exp\Big(\kappa \int_s^t b^\varepsilon_{\rho_n,j}(X^\varepsilon_r, \omega)\,dr\Big)\Big] \le 2 \exp\big(C\kappa^2 (t-s)\big). \tag{73}$$
Moreover, $\int_0^T \bar E^\varepsilon_*\big[|\lambda v_\lambda(\tau_{X^\varepsilon_r/\varepsilon}\omega)|\big]\,dr = T|\lambda v_\lambda|_1 \le T|\lambda v_\lambda|_2$. From Proposition 2.3, we have $|\lambda v_\lambda|_2 \to 0$ as $\lambda$ goes to 0. So it just remains to choose $\lambda$ small enough to complete the proof in the case of a smooth function $f \in C$. The general case follows from the density of $C$ in $L^1(\Omega)$ and Lemma 2.1. Proof of Theorem 2.8. Once again, from Lemma 2.1 and density arguments, it is sufficient to consider the case of a smooth function $f \in C$. Even if it means replacing... Proof of Proposition 2.10. The statement (36) is quite classical. The reader is referred to [16, Ch. 2] for an overview of the method and to [19, Prop. 4.3] for a proof in a more general context. Proof of Proposition 2.11. In what follows, for each $i = 1, \ldots, d$, $(\varphi^i_n)_n$ stands for a sequence in $C$ such that $D\varphi^i_n \to \zeta^i$ in $L^2(\Omega)^d$ as $n \to +\infty$. We have the following estimate of the modulus of continuity.