A NOTE ON REFLECTING BROWNIAN MOTIONS

We give another proof of the following result from a joint paper with Bálint Tóth: "A Brownian motion reflected on an independent time-reversed Brownian motion is a Brownian motion".


1. Introduction.
In the paper [STW], it was shown that "a Brownian motion reflected on an independent time-reversed Brownian motion is again a Brownian motion", by combining the fact that the analogous statement holds for simple random walks reflected on simple random walks with the invariance principle (i.e., the convergence of rescaled simple random walk towards Brownian motion). The purpose of the present short note is to give a direct proof in the continuous setting that does not rely on simple random walks. One motivation for the paper [STW] was the relation between families of coalescing/reflecting Brownian motions and the "true self-repelling motion", a one-dimensional self-repelling continuous process constructed in [TW]. The present approach seems to show that it is possible to compute explicitly the laws of various quantities related to this process.

In order to state the result more precisely, we need to introduce some notation. If f and g are two real-valued continuous functions defined on [0, 1] with g(0) > f(0), define the forward reflection of f on g as

f^g(t) = f(t) − max_{s∈[0,t]} ((f(s) − g(s))^+),   t ∈ [0, 1],   (1)

(where x^+ = max(x, 0) and x^− = max(−x, 0)). Note that f^g ≤ g and that f^g − f is constant on every interval where f^g ≠ g. These two properties in fact characterize f^g. Similarly, if g(0) < f(0), we can also define the forward reflection of f on g by

f^g(t) = f(t) + max_{s∈[0,t]} ((f(s) − g(s))^−),   t ∈ [0, 1].   (2)

Loosely speaking, the function f^g has the same increments as f as long as it stays away from g, but it gets a push that prevents it from crossing g whenever it hits g (it is an upwards push if f^g ≥ g and a downwards push if f^g ≤ g).
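On a discrete time grid, the forward reflection is a running-maximum computation. The following sketch is ours (not from the note): a minimal NumPy implementation covering both cases of f^g.

```python
import numpy as np

def forward_reflect(f, g):
    """Discrete forward reflection f^g of the path f on the path g.

    If g[0] > f[0]: f^g(t) = f(t) - max_{s<=t} (f(s) - g(s))^+  (push down).
    If g[0] < f[0]: f^g(t) = f(t) + max_{s<=t} (f(s) - g(s))^-  (push up).
    """
    f, g = np.asarray(f, float), np.asarray(g, float)
    if g[0] > f[0]:
        return f - np.maximum.accumulate(np.maximum(f - g, 0.0))
    return f + np.maximum.accumulate(np.maximum(g - f, 0.0))

# Example: f starts below g = 1, tries to cross, and is pushed back at contact.
f = np.array([0.0, 0.5, 1.2, 0.8, 1.5])
g = np.ones(5)
fg = forward_reflect(f, g)   # [0.0, 0.5, 1.0, 0.6, 1.0]
```

On this example one checks the two characterizing properties: f^g ≤ g, and f^g − f changes only at the contact times with g.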
Similarly, one can define backward reflection by letting time run backwards. For instance, if f(1) < g(1), the backward reflection of g on f is defined by

g_f(t) = g(t) + max_{s∈[t,1]} ((f(s) − g(s))^+),   t ∈ [0, 1].   (3)

Suppose now that (U_t, V_t)_{t∈[0,1]} is a pair of independent processes such that U is a Brownian motion started from x_0 and (V_{1−t})_{t∈[0,1]} is a Brownian motion started from y_1 (in short: V is a time-reversed Brownian motion started from V_1 = y_1). Define

X = U,   Y = V_U   and   X̃ = U^V,   Ỹ = V.

Note that X̃ and Y are almost surely well-defined, since P(V_0 = x_0) = P(U_1 = y_1) = 0. The above-mentioned result from [STW] can be stated as follows:

Proposition 1
The two pairs of processes (X_t, Y_t)_{t∈[0,1]} and (X̃_t, Ỹ_t)_{t∈[0,1]} have the same law.
In particular, X̃ = U^V has the same law as U. Proposition 1 was extended to families of coalescing and reflecting Brownian motions in [STW] (motivated by the relation with [TW]).
In [S], it was extended to conjugate diffusions. In both cases, discrete approximations are used. The approach developed in the present paper does not seem to be well-suited for such generalizations. The plan of the paper is the following. First, we show that the laws of (X_1, Y_0) and of (X̃_1, Ỹ_0) are identical by computing this law explicitly. Then, we use this to show that the finite-dimensional marginals of (X, Y) and (X̃, Ỹ) are also equal, and that Proposition 1 therefore holds. Finally, we give a description of the conditional law of (X, Y) given (X_1, Y_0) in terms of Brownian bridges. This is also closely related to time-reversal of reflected two-dimensional Brownian motion in a half-plane, as pointed out in [BN] and briefly recalled in Section 4.
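Before turning to the proofs, the objects U, V, X̃ = U^V and Y = V_U can be illustrated numerically. The sketch below is ours (x_0 = y_1 = 0 and the grid size are arbitrary choices); the backward reflection is implemented by reversing time and applying the forward reflection.

```python
import numpy as np

def forward_reflect(f, g):
    # Discrete analogue of the forward reflection f^g of the introduction.
    f, g = np.asarray(f, float), np.asarray(g, float)
    if g[0] > f[0]:
        return f - np.maximum.accumulate(np.maximum(f - g, 0.0))
    return f + np.maximum.accumulate(np.maximum(g - f, 0.0))

def backward_reflect(g, f):
    # Backward reflection g_f: reverse time, reflect forward, reverse back.
    return forward_reflect(np.asarray(g)[::-1], np.asarray(f)[::-1])[::-1]

rng = np.random.default_rng(0)
n, dt = 1000, 1.0 / 1000
# U: Brownian motion started from x_0 = 0.
U = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
# V: time-reversed Brownian motion with V_1 = y_1 = 0.
V = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])[::-1]

Xt = forward_reflect(U, V)   # discrete version of X~ = U^V
Y = backward_reflect(V, U)   # discrete version of Y = V_U
```

On the event {V_0 > x_0} the path X̃ stays below V, and on {U_1 < y_1} the path Y stays above U, matching the events {Ỹ ≥ X̃} and {Y ≥ X} used in the proof of Lemma 1 below.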

2. The law of the end-points.
Define U, V, X, Y, X̃, Ỹ as in the introduction. We are first going to focus on the laws of the endpoints.

Lemma 1
The two pairs of random variables (X_1, Y_0) and (X̃_1, Ỹ_0) have the same law.
Proof. Note that P(V_0 > x_0) = P(U_1 < y_1) and that these two events correspond (up to events of zero probability) to {Ỹ ≥ X̃} and {Y ≥ X} respectively. Combining this with a symmetry argument shows that, in order to derive the lemma, it suffices to show that 1_{U_1 < y_1} (X_1, Y_0) and 1_{V_0 > x_0} (X̃_1, Ỹ_0) have the same law. Define

L_t = max_{s∈[t,1]} ((U_s − V_s)^+)   and   L̃_t = max_{s∈[0,t]} ((U_s − V_s)^+).   (4)

Recall that when U_1 < y_1, then Y = V + L, so that L is the local time process of Y on X (the interpretation of L as a local time process is discussed in [BN]). Similarly, X̃ = U − L̃ when V_0 > x_0, up to events of zero probability. Hence, in order to prove Lemma 1, it is sufficient to show that the conditional distribution of (X_1, Y_0, L_0) given {U_1 < y_1, L_0 > 0} and the conditional distribution of (X̃_1, Ỹ_0, L̃_1) given {V_0 > x_0, L̃_1 > 0} are identical.
We now compute these laws explicitly. Let us first condition U and V on the values of U_1 = x_1 and V_0 = v_0, where x_1 < y_1 and v_0 ∈ R. The processes U and V are now two independent Brownian bridges. According to (4),

L_0 = max_{s∈[0,1]} ((U_s − V_s)^+),

and U − V is a Brownian bridge from x_0 − v_0 to x_1 − y_1 with quadratic variation 2dt. Recall that the reflection principle shows that if B denotes the value of a Brownian motion at time 2, and S its maximum up to the same time, then for all s > 0 and b < s,

P(S ≥ s, B ≤ b) = P(B ≥ 2s − b).

Hence, the joint density of (S, B) is

((2s − b)/(2√π)) exp(−(2s − b)²/4).

Consequently, for each fixed b, the density of the maximum of the bridge from 0 to b is equal to the above expression multiplied by the renormalizing factor 2√π exp(b²/4). It follows that the supremum of a bridge with quadratic variation 2dt from a to b has density

(2s − a − b) exp(−(s − a)(s − b)),   s > max(a, b).

Hence, the density of L_0, conditionally on L_0 > 0, is proportional to

(2ℓ − (x_0 − v_0) − (x_1 − y_1)) exp(−(ℓ − x_0 + v_0)(ℓ − x_1 + y_1))

for ℓ > max(x_0 − v_0, 0). It therefore follows that the joint density of (X_1, V_0, L_0) on {L_0 > 0, X_1 < y_1} is proportional to

exp(−(x_1 − x_0)²/2 − (v_0 − y_1)²/2) (2ℓ − x_0 + v_0 − x_1 + y_1) exp(−(ℓ − x_0 + v_0)(ℓ − x_1 + y_1)).

Using Y_0 = V_0 + L_0, straightforward computations yield that the joint density of (X_1, Y_0, L_0) on {L_0 > 0, X_1 < y_1} is proportional to

(ℓ + (y_0 − x_0) + (y_1 − x_1)) exp(−(x_1 − x_0)²/2 − (y_1 − y_0 + ℓ)²/2 − (y_0 − x_0)(y_1 − x_1 + ℓ)).   (5)

The same method can be applied to compute the joint law of (X̃_1, Ỹ_0, L̃_1), and one then checks that on the event {L̃_1 > 0, V_0 > x_0} one gets exactly the same density, which proves Lemma 1.
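The "one then checks" step can be made explicit; the shorthand below is our reconstruction, not notation from the note. Write p = x_1 − x_0, q = y_1 − y_0, A = y_0 − x_0 and B = y_1 − x_1, so that p + B = q + A (= y_1 − x_0). Both joint densities carry the same prefactor, proportional to ℓ + A + B, and their exponents agree:

```latex
\begin{aligned}
E &= -\frac{p^2}{2} - \frac{(q+\ell)^2}{2} - A(\ell+B)
  &&\text{(density of } (X_1, Y_0, L_0)\text{)},\\
\widetilde E &= -\frac{(p+\ell)^2}{2} - \frac{q^2}{2} - B(\ell+A)
  &&\text{(density of } (\widetilde X_1, \widetilde Y_0, \widetilde L_1)\text{)},\\
E - \widetilde E &= \ell\,(p - q + B - A) = 0 .
\end{aligned}
```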

3. Finite-dimensional marginals.
We are now going to deduce Proposition 1 from Lemma 1. The laws of the continuous processes X and Y are determined by their finite-dimensional marginals. Let therefore 0 = t_0 < t_1 < · · · < t_k = 1 be fixed. Define, for all j ∈ {0, . . . , k}, the processes X^(j) and Y^(j) as follows. On [0, t_j], X^(j) = U, and on [t_j, 1], Y^(j) = V. Then, on [t_j, 1], X^(j) is obtained by reflecting U on V, and on [0, t_j], Y^(j) is obtained by reflecting V (backwards) on U. Note that (X^(0), Y^(0)) = (X̃, Ỹ) and that (X^(k), Y^(k)) = (X, Y). We now fix j ∈ {0, . . . , k − 1}. Let us first compare the laws of (U|[0,t_j], X^(j)(t_{j+1}), Y^(j)(t_j), V|[t_{j+1},1]) and of (U|[0,t_j], X^(j+1)(t_{j+1}), Y^(j+1)(t_j), V|[t_{j+1},1]). If we condition on U|[0,t_j] and V|[t_{j+1},1], then on the interval [t_j, t_{j+1}]:

• Y^(j) is a backward Brownian motion started from Y^(j)(t_{j+1}) = V(t_{j+1}), and X^(j) is a forward Brownian motion started from X^(j)(t_j) = U(t_j), reflected on Y^(j).
• X^(j+1) is a forward Brownian motion started from X^(j+1)(t_j) = U(t_j), and Y^(j+1) is a backward Brownian motion started from Y^(j+1)(t_{j+1}) = V(t_{j+1}), reflected on X^(j+1).
But Lemma 1 (appropriately scaled) then precisely shows that the conditional laws of (X^(j)(t_{j+1}), Y^(j)(t_j)) and of (X^(j+1)(t_{j+1}), Y^(j+1)(t_j)) given (U|[0,t_j], V|[t_{j+1},1]) are identical. Finally, note that for both i = j and i = j + 1, the conditional law of X^(i)|[t_{j+1},1] given (U|[0,t_j], V|[t_{j+1},1], X^(i)(t_{j+1}), Y^(i)(t_j)) is that of a forward Brownian motion started from X^(i)(t_{j+1}) and reflected on V|[t_{j+1},1]. Hence, putting the pieces together, we see that the laws of

(U|[0,t_j], X^(i)|[t_{j+1},1], Y^(i)(t_j), V|[t_{j+1},1])

for i = j and for i = j + 1 are identical. In particular, the laws of

(X^(i)(t_1), . . . , X^(i)(t_k), Y^(i)(t_0), . . . , Y^(i)(t_{k−1}))

are identical for i = j and i = j + 1. Hence, they also coincide for i = 0 and i = k, and Proposition 1 follows.
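The interpolating family X^(j), Y^(j) is easy to simulate. The sketch below is our construction (discrete grid, reflection maps as in the introduction); it checks the two boundary cases (X^(0), Y^(0)) = (X̃, Ỹ) and (X^(k), Y^(k)) = (X, Y).

```python
import numpy as np

def forward_reflect(f, g):
    # Discrete forward reflection f^g (both cases).
    f, g = np.asarray(f, float), np.asarray(g, float)
    if g[0] > f[0]:
        return f - np.maximum.accumulate(np.maximum(f - g, 0.0))
    return f + np.maximum.accumulate(np.maximum(g - f, 0.0))

def backward_reflect(g, f):
    # Backward reflection g_f via time reversal.
    return forward_reflect(np.asarray(g)[::-1], np.asarray(f)[::-1])[::-1]

def hybrid(U, V, m):
    """Discrete (X^(j), Y^(j)) with t_j at grid index m: X^(j) follows U up to
    m and is reflected forward on V afterwards; Y^(j) follows V after m and
    is reflected backwards on U before m."""
    X = np.asarray(U, float).copy()
    Y = np.asarray(V, float).copy()
    X[m:] = forward_reflect(U[m:], V[m:])
    Y[:m + 1] = backward_reflect(V[:m + 1], U[:m + 1])
    return X, Y

rng = np.random.default_rng(2)
n = 500
dt = 1.0 / n
U = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
V = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])[::-1]

X0, Y0 = hybrid(U, V, 0)     # should equal (X~, Y~) = (U^V, V)
Xn, Yn = hybrid(U, V, n)     # should equal (X, Y) = (U, V_U)
m = n // 2
Xm, Ym = hybrid(U, V, m)     # an intermediate hybrid process
```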

4. Two-dimensional interpretation.
We first briefly recall an observation due to [BN] that relates Brownian motion reflected on Brownian motion to two-dimensional Brownian motion in a half-plane with oblique reflection. Suppose for the moment that (α_t, β_t)_{t≥0} is a two-dimensional Brownian motion in the upper half-plane, with orthogonal reflection on the horizontal axis. In other words, α is an ordinary Brownian motion and β is a reflected Brownian motion independent of α (here and in the sequel, "reflected Brownian motion", without further details, means one-dimensional Brownian motion reflected on the zero function, or equivalently the absolute value of a Brownian motion). We suppose moreover that α_0 = β_0 = 0. Let L = L(β) denote the local time at zero of β, normalized in such a way that γ := β − L is a Brownian motion. Recall that −L_t = inf{γ_s : s ≤ t}.
Let θ ∈ (0, π) be fixed and put λ = cot θ. The process δ^θ := (δ^θ_t)_{0≤t≤T} := (α_t + λL_t, β_t)_{0≤t≤T} is a Brownian motion in the upper half-plane reflected on the real line with reflection angle θ. It is not difficult to see that the time-reversal of this process is a Brownian motion reflected on the real line with reflection angle π − θ. Let T denote the first time at which L_t hits one. Then the law of δ̂^θ := (δ^θ_{T−t} − δ^θ_T)_{0≤t≤T} is identical to that of δ^{π−θ} (see [BN]). Let us now focus on the special case λ = 1, i.e. θ = π/4. Define

X_t = (α_t + L_t − β_t)/√2   and   Y_t = (α_t + L_t + β_t)/√2.

Note that (X_t, 0 ≤ t ≤ T) is a Brownian motion and that Y ≥ X. Moreover, Y is the sum of a Brownian motion independent of X and of the non-decreasing continuous process √2 L_t, which increases only when Y = X. Hence, as observed in [BN], (Y_t, 0 ≤ t ≤ T) is a Brownian motion reflected on X, as defined in the introduction. The same argument applied to the reversed process shows that Ŷ = (Y_{T−t} − Y_T)_{0≤t≤T} is a Brownian motion, and that X̂ = (X_{T−t} − X_T)_{0≤t≤T} is a Brownian motion reflected on Ŷ (reflected "downwards"). In other words, (X_t, Y_t)_{0≤t≤T} and (−Ŷ_t, −X̂_t)_{0≤t≤T} have the same law. This is reminiscent of Proposition 1, but here the time at which time-reversal takes place is the stopping time T. This is a simplification since X_T = Y_T, so that there is no need to condition on the value of Y_T − X_T. Also, both X and Y are here "forward" processes, so that the filtrations do not mix up in a complicated way.
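A discrete sketch of this 45-degree rotation (our code; the normalization of L follows the identity −L_t = inf{γ_s : s ≤ t} recalled above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 20000, 1.0 / 20000
# Two independent Brownian motions alpha and gamma.
alpha = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
gamma = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# -L_t = inf_{s<=t} gamma_s, and beta = gamma + L is the reflected motion.
L = -np.minimum.accumulate(np.minimum(gamma, 0.0))
beta = gamma + L

# 45-degree rotation of the obliquely reflected process (alpha + L, beta):
X = (alpha + L - beta) / np.sqrt(2)   # = (alpha - gamma)/sqrt(2), a BM
Y = (alpha + L + beta) / np.sqrt(2)   # = X + sqrt(2)*beta, so Y >= X
```

By construction, Y − X = √2 β ≥ 0, and L (hence Y − X's pushing term) increases only at times when β = 0, i.e. when Y = X, as stated above.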
In Proposition 1, we consider simultaneously one forward process and one backward process. In order to adapt the previous approach when T = 1 is deterministic, it is natural to condition on the values of the end-points, thus transforming the Brownian motions into Brownian bridges, which can be seen as "forward" as well as "backward" processes. We now briefly indicate how one can adapt this two-dimensional argument to our setup.

5. The conditional law of (X, Y).

The proof of Lemma 1 gives the explicit expression (5) of the law of (X_1, Y_0, L_0), and also shows that it is possible to describe the conditional law of (U, V) given X_1 = x_1, Y_0 = y_0 and L_0 = ℓ (when x_0 < y_0, x_1 < y_1 and ℓ > 0) as follows. Define two processes α and γ by

α_t = (U_t + V_t)/√2   and   γ_t = (V_t − U_t)/√2.

Then α and γ are conditionally independent, and their conditional laws are given by:

• α is a Brownian bridge from (x_0 + y_0 − ℓ)/√2 to (x_1 + y_1)/√2;

• γ is a Brownian bridge from (−x_0 + y_0 − ℓ)/√2 to (−x_1 + y_1)/√2 conditioned by the value of its minimum, −ℓ/√2.
(recall that the projections of a two-dimensional Brownian bridge on an orthogonal basis of the plane are two independent one-dimensional Brownian bridges). See e.g. [PY] for the fact that all these conditionings make sense. Then, X and Y are determined by α and γ as

X_t = (α_t − γ_t)/√2   and   Y_t = (α_t + γ_t)/√2 + √2 max_{s∈[t,1]} (γ_s^−).

We now want to describe the conditional laws of X and Y in a more symmetric way. Define

β_t = γ_t + max_{s∈[t,1]} (γ_s^−).

If γ were an unconditioned backward Brownian motion, then β would be a backward reflected Brownian motion (i.e., β_{1−t} would have the law of the absolute value of a Brownian motion). Note also that the correspondence between β and γ is one-to-one and that max_{[0,1]} γ^− can be interpreted as the local time of β at level 0 on the time-interval [0, 1]. We denote by A the local time measure of β at level zero, in order to distinguish it from the local time L of the previous sections (they differ by a scaling factor √2). Then, the (conditional) law of β is that of a backward reflected Brownian motion started from β_1 = (y_1 − x_1)/√2, conditioned by A[0, 1] = ℓ/√2 and by β_0 = (y_0 − x_0)/√2. Since the time-reversal of a Brownian bridge is also a Brownian bridge, and since the total local time at level zero is a deterministic functional of β that is invariant under time-reversal, it follows that β can also be viewed as a forward reflected Brownian motion started from β_0 = (y_0 − x_0)/√2, conditioned by β_1 = (y_1 − x_1)/√2 and A[0, 1] = ℓ/√2. Then, we get that

X_t = (α_t − β_t + A[t, 1])/√2   and   Y_t = (α_t + β_t + A[t, 1])/√2,

where A[t, 1] denotes the local time of β at level zero on the time-interval [t, 1].
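In terms of α, β and A, the processes X and Y can be written out by simple algebra; the following verification is ours, using A[t, 1] = max_{s∈[t,1]} γ_s^− (so that γ_t = β_t − A[t, 1]) and L_t = √2 A[t, 1]:

```latex
\begin{aligned}
X_t &= U_t = \frac{\alpha_t - \gamma_t}{\sqrt2}
     = \frac{\alpha_t - \beta_t + A[t,1]}{\sqrt2},\\
Y_t &= V_t + L_t = \frac{\alpha_t + \gamma_t}{\sqrt2} + \sqrt2\,A[t,1]
     = \frac{\alpha_t + \beta_t + A[t,1]}{\sqrt2}.
\end{aligned}
```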