Attractors and Expansion for Brownian Flows

We show that a stochastic flow which is generated by a stochastic differential equation on $\mathbb{R}^d$ with bounded volatility has a random attractor provided that the drift component in the direction towards the origin is larger than a certain strictly positive constant $\beta$ outside a large ball. Using a similar approach, we provide a lower bound for the linear growth rate of the inner radius of the image of a large ball under a stochastic flow in case the drift component in the direction away from the origin is larger than a certain strictly positive constant $\beta$ outside a large ball. To prove the main result we use chaining techniques in order to control the growth of the diameter of subsets of the state space under the flow.


Introduction
It has been suggested that stochastic flows can be used as a model for studying the spread of passive tracers within a turbulent fluid. The individual particles (one-point motions) perform diffusions, while the motions of adjacent particles are correlated so as to form a stochastic flow of homeomorphisms. Infinitesimally, the flow is governed by a stochastic field of continuous semimartingales F(t, x) via a stochastic differential equation (SDE) of Kunita type, φ_{s,t}(x) = x + ∫_s^t F(du, φ_{s,u}(x)).
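To make the one-point/two-point picture concrete, here is a minimal numerical sketch (not part of the paper's argument): an Euler-Maruyama discretization in d = 1 with an illustrative Lipschitz drift b(x) = −x and additive noise, where all initial points are driven by the same Brownian increments, so that their one-point motions are correlated as in a stochastic flow.

```python
import math
import random

def simulate_flow(initial_points, T=10.0, n_steps=2000, sigma=0.5, seed=0):
    """Euler-Maruyama scheme for dX = b(X) dt + sigma dW, driving every
    initial point with the SAME Brownian increments, as in a stochastic flow."""
    rng = random.Random(seed)
    h = T / n_steps
    b = lambda x: -x                       # illustrative Lipschitz drift towards 0
    xs = list(initial_points)
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(h))  # common noise -> correlated motions
        xs = [x + b(x) * h + sigma * dW for x in xs]
    return xs

start = [-2.0, -1.0, 0.0, 1.0, 2.0]
end = simulate_flow(start)
# a one-dimensional flow of homeomorphisms preserves the order of initial points,
# and for this contracting drift the spread of the points shrinks
assert end == sorted(end)
assert end[-1] - end[0] < 0.01
```

For additive noise the differences of trajectories evolve deterministically, which is why the contraction of the spread here is exact rather than merely typical.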
We will be interested in questions concerning the asymptotics of these flows. We aim at conditions for the existence of a random (pullback) attractor.
One might expect that an SDE with bounded and Lipschitz diffusion coefficient and Lipschitz continuous drift b, whose component in the direction of the origin is positive and bounded away from zero outside a large ball, should have a random attractor, i.e. large balls should contract under the solution (semi-)flow generated by the SDE and converge (in an appropriate sense) towards a stationary process taking values in the space of compact subsets of the state space R^d. It is clear that under the conditions above the drift is sufficiently strong to push individual trajectories towards the origin when they are far away, but it may happen that the drift is not sufficiently strong to push all trajectories starting far away towards the origin simultaneously. In fact, it may happen that a non-empty set of initial conditions (depending on the future of the driving noise process) moves towards infinity against the drift, with linear speed. Our main result shows, however, that there exists a number β_0, which is strictly positive in case d ≥ 2 and zero for d = 1 and which depends on the parameters of the SDE, such that if the component of the drift b in the direction of the origin is larger than β_0 outside a large ball, then all trajectories are attracted towards the origin and a random attractor (in the pullback sense) exists.
Our main result, Theorem 3.1, contains a second statement which is in some sense dual to the first: if the component β of the drift b in the direction away from the origin is larger than β_0, then large balls are very likely to expand, in the sense that each compact set is eventually contained in (or swallowed by) the image of the ball under the flow; the probability that this fails decreases to zero as the radius of the initial ball tends to infinity. In fact the results are even stronger: the speed of expansion is at least linear, with rate at least β − β_0. Similarly, we show that the speed of attraction in the first result is also at least linear.
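The two one-point regimes can be illustrated numerically. The sketch below (illustrative only: the drift b(x) = βx/|x| and all parameters are choices made here, and a single trajectory cannot exhibit the flow-level threshold β_0 of the theorem) simulates the one-point motion in d = 2 with a purely radial drift of strength β.

```python
import math
import random

def radial_sde(beta, x0=(50.0, 0.0), T=20.0, n=4000, seed=3):
    """Euler scheme for dX = beta * X/|X| dt + dW in d = 2 (illustrative drift)."""
    rng = random.Random(seed)
    h = T / n
    x, y = x0
    for _ in range(n):
        r = max(math.hypot(x, y), 1e-9)   # avoid division by zero at the origin
        x += beta * (x / r) * h + rng.gauss(0.0, math.sqrt(h))
        y += beta * (y / r) * h + rng.gauss(0.0, math.sqrt(h))
    return math.hypot(x, y)

# inward drift (beta < 0): a trajectory started far away is pulled towards the origin,
# its norm decreasing roughly linearly at rate |beta|
assert radial_sde(-2.0) < 25.0
# outward drift (beta > 0): the norm grows roughly linearly, here by about beta*T
assert radial_sde(2.0) > 70.0
```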
To prove the results, we provide bounds on the one-point motion of the solutions (these are fairly standard) and also estimates for the two-point motion (which are not as standard) which are needed to apply chaining methods in order to control the growth of sets (often small balls) under the action of the solution flow. We will be more explicit about our strategy in Section 4.
The paper is organized as follows: in the next section we provide the set-up and some basic definitions. In Section 3 we state the main result. Section 4 contains the proof. In the Appendix, we collect some auxiliary results.

Set-up and preliminaries
Let (Ω, F, (F_t)_{t≥0}, P) be a filtered probability space satisfying the usual conditions. On this probability space we define a jointly continuous martingale field M(t, x, ω), (t, x) ∈ [0, ∞) × R^d. We assume that the joint quadratic variation is of the form ⟨M(·, x), M(·, y)⟩_t = t a(x, y) for a deterministic function a, that M is a Gaussian field, and that t ↦ M(t, x) is a Brownian motion (up to a linear transformation) for each x. Further, we let b : R^d → R^d be a (drift) vector field. We consider a stochastic flow generated via a stochastic differential equation (SDE) of Kunita type, see [9]. We abbreviate A(x, y) := a(x, x) − a(x, y) − a(y, x) + a(y, y). Observe that t A(x, y) is the joint quadratic variation of the difference M(·, x) − M(·, y). We impose the following Lipschitz-type condition:

Condition (A1) There are constants λ ≥ 0 and σ_L > 0 such that for all x, y ∈ R^d we have |b(x) − b(y)| ≤ λ|x − y| and ‖A(x, y)‖ ≤ σ_L²|x − y|².

Here | · | denotes the Euclidean norm on R^d and ‖ · ‖ the operator norm for d × d matrices. It is essentially well known that under assumption (A1) the SDE (2.1) has a unique solution for each x and s. Indeed, this follows from a straightforward modification of Theorem 3.4.6 in [9] (this theorem requires a linear growth condition of the form |b(x) · x| ≤ c(1 + |x|²) but really only uses an estimate of the form b(x) · x ≤ c(1 + |x|²), which is an easy consequence of Condition (A1)). Further, Theorem 4.7.1 in [9] shows that equation (2.1) generates a stochastic flow of local homeomorphisms (defined in [9], p. 177). Theorem 5.1 together with Lemma 5.3 shows that (a modification of) this flow is actually strongly complete (or strictly complete), i.e. the map (s, t, x) ↦ φ_{s,t}(x) is jointly continuous almost surely. Additionally, φ has stationary and independent increments and is therefore called a (time-homogeneous) Brownian flow. φ can be extended uniquely (in law) to −∞ < s ≤ t < ∞ in such a way that stationarity, independent increments and properties (i), (ii), (iii) and (iv) are preserved. In this case we will call φ the flow generated by the SDE (2.1). Note that, in general, φ_{s,t}(ω) is not onto (not even in the deterministic case).
Assumption (A1) allows, for example, a drift b(x) = −|x|²x, whose solution flow is not onto.
Observe that flows generated via SDEs driven by finitely many independent Brownian motions fit into this framework. We will now formulate additional conditions which we will use in our main result.
For a given value of β ∈ R, the following conditions require that the component of the drift in the radial direction is asymptotically bounded from above and from below, respectively, by β.

Attractors
In this section we give a brief introduction to the concept of a random attractor. Let (E, d) be a Polish (i.e. a separable complete metric) space and let E be its Borel σ-algebra.
Definition 2.1. (a) (Ω, F_0, P, (θ_t)_{t∈R}) is called a metric dynamical system (MDS) if (Ω, F_0, P) is a probability space and the family of mappings {θ_t : Ω → Ω | t ∈ R} satisfies
(i) (ω, t) ↦ θ_t(ω) is measurable,
(ii) θ_0 = id_Ω,
(iii) θ_{s+t} = θ_s ∘ θ_t for all s, t ∈ R,
(iv) for each t ∈ R, θ_t preserves the measure P.
(b) A map ϕ : [0, ∞) × E × Ω → E is called a (continuous) random dynamical system (RDS), or cocycle, on E over the MDS (Ω, F_0, P, (θ_t)_{t∈R}) if ϕ is measurable, x ↦ ϕ(t, x, ω) is continuous for each t and ω, ϕ(0, ·, ω) = id_E and ϕ(t + s, x, ω) = ϕ(t, ϕ(s, x, ω), θ_s(ω)) for all s, t ≥ 0, x ∈ E and ω ∈ Ω.
The following definition is due to Crauel and Flandoli, see [8].
Definition 2.2. Let ϕ be an RDS on E over the MDS (Ω, F_0, (θ_t)_{t∈R}, P). The random set A(ω) is called an attractor for ϕ if
(a) A(ω) is a random element in the metric space of nonempty compact subsets of E equipped with the Hausdorff distance,
(b) A is ϕ-invariant, i.e. ϕ(t, A(ω), ω) = A(θ_t(ω)) for all t ≥ 0, almost surely,
(c) for every compact set B ⊆ E, lim_{t→∞} dist(ϕ(t, B, θ_{−t}(ω)), A(ω)) = 0 almost surely, where dist denotes the Hausdorff semi-distance.
Remark. Attractors as in the previous definition are often called pullback attractors. If almost sure convergence in part (c) of the definition is replaced by convergence in probability, then A is called a weak attractor, see [12]. For a comparison of different concepts of a random attractor for one-dimensional diffusions, see [13].
We will need the following criterion for the existence of an attractor (a much more general result can be found in [7]). For simplicity we formulate it only in case E = R d equipped with the Euclidean metric. Let B r be the closed ball with center 0 and radius r.
Proposition 2.3. Let ϕ : [0, ∞) × R^d × Ω → R^d be a continuous cocycle over the metric dynamical system (Ω, F_0, P, (θ_t)_{t∈R}). Then the following are equivalent:
(i) ϕ has an attractor;
(ii) for each ε > 0 there exists some r > 0 such that, for every n ∈ N, P(B_n ⊆ ϕ^{−1}(t, B_r, θ_{−t}ω) for all sufficiently large t) ≥ 1 − ε.
Proof. If ϕ has an attractor A, then for each ε > 0 there exists r > 0 such that A is contained in B_{r−1} with probability at least 1 − ε. Part (c) of Definition 2.2 then shows that, on this event, the pullback image of each ball B_n is eventually contained in B_r, and (ii) follows. Conversely, (ii) implies the existence of a random absorbing set, which in turn is sufficient for the existence of an attractor (for details, see [8] or [7]).
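The pullback mechanism behind Definition 2.2 (c) and Proposition 2.3 can be seen numerically in the simplest example: for the one-dimensional Ornstein-Uhlenbeck SDE dX = −X dt + dW (an illustrative choice), starting the flow at time −t and running it to time 0 along one fixed noise path sends every initial condition to the same random point A(ω) as t → ∞. A minimal sketch:

```python
import math
import random

def pullback(x0, t_start, increments, h):
    """Euler scheme for the 1-d SDE dX = -X dt + dW, run from time -t_start up
    to time 0, always reusing the SAME two-sided noise path (pullback)."""
    start_index = len(increments) - round(t_start / h)
    x = x0
    for dW in increments[start_index:]:
        x = x - x * h + dW
    return x

h, T_max = 0.01, 30.0
rng = random.Random(1)
noise = [rng.gauss(0.0, math.sqrt(h)) for _ in range(round(T_max / h))]

# pulling the starting time further into the past, all initial conditions arrive
# near one and the same random point A(omega) at time 0
values = [pullback(x0, t, noise, h) for x0 in (-5.0, 0.0, 5.0) for t in (20.0, 30.0)]
assert max(values) - min(values) < 1e-6
```

Note that nothing converges forward in time here; it is only the pullback limit, with the noise path held fixed, that produces the random fixed point.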
We will show the existence of an attractor for a class of flows φ satisfying conditions (A1) and (A2). Since attractors are defined for RDS rather than flows, we have to make sure that φ generates an RDS in an appropriate sense. This is done in the following proposition, which is proved in [1]. Strictly speaking, the set-up in [1] is formulated under slightly stronger smoothness assumptions on the coefficients of the SDE than ours, due to the fact that the authors of [1] use the Stratonovich rather than the Itô integral. It is easy to see, however, that in the Itô set-up no additional smoothness is required for the following proposition to hold.

Proposition 2.4. Let φ be the stochastic flow generated via SDE (2.1) satisfying condition (A1). Then there is an R^d-valued continuous cocycle ϕ over some MDS (Ω, F, P, (θ_t)_{t∈R}) such that the distributions of {φ_{s,t} : −∞ < s ≤ t < ∞} and {ϕ(t − s, ·, θ_s(·)) : −∞ < s ≤ t < ∞} coincide.
From now on we shall identify the flow φ with the associated RDS ϕ in view of the previous proposition. In particular, we will check condition (ii) in Proposition 2.3 with ϕ^{−1}(t, B_r, θ_{−t}ω) replaced by φ_{−t,0}^{−1}(B_r, ω), and therefore there will be no need to refer to random dynamical systems in the rest of the paper.

Main Result
In the following we denote a closed ball in R d with center x and radius r by B(x, r) and define B r := B(0, r) as before.
In particular, φ has a random attractor.

Remark.
The same number β_0 as in the theorem also appears in upper bounds for the linear growth rate of the diameter of the image of a bounded set under a flow: assume (for simplicity) that φ̃ is a flow with b = 0 satisfying (A1) and (A2). Then Theorem 2.3 together with Corollary 2.7 and Proposition 2.8 in [15] shows that the diameter of the image of a bounded set under φ̃ grows linearly with rate at most β_0. It seems plausible that adding to the flow φ̃ a drift b which satisfies (A3_β) for some negative β will reduce the linear expansion rate from β_0 to β_0 + β. In particular, one may expect that linear expansion stops completely as soon as β_0 + β is negative. Part a) of Theorem 3.1 shows that this is indeed true: in fact we get linear contraction of large balls with rate at least −β_0 − β.
For further results concerning upper and lower bounds for the growth rate of the image of a bounded set under a flow we refer the reader to [4, 5, 6, 10, 11, 16]. [14] contains an example of a flow in the plane which satisfies (A1), (A2) and (A3_β) for some β < 0 but admits no attractor (not even a weak one); this shows that Theorem 3.1 fails if β_0 is replaced by 0 when d ≥ 2. Let us briefly consider the case d = 1. In this case, part a) of Theorem 3.1 says that an attractor exists whenever (A3_β) holds for some β < 0. In fact, one can say more: if the Markov process generated by the SDE admits an invariant probability measure which is ergodic in the sense that all transition probabilities converge to it weakly (and for this to be true, condition (A3_β) does not need to hold), then the associated RDS automatically admits a weak attractor (for this and more general results on monotone RDS, see [3]).
Note that if φ is a flow which satisfies the conclusion of part a) of the theorem, then the inverse flow (i.e. the flow run backwards in time) satisfies the conclusion of part b), and vice versa (at least if φ is onto). Therefore, one could prove just one of the two statements and obtain the other via time reversal. Unfortunately, the assumptions in the two parts do not transform accordingly, due to the Stratonovich correction term, except in cases where the correction term vanishes (which happens, for example, when the driving field M is isotropic).

Proofs
Let us briefly explain the idea of the proof of part b) of Theorem 3.1 (we will explain the necessary changes for part a) later): we divide the positive time axis into increasingly long intervals [T_i, T_{i+1}] (T_0 = 0) and let R_i be an increasing sequence of positive reals. We provide an upper bound for the probability q_i that the image of B_{R_i} under φ_{T_i, T_{i+1}} does not contain B_{R_{i+1}}. We will show that the q_i are summable in case the R_i and T_i are chosen appropriately, and then apply a Borel-Cantelli argument. This is not quite enough to prove the result: we have to make sure that we can choose the R_i to grow sufficiently quickly, and we have to ensure that in between successive T_i's the image of B_{R_i} contains a slightly smaller ball for all t ∈ [T_i, T_{i+1}] with high probability.
In order to estimate the probability that the image of B R i under φ T i ,T i+1 does not contain B R i+1 , we will cover the boundary of B R i with a large number N of small balls with the same radius. We first provide an upper bound for the probability that a single point x with norm R i will be mapped to a point with norm at most R i+1 + 1 under φ T i ,T i+1 . This probability will typically be very small because the drift tends to push the trajectory away from the origin. We tune N (and the radii of the balls) such that both the probability that at least one of the centers of the N balls moves away too slowly and the probability that any of the small balls attains a diameter of size 1 before time T i+1 are small (i.e. summable over i). The required estimates for the growth of the diameter of a small ball under a flow are provided in the appendix.
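The covering step can be made concrete in d = 2, where ∂B_S is a circle and N = O(S/ξ) centers suffice; a small sketch (the constant in place of c_d and the specific values of S and ξ are illustrative):

```python
import math

def cover_circle(S, xi):
    """Centers on the circle of radius S (d = 2) such that every boundary point
    lies within distance xi of some center; N is of order (S/xi)^(d-1)."""
    N = max(1, math.ceil(math.pi * S / xi))   # angular spacing 2*pi/N
    return [(S * math.cos(2 * math.pi * k / N), S * math.sin(2 * math.pi * k / N))
            for k in range(N)]

S, xi = 100.0, 0.5
centers = cover_circle(S, xi)
# covering property, checked on a fine sample of the boundary circle
for m in range(2000):
    theta = 2 * math.pi * m / 2000
    px, py = S * math.cos(theta), S * math.sin(theta)
    assert min(math.hypot(px - cx, py - cy) for cx, cy in centers) <= xi
# the number of balls grows like (S/xi)^(d-1), here linearly in S/xi
assert len(centers) <= 4 * S / xi
```

With ξ = exp{−Γψ(S)} as in the proofs below, N grows only exponentially in ψ(S), which is what makes the union bound over the centers affordable.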
We start with a well-known lemma and then proceed with estimates on the one-point motion. We will often write φ_t instead of φ_{0,t}.

Lemma 4.1. Let (W_t)_{t≥0} be a standard Brownian motion and let W*_t := sup_{s≤t} W_s be its running maximum. Then for arbitrary c ≥ 0 and t > 0 we have P(W*_t ≥ c) = 2 P(W_t ≥ c) ≤ exp{−c²/(2t)}.

Proposition 4.2. a) If φ satisfies (A3_β), then for each |x| = R, we have the bound below, where β*(R) := inf_{|y|≥R} {y · b(y)/|y|}.
Proof. We first show a). Let h be a smooth function from [0, ∞) to [0, ∞) such that h(y) = y for y ≥ 1, 0 < h′(y) ≤ 1 for all y > 0 and h′(0) = 0, and define ρ_t(x) := h(|φ_t(x)|). Applying Itô's formula, we obtain a decomposition of ρ_t(x) into a drift term, which on {|x| ≥ R̃} can be estimated using (A3_β), and a continuous local martingale N whose quadratic variation satisfies d⟨N⟩_t ≤ σ_B² dt. Consequently, N can be represented (possibly on an enriched probability space) in the form N_t = σ_B W_{ζ(t)}, where W is a standard Brownian motion and the family of stopping times ζ satisfies ζ(s) ≤ s for all s. For |x| = R, we get the asserted estimate (using an upper index * to denote the running maximum as before), where we used Lemma 4.1 in the last step. This proves part a).
The proof of part b) is analogous to that of a): just interchange R and S and estimate f from below by β*(R) on the set {|x| ≥ R̃}.
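Lemma 4.1, used in the last step above, combines the reflection principle P(W*_t ≥ c) = 2 P(W_t ≥ c) with a standard Gaussian tail bound. A rough Monte Carlo check on a discretized path (the discrete-time maximum slightly undershoots the continuous one, so only approximate agreement is expected; all parameters are illustrative):

```python
import math
import random

def walk_stats(c=1.0, t=1.0, n_steps=400, trials=10000, seed=11):
    """Simulate Brownian paths on [0, t]; estimate P(W*_t >= c) and P(W_t >= c)."""
    rng = random.Random(seed)
    h = t / n_steps
    hit_max = hit_end = 0
    for _ in range(trials):
        w = w_max = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, math.sqrt(h))
            w_max = max(w_max, w)
        hit_max += w_max >= c
        hit_end += w >= c
    return hit_max / trials, hit_end / trials

p_max, p_end = walk_stats()
# reflection principle: P(W*_t >= c) = 2 P(W_t >= c), up to discretization bias
assert abs(p_max - 2 * p_end) < 0.05
# tail bound from Lemma 4.1: P(W*_t >= c) <= exp(-c^2 / (2t))
assert p_max <= math.exp(-1.0 / 2.0)
```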
We continue with the proof of part b) of Theorem 3.1 which is slightly easier than that of part a).
Proof. It suffices to prove the statement in case β*(R) > 0. Using the same notation as in the proof of Proposition 4.2, we obtain an estimate for |x| = S. Therefore, using a well-known formula for the law of the supremum of a Wiener process with drift (e.g. [2], p. 197), we obtain the asserted bound for T > 0, and the proposition follows.
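For a constant negative drift, the formula invoked here reduces to the classical identity P(sup_{s<∞}(W_s − μs) ≥ a) = e^{−2μa} for μ > 0. A rough Monte Carlo check (parameters illustrative; the discrete-time maximum is biased slightly downward, and the time horizon is truncated):

```python
import math
import random

def hits_level(a, mu, T, n, rng):
    """One Euler path of X_s = W_s - mu*s on [0, T]; does it ever reach level a?"""
    h = T / n
    x = 0.0
    for _ in range(n):
        x += rng.gauss(0.0, math.sqrt(h)) - mu * h
        if x >= a:
            return True
    return False

rng = random.Random(7)
a, mu, trials = 1.0, 1.0, 5000
hits = sum(hits_level(a, mu, T=20.0, n=1000, rng=rng) for _ in range(trials))
p_emp = hits / trials
p_exact = math.exp(-2 * mu * a)   # P(sup_{s<infty}(W_s - mu*s) >= a) = e^{-2*mu*a}
assert abs(p_emp - p_exact) < 0.04
```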
The following proposition is a rather easy consequence of the preceding two propositions and the results in the appendix.
Proof. We can and will assume that S > ε^{−1}. For each ξ ∈ (0, S], we can cover ∂B_S by N ≤ c_d (S/ξ)^{d−1} balls of radius ξ centered on ∂B_S, where c_d is a universal constant which depends only on the dimension d. Specifically, we let ξ = exp{−Γψ(S)} for some Γ ≥ 0. Denote the balls by M_1, ..., M_N and their centers by x_1, ..., x_N. Using the flow property and Propositions 4.2 and 4.3, we obtain a bound on p_S whose exponential rate as S → ∞ is negative if Γ exceeds the larger of the two roots of the right-hand side of (4.2), i.e.
provided that Γ − Γ_0 > 0 is sufficiently small. Therefore, we can find some Γ > Γ_0 satisfying all of these conditions, and the proof is complete.

Now we can easily complete the proof of part b) of Theorem 3.1.
Proof of Theorem 3.1 b). Let 0 < γ < β − β_0 and choose ε ∈ (0, 1/2) such that γ + ε < β − β_0. Let S_0 ≥ 2 and define recursively S_{i+1} := S_i + γψ(S_i). Define p_S as in the previous proposition. By the previous proposition, Σ_i p_{S_i} converges provided that Σ_i exp{−cψ(S_i)} converges for every c > 0, which is easily seen to be true if we take ψ(x) = x^α for some α ∈ (0, 1). Part b) of Theorem 3.1 then follows from the first Borel-Cantelli lemma and the time-homogeneity of φ.
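The summability used here can be sanity-checked numerically for ψ(x) = x^α: the recursion S_{i+1} = S_i + γψ(S_i) forces S_i to grow like i^{1/(1−α)}, so Σ_i exp{−cψ(S_i)} converges for every c > 0. A small check (the values of α, γ and c below are illustrative):

```python
import math

# the radii recursion from the proof, with psi(x) = x^alpha
alpha, gamma, c = 0.5, 1.0, 0.1
psi = lambda x: x ** alpha

S, total = 2.0, 0.0
terms = []
for i in range(5000):
    terms.append(math.exp(-c * psi(S)))
    total += terms[-1]
    S += gamma * psi(S)

# for alpha = 1/2 the radii grow quadratically in i, so the series converges:
# the last 100 terms contribute essentially nothing to the partial sum
assert S > 1e6
assert sum(terms[-100:]) < 1e-6
assert total < 30.0
```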
We now provide the proof of part a) of Theorem 3.1. It is partly analogous to the previous one with the exception that it does not seem to be obvious how to prove the analog of Proposition 4.3. The following two propositions provide additional estimates for the one-point motion.
Proposition 4.5. Let φ be a flow satisfying conditions (A1), (A2) and (A3_β) for some β < 0 and let V > 1 satisfy β*(V) ≤ 0. Then, for each S ≥ R ≥ V and x ∈ R^d, we have the bound below.

Proof. Define ρ, N and W as in the proof of Proposition 4.2. Then the asserted estimate follows, and the proof of the proposition is complete.
We remark that the statement of the previous proposition can be sharpened if we do not drop the drift term in the proof (see [15], Proposition 2.8), but the statement above suffices for our purposes. We will also need the following result.
Proof. Arguing as in the proof of Proposition 4.2, and then as in the proof of Proposition 4.5, using Lemma 4.1 and the fact that R̃ ≥ V, the assertion follows.
Proof of Theorem 3.1 a). For the reader's convenience, we start by collecting the assumptions and notation used in the proof. Let ε ∈ (0, 1/2) and γ > 0 be as required below. Further, let V > 1 be a fixed number (not depending on T) such that β*(V) ≤ 0. Since we are only interested in asymptotic statements as T → ∞, we can and will assume that R̃ > V. Once we know that lim sup_{S→∞} (1/ψ(S)) log p_S < 0, Theorem 3.1 a) will follow just like part b). To estimate A_1, we cover ∂B_R by N ≤ c_d R^{d−1} e^{Γ(d−1)T} balls of radius e^{−ΓT} centered on ∂B_R, and we obtain the required bound on the lim sup for an appropriate choice of Γ ≥ 0, as in the proof of Proposition 4.4 (using part a) of Proposition 4.2 instead of part b)).
To finish the proof of Theorem 3.1, it suffices to prove that the remaining term also decays at a negative exponential rate. Define Y_s := sup_{|x|=R̃} (|φ_{sT}(x)| − R̃)_+ and Z_s := sup_{u∈[s−1,s]} Y_u. We treat the two terms in the last sum separately, starting with the second one: we want to show (4.3). To this end, fix 0 ≤ s ≤ T, abbreviate R̂ := R̃ + T^{1/α}ε/2 and cover the boundary ∂B_{R̂} by N ≤ c_d R̂^{d−1} e^{Γ(d−1)T} balls of radius e^{−ΓT} centered on ∂B_{R̂}, for some Γ > 0 (the constant c_d can be chosen to depend on d only). Number the balls B_1, ..., B_N and their centers x_1, ..., x_N. Estimating the three resulting summands using Proposition 4.2 a), Proposition 4.5 and Theorem 5.1, respectively, we obtain (4.3) by letting Γ → ∞ (after taking the lim sup over T).
It remains to show (4.4). Proving this is not entirely straightforward. One might try to proceed as (by now) usual by covering ∂B_{R̃} × [s − 1, s] by small balls and controlling the diameter of their images at time s and the norm of the images of their centers at time s. One of the obstructions to this approach is that we have no uniform control of the component of the drift b towards the origin, i.e. the norm of the solution process can drop considerably within a very short time (resulting in an uncontrollable increase of the diameter of a small space-time ball within a short time). What we can control, thanks to assumption (A3_β), is the speed away from the origin. Therefore we proceed as follows. Applying Lemma 5.4 with ε_j := c/j², j ∈ N (where c = 6/π², so that Σ_j ε_j = 1), we bound the probability of the supremum event by a sum over j. To estimate the probabilities in this sum, we cover the boundary ∂B_{R̃} by small balls as before. For fixed j, t and |x| = R̃, let x̂ be the projection of φ_{t,t+2^{−j}}(x) onto ∂B_{R̃} (there will be no need to worry about the possible non-uniqueness of x̂). For u ≥ 0, we obtain two terms to estimate, and we start with the second one. Recalling that δ = T^{κ/2} 2^{−3j/2} and assuming that |x| = R̃, we obtain (4.6). Again, we have two terms to estimate. By Proposition 4.6, the first one can be bounded by estimating the infinite sum Σ_{j≥1} p_j from above by the geometric series p_1 Σ_{j≥0} (p_2/p_1)^j. Since κ > ν > 1, this term converges to zero superexponentially in T.
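The choice c = 6/π² makes the chaining weights ε_j = c/j² sum to one, since Σ_{j≥1} 1/j² = π²/6; a one-line numerical check:

```python
import math

# the chaining weights eps_j := c / j^2 with c = 6 / pi^2 sum to one over j in N
c = 6.0 / math.pi ** 2
total = sum(c / j ** 2 for j in range(1, 200001))
assert abs(total - 1.0) < 1e-4   # truncation error is about 6 / (pi^2 * 200000)
```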
Next, we estimate the second term in (4.6). Lemma 5.3 and Lemma 4.1 show that, for y, z ∈ R^d with |y − z| ≤ δ, we have (4.7), and, for |y| = R̃ and |y − z| ≤ δ, we have (4.8). We use (4.7) for j ≥ T and (4.8) for j ≤ T, and assume that T ≥ 1 is so large that (j log(3/2))/4 ≥ −log(cε/(4e^λ j²)) holds for all j ≥ T. Applying Proposition 4.6, we obtain, for T sufficiently large, a bound in which we evaluate the geometric series and estimate the sum by T times the largest (namely the last) summand; we see that the whole expression decays superexponentially in T. Finally, we estimate the first term in (4.5). Applying Theorem 5.1 a) with q = d + 1 and Lemma 5.3, we get a bound which decays to zero superexponentially as T → ∞ (here h_d depends on the parameters of the SDE and on ε but not on T). Therefore, (4.4) follows and the proof of Theorem 3.1 is complete.

Appendix
To prove Theorem 3.1, we need the following result. Part b) of the following theorem is also contained in [15]. We provide its proof for the reader's convenience (and because it is short).
Theorem 5.1. Let (t, x) ↦ φ_t(x), (t, x) ∈ [0, ∞) × R^d, be a continuous random field taking values in a separable complete metric space (E, ρ). Assume that there exist numbers Λ ≥ 0, σ > 0 and c̃ > 0 such that for each x, y ∈ R^d, T > 0 and q ≥ 1 we have the moment bound below.
a) For each cube X with side length ξ, T > 0, u > 0 and κ ∈ (0, 1 − d/q), we have the bound below on P(sup ...).
b) For each u > 0, we have the bound below, where sup_{X_T} means that we take the supremum over all cubes X_T in R^d with side length exp{−γT}.
The proof of Theorem 5.1 relies on the following (quantitative) version of Kolmogorov's continuity theorem which is proved in [15].
Remark. If, in addition to the assumptions in Theorem 5.1, the map x → φ t (x) is one-to-one for all t and ω, then part b) of Theorem 5.1 holds with d − 1 replaced by d in the definition of I(γ) since we can apply Lemma 5.2 to each of the faces of X T and the supremum over x, y ∈ X T is attained for x, y on the boundary of X T .
The following lemma is almost identical to Lemma 4.1 in [15] and Lemma 5.1 in [6]. We provide its proof, since our assumption (A1) is slightly weaker (in some respects) than in those references.

Proof. Fix x, y ∈ R^d with x ≠ y and define D_t := φ_t(x) − φ_t(y), Z_t := (1/2) log(|D_t|²).
In the proof of part a) of Theorem 3.1 we need the following one-sided Chaining Lemma (without absolute values).