A Stochastic Two-point Boundary Value Problem

We investigate the two-point stochastic boundary-value problem on [0, 1]:

U″ = f(U) Ẇ + g(U, U′), U(0) = ξ, U(1) = η, (0)

where Ẇ is a white noise on [0, 1], ξ and η are random variables, and f and g are continuous real-valued functions. This is the stochastic analogue of the deterministic two-point boundary-value problem, which is a classical example of bifurcation. We find that if f and g are affine, there is no bifurcation: for any r.v.'s ξ and η, (0) has a unique solution a.s. However, as soon as f is non-linear, bifurcation appears. We investigate the question of when there is no solution whatsoever, a unique solution, or multiple solutions. We give examples to show that all these possibilities can arise. While our results involve conditions on f and g, we conjecture that the only case in which there is no bifurcation is when f is affine.


Introduction
This problem arose from an attempt to solve the stochastic Dirichlet problem on a nice domain D in R^2 with multiplicative white noise:

∆U(x, y) = f(U(x, y)) Ẇ(x, y), (x, y) ∈ D,
U(x, y) = 0, (x, y) ∈ ∂D,
where f is a smooth function and Ẇ is a white noise on R^2. Early calculations showed that something goes very wrong for most f's, and we decided to look at a simpler situation to understand why. The simplest non-trivial setting of this problem is in one dimension, where it becomes the two-point boundary-value problem:

U″ = f(U) ∘ Ẇ + g(U, U′), U(0) = ξ, U(1) = η, (1.1)

where Ẇ is a white noise on [0, 1], ξ and η are random variables, and the little circle indicates Stratonovich integration.
It was immediately clear why the two-dimensional problem was pathological: even in the simplest non-stochastic setting, this problem exhibits bifurcation, and the two-dimensional problem can be expected to be much worse. Indeed, consider the non-stochastic problem

u″(t) = λ g(u(t)), 0 < t < 1, u(0) = u(1) = 0, (1.2)

where g(0) = 0 and g′(0) > 0. This always has the solution u ≡ 0, but as λ increases, it bifurcates, and at certain λ there are multiple nontrivial solutions.
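The bifurcation points can be located by a standard linearization at u ≡ 0 (a computation added here for orientation, not taken from the paper). Writing μ = λg′(0), the linearized problem is

```latex
u''(t) = \mu\, u(t), \qquad u(0) = u(1) = 0, \qquad \mu = \lambda g'(0),
```

which has the nontrivial solutions u_n(t) = sin(nπt) exactly when μ = −n^2π^2; these values of μ are where nontrivial branches of (1.2) can appear.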
The question is then how the presence of noise affects this bifurcation. It turns out that it regularizes the problem to a certain extent. If f and g are affine, for instance, and f does not vanish, the solution of (1.1) exists and is a.s. unique. However, as soon as the coefficients are non-linear, bifurcation seems to re-appear: there will be a positive probability of multiple solutions. ("Bifurcation" may not be the correct word here: there is no parameter λ in the problem, but its place is essentially taken by the stochastic variable ω.) The deterministic version of the problem (where f ≡ 0) has been extensively studied; see [8] and the references therein. Variations of the stochastic problem have been studied before by several authors. Ocone and Pardoux [7], generalizing and extending work of Krener [2], considered systems of SDEs with general linear boundary conditions and affine functions f and g. Under some non-degeneracy conditions, they showed that there exists a unique solution which satisfies the sharp Markov property.
Dembo and Zeitouni [1] studied the corresponding Neumann problem and showed the existence of a global solution for f and g satisfying certain properties.
Nualart and Pardoux [4] looked at the case where f was constant, and showed that under certain conditions on g there exists a unique global solution. Moreover, under these conditions, the solution has the sharp Markov field property if and only if g is affine.
In this paper we concentrate on the case where f is not constant. It is rather rare to have a unique global solution: basically, that happens only when f is affine. Existence is more common: if f and g are asymptotically linear, for instance, a global solution exists, but it may not be unique unless f and g are actually affine. We give an example in which there is no global solution at all, and others in which there are sets on which there is one solution, exactly two solutions, or many solutions.
The equations (1.1) are shorthand for integral equations. (White noise is just the derivative, in the distribution sense, of Brownian motion. We use it here to write the differential equation in an intuitive form, but it is the integral equation which has rigorous meaning.) It is convenient to state the problem as a system by letting V = U′:

U_t = ξ + ∫_0^t V_s ds,
V_t = Y + ∫_0^t f(U_s) ∘ dW_s + ∫_0^t g(U_s, V_s) ds, 0 ≤ t ≤ 1,
U_1 = η, (1.3)

where {W_t, t ≥ 0} is a standard Brownian motion, the circle denotes Stratonovich integration, and Y is the (unknown) value of V_0 = U′_0. Here, the random variables ξ and η are given, but Y, which is the initial value of U′, is not.
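For numerical intuition only, the initial-value form of this system can be simulated pathwise. The sketch below uses a crude Euler scheme with a midpoint predictor for the noise term (a heuristic approximation to the Stratonovich interpretation when f is smooth); the coefficients, step count, and seed are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate(f, g, xi, y, n=1000, seed=0):
    """Euler scheme for U' = V, V' = f(U) dW + g(U, V) dt on [0, 1].

    The noise increment is applied at a predicted midpoint value of U,
    a heuristic for the Stratonovich interpretation when f is smooth.
    """
    rng = random.Random(seed)
    dt = 1.0 / n
    u, v = xi, y
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        u_mid = u + 0.5 * v * dt          # predictor for U at mid-step
        v_new = v + f(u_mid) * dw + g(u, v) * dt
        u_new = u + 0.5 * (v + v_new) * dt
        u, v = u_new, v_new
    return u, v

# Sanity check: with f = 0 and g = 0 the equation is U'' = 0,
# so U_1 = xi + y (up to rounding).
u1, v1 = simulate(lambda x: 0.0, lambda x, y: 0.0, xi=0.3, y=0.5)
```

Varying y here is exactly the shooting idea used later: the boundary-value problem asks for a y with U_1 = η.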
Allowing g to depend on V as well as on U causes no extra difficulty; allowing f to depend on V , however, would fundamentally alter the problem.
All these statements of the problem are equivalent. Thus, if Ω_0 is a measurable set with P{Ω_0} > 0, we say that a pair (U, V) of processes is a local solution of (1.1) (or, equivalently, of (1.3)) on Ω_0 if it satisfies (1.3) a.e. on Ω_0. We say it is a global solution if we may take P{Ω_0} = 1. Similarly, we say that a process U is a local (resp. global) solution if there is a process V such that the pair (U, V) is a local (resp. global) solution. (Notice that we use the words "local" and "global" in a slightly non-standard sense: they refer to the probability space, not the parameter space.)

The Main Results
Let (Ω, F, P) be a probability space and let (F_t) ⊂ F be a right-continuous filtration. Let {W_t, t ≥ 0} be a standard Brownian motion adapted to (F_t).
Here are the main results of the paper. First, in the linear case, we have the following existence and uniqueness result.

Remark 2.2 If f ≡ 0, the equation is deterministic and the second condition just rules out the well-known case in which uniqueness fails. Notice that once f is non-trivial, the white noise regularizes the problem, i.e. there is always a unique global solution.
Moving on to the non-linear case, we look at the existence of global solutions. They will always exist if f and g are asymptotically linear.

Theorem 2.3 Suppose f and g are differentiable, and their first (partial) derivatives are bounded and Hölder continuous. Suppose further that there exist constants a, c, and d such that (2.4) holds, and either a ≠ 0, or a = 0 and c + Then for any random variables ξ and η, (1.3) has at least one global solution.

Remark 2.4
This covers a fairly large class of coefficients. If f and g are bounded, or if they just increase more slowly than linearly, then a, c, and d will vanish, and the theorem implies that there is always at least one global solution.
There are no adaptedness restrictions on the random variables ξ and η in Theorem 2.3. In fact there is a trade-off between adaptedness restrictions and regularity conditions on the coefficients. If we assume some adaptedness of ξ and η, we can weaken the regularity hypotheses somewhat.
Theorem 2.5 Suppose f and g are Lipschitz continuous functions satisfying (2.4). Then for any F_0-measurable r.v. ξ and any random variable η, (1.3) has at least one global solution.
Remark 2.6 By time-reversal, the same conclusion holds if ξ is arbitrary and η is independent of F_1.
Let us now look at some conditions which ensure local existence and uniqueness. In the deterministic case, the two-point boundary-value problem has a unique solution if the interval in question is small enough. In the stochastic case, this remains true if the interval is short and the Brownian motion is small. Again there is a trade-off, this time between the length of the interval and the Lipschitz constants of f and g. Thus, let us pose the problem on the interval [0, T] for a fixed T > 0.
The first theorem shows that if f is relatively smooth and g is Lipschitz with a small enough Lipschitz constant, there exists a unique local solution.
Theorem 2.7 Let f ∈ C^1 with f′ Lipschitz, and suppose g is Lipschitz with Lipschitz constant L_g satisfying (3T/2 + T^2/8)L_g < 1. Then there is a set Ω_0 ⊂ Ω of positive probability such that for any random variables ξ and η, (2.5) has a unique solution on Ω_0. If f and g are fixed, P{Ω_0} → 1 as T → 0.

Remark 2.8
Only the Lipschitz constant of g appears explicitly. The Lipschitz constant of f is implicit: Ω_0 will be a set on which W_t is small. The smaller the Lipschitz constants of f and f′, the larger W_t can be.
If the solution is not unique, it is natural to ask how many solutions there can be. We will restrict ourselves to the case where g ≡ 0. Consider the problem

U″ = f(U) ∘ Ẇ, U(0) = x_0, U(1) = x_0. (2.6)

Theorem 2.9 Suppose that f ∈ C^1, f has at least one zero, and f′ is Lipschitz continuous.
(i) If f is not an affine function, then there exists a point x_0 such that the problem (2.6) has at least two global solutions.
(ii) Suppose that f has an isolated zero at x_0. Let ρ > 0 be the distance from x_0 to the nearest other zero of f. If f is not affine on (x_0 − ρ, x_0 + ρ), then for each integer k ≥ 1 there exists a set Ω_k ⊂ Ω of positive probability such that on Ω_k there are at least k distinct local solutions of (2.6).
If we put some small conditions on f, we can show that there is global existence and uniqueness for the two-point boundary value problem if and only if f is affine. We conjecture that this is true in general, with no added conditions on f.

Theorem 2.10 Let f be a differentiable function with bounded derivative. Suppose that f has at least one zero, is not identically zero, and its derivative is Lipschitz continuous.
Then the equations (2.7) have a unique global solution if and only if f is affine.

We do not know how Nualart and Pardoux's results [4] extend to non-constant f, but Theorems 2.9 and 2.10 strongly suggest that the sharp Markov property will fail for the solution(s) U_t of (2.5) if f is not affine. In fact, Theorem 2.9 shows that U_t cannot have a transition probability, i.e. the conditional distribution of {U_t, t ∈ [a, b]} is not determined by U_a and U_b. Indeed, if it were, the solution to (2.5) would have to be unique in law. (Take [a, b] = [0, 1], so that U_a and U_b are deterministic, and the conditional and unconditional distributions are the same.) But it is not: by Theorem 2.9 there are non-trivial solutions as well as the constant solution U_t ≡ x_0.
Here is a class of equations for which there are no global solutions.

Theorem 2.12 Suppose f and g are Lipschitz, and that f′(x) exists for large enough x. Suppose there exist constants a_±, b_± and c_± such that certain asymptotic relations hold uniformly in y. If either a_− ≠ a_+, or else a_− = a_+ and 4b_− + c_−^2 ≠ 4b_+ + c_+^2, then there exists a set Ω_0 of positive probability and an unbounded interval (α, β) such that (1.3) with the boundary conditions U_0 = ξ = 0 and U_1 = η ∈ (α, β) has no solution on Ω_0.

The Initial Value Problem
A standard method of solving the two-point boundary-value problem is to pose it as an initial value problem, and then vary the initial derivative to get the desired value at the end point. Accordingly, we will consider the initial-value problem, which is equivalent to the system of integral equations

U_t = x + ∫_0^t V_s ds,
V_t = y + ∫_0^t f(U_s) ∘ dW_s + ∫_0^t g(U_s, V_s) ds. (3.9)

Suppose that both f and g are Lipschitz continuous functions. Evidently V is locally integrable (otherwise the equations don't make sense), so that U must be absolutely continuous. Since f is Lipschitz, t → f(U_t) is again absolutely continuous, hence a.e. differentiable, and it is easy to verify that for a.e. t its derivative is f′(U_t)V_t. Thus, following [5], we can integrate by parts in the Stratonovich integral to see that (3.8) is equivalent to

U_t = x + ∫_0^t V_s ds,
V_t = y + f(U_t)W_t − ∫_0^t f′(U_s)V_s W_s ds + ∫_0^t g(U_s, V_s) ds. (3.10)

One consequence of this is that the initial-value problem can be solved omega-by-omega: the exceptional null sets involved in the definition of the stochastic integrals do not depend on the initial conditions. Denote the solutions of (3.10) by U_t(x, y) and V_t(x, y). We then see from Kunita's theorem [3] that

Theorem 3.2 (Kunita) If f and g are Lipschitz, then for a.e. ω we have (a) the pair (U_t(x, y, ω), V_t(x, y, ω)) is continuous in (x, y, t); (b) for all t, (x, y) → (U_t(x, y, ω), V_t(x, y, ω)) is a homeomorphism of R^2 onto itself.
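The integration by parts used to pass from (3.9) to (3.10) is just the ordinary product rule, which Stratonovich integrals obey. Explicitly, since W_0 = 0 and (d/ds)f(U_s) = f′(U_s)V_s a.e.,

```latex
f(U_t)\,W_t = \int_0^t f(U_s)\circ dW_s + \int_0^t f'(U_s)\,V_s\,W_s\,ds ,
\qquad\text{so}\qquad
\int_0^t f(U_s)\circ dW_s = f(U_t)\,W_t - \int_0^t f'(U_s)\,V_s\,W_s\,ds .
```

The right-hand side contains no stochastic integral, which is what allows the equation to be solved ω by ω.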
Moreover, if f and g are n-times Lipschitz-continuously differentiable, then U_t(x, y) and V_t(x, y) are n-times continuously differentiable in (x, y), and if p and q are positive integers with p + q ≤ n, and if D^{p,q} = ∂^{p+q}/∂x^p ∂y^q, Ū = D^{p,q}U(x, y) and V̄ = D^{p,q}V(x, y), then Ū and V̄ satisfy the formally differentiated equations.

We say that (U_t(x, y), V_t(x, y)) is a flow of homeomorphisms. This flow of homeomorphisms will be adapted if the initial values are deterministic. Moreover, as the integrands are absolutely continuous, the stochastic integrals in (3.10) can be interpreted as either Stratonovich or Ito integrals; their values will be the same either way. However, if the initial values are random, the flow need not be adapted and the Ito integrals may no longer be defined. Nevertheless, they will still be well-defined as Stratonovich integrals, and we have

Proposition 3.3 Let f and g be Lipschitz continuous functions, and let X and Y be random variables. Then there exists a solution (U, V) of (3.10), and, moreover, with probability one, (U_t, V_t) = (U_t(X, Y), V_t(X, Y)) is a solution, where {(U_t(x, y), V_t(x, y)), t ≥ 0, (x, y) ∈ R^2} is the flow generated by (3.8). If, moreover, f′ is Lipschitz continuous, then this solution is unique.

Remark 3.4
There is a nuance here. We know from Kunita's theorem that the flow is unique among adapted flows. However, we need it to be unique among all flows, including non-adapted ones. Since (3.10) is an ordinary integral equation, no stochastic integrals enter, so that if we choose ω so that t → W_t(ω) is continuous, the theory of (non-stochastic) integral equations guarantees that the solution will be unique if f′ and g are Lipschitz. This unique solution necessarily coincides with the solution guaranteed by Kunita's theorem.
The linear case: proof of Theorem 2.1

We will prove the existence and uniqueness of the solution of the two-point boundary value problem in the linear case. This has been treated by Ocone and Pardoux [7] in a setting which is more general than ours in some respects, but which requires that the boundary values be regular in a Malliavin-calculus sense. These conditions turn out to be unnecessary in our case. As our result is rather simple, we will give a separate proof here.
Let f and g be affine, say f(x) = αx + β and g(x, y) = γx + κy + λ, where α, β, γ, κ and λ are real numbers. Then the two-point boundary value problem (1.1) becomes

U″ = (αU + β) ∘ Ẇ + γU + κU′ + λ, U(0) = ξ, U(1) = η.

We will use the initial value problem to solve this. Set U_0 = ξ. To show the existence of a solution, we must prove that we can choose V_0 to make U_1 = η. To show uniqueness, we must show there is only one possible value of V_0.
Let X and Y be random variables, and consider the initial value problem (3.9) with random initial conditions:

U_t = X + ∫_0^t V_s ds,
V_t = Y + ∫_0^t (αU_s + β) ∘ dW_s + ∫_0^t (γU_s + κV_s + λ) ds. (4.12)

By Proposition 3.3 there is a solution of this initial-value problem for any X and Y, and it is unique.
Since the problem is linear, we can decompose any solution of (4.12) into the sum of a particular solution plus a linear combination of solutions of the associated homogeneous equation. We will simplify our notation slightly by writing Z = (U, V). Thus any solution Z of the system (4.12) can be written uniquely in the form

Z_t = Z_t^p + X Z_t^{10} + Y Z_t^{01}, (4.13)

where Z^p is a solution of (4.12) with initial value Z^p_0 = (0, 0), and Z^{10} and Z^{01} are solutions of the associated homogeneous equation (i.e. (4.12) with β = λ = 0) with initial values Z^{10}_0 = (1, 0) and Z^{01}_0 = (0, 1) respectively. Since Z_0 = (X, Y), evidently U_1 = U_1^p + XU_1^{10} + YU_1^{01}, so that as long as U_1^{01} ≠ 0 we can set X = ξ and solve for the unique Y which makes U_1 = η. It follows that we have a unique global solution if and only if P{U_1^{01} = 0} = 0. We have two cases. First suppose α = 0, which is the deterministic case. The equation for U_t^{01} becomes U″ = γU + κU′ with initial conditions U_0 = 0, U′_0 = 1. We can solve this explicitly and determine exactly when U_1^{01} = 0. Otherwise, if α ≠ 0, the process Z_t is a diffusion process, and by Hörmander's theorem we see that Z_1 has a density in R^2, and hence the probability that Z_1 lies on the axis x = 0 is zero. This proves Theorem 2.1.
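The decomposition (4.13) can be checked numerically in the deterministic case α = 0. The sketch below (all parameter values are illustrative assumptions, not from the paper) integrates u″ = γu + κu′ + λ by Euler's method and verifies that the solution with initial data (x, y) is the particular solution plus x and y times the two homogeneous solutions.

```python
def solve_linear(x, y, gamma=1.0, kappa=0.5, lam=0.25, n=200):
    """Euler scheme for u'' = gamma*u + kappa*u' + lam on [0, 1],
    with u(0) = x, u'(0) = y.  Returns (u(1), u'(1))."""
    dt = 1.0 / n
    u, v = x, y
    for _ in range(n):
        # simultaneous update: right-hand sides use the old (u, v)
        u, v = u + v * dt, v + (gamma * u + kappa * v + lam) * dt
    return u, v

# particular solution Z^p and the homogeneous solutions Z^10, Z^01
up, vp = solve_linear(0.0, 0.0)
u10, v10 = solve_linear(1.0, 0.0, lam=0.0)
u01, v01 = solve_linear(0.0, 1.0, lam=0.0)

# superposition: Z_1(x, y) = Z^p_1 + x Z^10_1 + y Z^01_1
x0, y0 = 0.7, -1.2
u1, v1 = solve_linear(x0, y0)
```

Because the Euler map is affine in the state, the identity holds for the discretized solutions as well, up to rounding.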

Existence of global solutions
One way to show the existence of a solution of (1.3) is to consider the vertical line L_0 ≡ {(x, y) : x = ξ} in the plane, and to follow its image L_t under the flow of homeomorphisms. Since the flow is a homeomorphism, L_t must be a non-self-intersecting continuous curve. Take t = 1. If L_1 has a non-empty intersection with the line {x = η}, then there exists a y such that U_1(ξ, y) = η. Thus (1.3) has a solution if and only if L_1 intersects the line {x = η}, and it has a unique solution if and only if this intersection is a singleton. In particular, since L_1 is continuous, we have

Lemma 5.1 A sufficient condition for the existence of a global solution of (1.3) is that the projection of L_1 on the x-axis is the whole axis.
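In the deterministic case this picture can be implemented directly: integrate the initial-value problem and vary the initial slope y until U_1(ξ, y) = η. A minimal sketch (the coefficient g, the bisection bracket, and all numbers are illustrative assumptions, not from the paper):

```python
import math

def shoot(y, g, xi=0.0, n=1000):
    """Integrate u'' = g(u, u') on [0, 1] with u(0) = xi, u'(0) = y
    by the explicit Euler scheme; return u(1)."""
    dt = 1.0 / n
    u, v = xi, y
    for _ in range(n):
        u, v = u + v * dt, v + g(u, v) * dt
    return u

def solve_bvp(g, xi, eta, lo=-10.0, hi=10.0, iters=60):
    """Bisection on the initial slope y so that u(1) = eta.
    Assumes u(1; y) - eta changes sign between lo and hi."""
    f_lo = shoot(lo, g, xi) - eta
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f_mid = shoot(mid, g, xi) - eta
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative coefficient (not from the paper): g(u, v) = -0.5 sin u.
y_star = solve_bvp(lambda u, v: -0.5 * math.sin(u), xi=0.0, eta=0.5)
```

Non-uniqueness in the nonlinear case corresponds exactly to the curve y → U_1(ξ, y) crossing the level η more than once.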
It is worthwhile to see how this works in the linear case. If f and g are affine functions, then by (4.13) the image of the line L_0 is L_1 = {Z_1^p + ξZ_1^{10} + yZ_1^{01} : y ∈ R}, which is again a line. Unless it is vertical, it will have a unique intersection with {x = η}, and it can only be vertical if U_1^{01} = 0. Except for trivial deterministic cases, this happens with probability zero.
Turning to the non-linear case, let us prove Theorems 2.3 and 2.5. We must compare the solutions of two initial-value problems. Let f, g be Lipschitz functions, let f̂ and ĝ be linear functions, and consider the solutions of the two corresponding systems (5.14) and (5.15). We will need two lemmas.

Lemma 5.2 Suppose f and g are Lipschitz continuous, with Lipschitz constants dominated by L, and suppose f̂ and ĝ are affine. Put (5.16).

Proof. From (5.14) and the Schwarz inequality we bound the first term, and similarly the second. Now the initial conditions are adapted, hence so is the solution, so the stochastic integral is an Ito integral. We compute its expected square as usual; this implies (5.16) by Gronwall's inequality.
Lemma 5.3 Suppose that f and g are differentiable with bounded Lipschitz continuous derivatives. Let T > 0 and M > 0. Then there exist constants C and D, depending on T, M, and the functions f, g, f̂, and ĝ, but not on x or y, such that the following estimate holds.

Proof. Since f̂ and ĝ are linear, not just affine, (4.13) tells us that Ẑ_t(x, y) = x Ẑ_t(1, 0) + y Ẑ_t(0, 1), so Ẑ_t(x, y) − Ẑ_t(0, y) = x Ẑ_t(1, 0), and (5.18) follows. Now Ẑ_t(0, y) = y Ẑ_t(0, 1), so by Lemma 5.2 there exists a constant c such that (5.20) holds. Notice that the coefficients of (5.20) are bounded (because the derivatives of f and g are), and the initial conditions are independent of x and y. Thus it is easy to see that there is a function D(t) which is independent of x and y such that (5.21) holds, and the conclusion follows by combining (5.18), (5.19), and (5.21).

Proof of Theorem 2.3
By Lemma 5.1 it is enough to show that the projection of L_1 is R, or equivalently, that inf_y U_1(ξ, y) = −∞ and sup_y U_1(ξ, y) = ∞. We will do this by comparing the solution to the solution Û of the linear equation, i.e. of (1.3) with f and g replaced by f̂(x) = ax and ĝ(x, y) = cx + dy. It is enough to prove this for the case where ξ is bounded, say |ξ| ≤ M. Consider the relevant expectation: its first term converges to zero boundedly as |y| → ∞ by hypothesis, and the same is true of the second term. Thus the expectation tends to zero as |y| → ∞, which gives the conclusion.

Proof of Theorem 2.5
Suppose now that the initial condition on U is adapted, i.e. that ξ ∈ F_0. In this case, if we consider the initial value problem, we can condition on the value of ξ: given ξ = x, U_t(ξ, y) ≡ U_t(x, y), and we can use Lemma 5.2, which has weaker hypotheses than Lemma 5.3. We must show that, conditional on ξ, inf_{y∈R} U_1(ξ, y) = −∞ and sup_{y∈R} U_1(ξ, y) = ∞ with probability one.

The Green's function
The Green's function for the two-point boundary-value problem on [0, T] is

G(t, s) = s(t − T)/T, 0 ≤ s ≤ t; G(t, s) = t(s − T)/T, t ≤ s ≤ T.

For an integrable (possibly random) function h on [0, T], define Gh(t) = ∫_0^T G(t, s)h(s) ds, and define the analogous operator on stochastic integrands.

Proof. The proof is nearly the same in both cases: we simply integrate by parts. In (i), set J(s) = ∫_0^s h(u) du and integrate by parts: the first two terms vanish, and we can write the last integral out explicitly and differentiate it to see (i). In (ii), set K(s) = ∫_0^s Z_u ∘ dW_u, and integrate the Stratonovich integral by parts: the first two terms on the right-hand side vanish, and we can again write out the integral explicitly and differentiate it.
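The standard Green's function for u″ = h with u(0) = u(T) = 0 is G(t, s) = s(t − T)/T for s ≤ t and t(s − T)/T for s ≥ t, and its defining property can be checked numerically. The sketch below is illustrative (the node count is arbitrary); for h ≡ 1 the closed form is u(t) = t(t − T)/2.

```python
def green(t, s, T=1.0):
    """Green's function for u'' = h, u(0) = u(T) = 0."""
    return s * (t - T) / T if s <= t else t * (s - T) / T

def apply_green(h, t, T=1.0, n=4000):
    """Midpoint-rule approximation of (Gh)(t) = int_0^T G(t, s) h(s) ds."""
    ds = T / n
    return sum(green(t, (i + 0.5) * ds, T) * h((i + 0.5) * ds)
               for i in range(n)) * ds

# For h = 1 the closed form is u(t) = t(t - T)/2, so u(1/2) = -1/8.
u_half = apply_green(lambda s: 1.0, 0.5)
```

Note that Gh automatically satisfies the zero boundary conditions, since G(0, s) = G(T, s) = 0.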
It is now clear (and quite well-known) that u(t) ≡ Gh(t) satisfies u″ = h a.e., with u(0) = u(T) = 0. This leads to another statement of the stochastic problem.

Proposition 6.2 Let f and g be Lipschitz continuous functions on R and R^2 respectively, and let {U(t), 0 ≤ t ≤ T} be a process which is a.s. absolutely continuous as a function of t. Then (1.3) is equivalent to (6.25).

Proof. Plug this into (6.27) and combine terms to get (6.25). Conversely, if U is absolutely continuous and satisfies (6.25) a.e. on Λ, then Lemma 6.1 shows it must also satisfy (1.3) a.e. on Λ.

Proof of Theorem 2.7
Note first that it is enough to prove this for the case where ξ and η are bounded, say |ξ| + |η| ≤ M for some M > 0. Since f′ is bounded and Lipschitz, we let L_f be a common bound for |f′| and the Lipschitz constant of f′. (So L_f is a common Lipschitz constant for both f and f′.) Define the Picard iterates (U^n, V^n) and integrate the Stratonovich integral by parts; by Lemma 6.1 we get one estimate, while on the other hand we get a second. Now ‖Z‖ ≤ ‖U‖ + ‖V‖, so, adding the two, we get a bound on the iterates. By hypothesis, (3T/2 + T^2/8)L_g < 1, so we can choose ε > 0 and δ_1 > 0 small enough to make the resulting constant less than one. If necessary, replace M by max{M, 3T/2 + T^2/8}. Now |U_0| and |V_0| are both bounded by M, so if we choose δ_1 small enough that (2 + T/2)|f_0|δ_1 ≤ M, the bound holds for n = 0; it follows by induction that it holds for all n.

Now consider the increments ∆Z^n = Z^{n+1} − Z^n, using the usual notation. From the Lipschitz conditions we can bound ∆f and ∆g, and, adding these two estimates, we can choose δ_2 > 0 small enough so that δ_2 ≤ δ_1 and the iteration contracts. If ω ∈ Ω_0, the sequence (Z^n_t(ω)) converges uniformly for 0 ≤ t ≤ T to a limit Z_t(ω), and we can go to the limit on both sides of (6.30), (6.31), and (6.32) to see that Z satisfies (6.25) and hence (1.3) on Ω_0.
To see uniqueness, suppose that (U^1, V^1) and (U^2, V^2) are two solutions. The difference U^2 − U^1 satisfies an equation with the same right-hand side as (6.35) with n = 2; V^2 − V^1 satisfies a similar equation, and if we repeat the calculations above, we see that on Ω_0 the two solutions coincide.

Finally, to see what happens for small T, let T → 0 and note that for any Lipschitz g we can find T_0 > 0 such that for T ≤ T_0, (3T/2 + T^2/8)L_g < 1/4. We then choose δ_1 > 0 small enough so that the bounds above hold. Thus by (6.34), T ≤ T_0 and ‖W‖ ≤ δ_1 imply the bound for all n; hence by (6.37), T ≤ T_0 and ‖W‖ ≤ δ_2 imply that ‖∆Z^{n+1}‖ ≤ (3/4)‖∆Z^n‖. As we saw, this implies existence and uniqueness, so that if we set Ω_{0T} = {ω : sup_{0≤t≤T} |W_t(ω)| < δ_2}, the two-point boundary value problem will have a unique local solution on Ω_{0T}. But P{Ω_{0T}} = P{sup_{0≤t≤T} |W_t| < δ_2}, which tends to one as T → 0.

Counterexamples
One way of constructing counterexamples is to work from the deterministic case. With a few regularity conditions on the coefficients, the solutions are continuous as a function of the integrators, and one can use this to construct stochastic examples from deterministic ones.
For any process {X_t, t ≥ 0}, let X*_t = sup_{0≤s≤t} |X_s|. Let {N_t, t ≥ 0} be a continuous semimartingale with N_0 = 0, and let Z_t = (U_t, V_t), where U and V are solutions of the system (7.40).

Proof. We can suppose that the initial data are bounded; Gronwall's inequality then yields the basic estimate, and if we use the (very coarse) inequality LN*_T ≤ e^{LN*_T}, we get (7.41).
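The form of Gronwall's inequality invoked here is the standard one (stated for completeness):

```latex
\phi(t) \le A + L\int_0^t \phi(s)\,ds \quad (0 \le t \le T)
\;\Longrightarrow\;
\phi(t) \le A\,e^{Lt}, \qquad 0 \le t \le T,
```

for a nonnegative, bounded, measurable φ; combined with the coarse bound LN*_T ≤ e^{LN*_T}, it produces estimates of the exponential form appearing in (7.41).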
Suppose that M_t is another continuous semimartingale with M_0 = 0, and consider the system (7.42).

Proposition 7.2 Let f and g be Lipschitz, and suppose that f is also differentiable with a Lipschitz continuous derivative. Let T > 0, and let Z^M = (U^M, V^M) and Z^N = (U^N, V^N) be solutions of (7.42) and (7.40) respectively, with the same initial value Z_0. Then there exists a function C(T, M, |Z_0|), increasing in each of the variables, such that the comparison estimate below holds.

Proof. Let t ≤ T, and let L dominate the Lipschitz constants of f, f′, and g. Then, by Gronwall's inequality, we obtain the bound for t ≤ T. Now use the bound on Z^{M*}_T given in Lemma 7.1 to finish the proof.

Remark 7.3 It is easy to modify the above in terms of an integral bound, for some C_1 and C_2 which depend on T, Z_0 and the bounds on N and M. We will usually use this when M_t = W_t and N_t = ∫_0^t h(s) ds for some deterministic function h, in which case we get

Corollary 7.4 Let f ∈ C^1(R) and suppose f′ is Lipschitz. Let h ∈ L^1_loc on [0, ∞) and let Û_t(x, y) and U_t(x, y) be the respective solutions of the deterministic and stochastic equations.

Let us look at a simple example in which both existence and uniqueness can fail. Let g(x, y) ≡ 0 and choose f vanishing on [0, ∞).

Proposition 7.5 There exists a set Ω_0 with P{Ω_0} > 0 such that on Ω_0, the problem (7.48) has no solution if x < 0 and two solutions if x > 0.
Proof. Consider the solution U_t(y) of the initial value problem with U_0 = 0, U′_0 = y. It follows that the image L_t of the y-axis under the map (0, y) → Z_t(y) is a pair of rays, which are the images of the positive and negative y-axes.
The image of the positive y-axis is easy to determine. If y > 0, U_t is initially strictly positive, which means that f(U_t) = 0 and U″_t = 0, so U′_t is constant, hence U_t(y) ≡ yt for all t ≥ 0. Thus Z_t(y) = (yt, y), and the image of the positive y-axis is contained in the positive quadrant for all t ≥ 0.

Now consider the image of the negative y-axis. Let τ = inf{t > 0 : U_t = 0}. Suppose that τ < 1 and U′_τ(−1) > 0. In that case, the Markov property implies that Z_{τ+t}(−1) will be in the first quadrant for all t > 0, and hence that the image of the negative y-axis at time t = 1 is in the first quadrant. Thus L_1 is in the first quadrant. But this means that (7.48) has no solution if y < 0 and two solutions if y > 0.
To see that it is possible that τ < 1, consider the deterministic problem whose solution is U(t) = −(1/2π) sin 2πt. At time τ = 1/2, U = 0 and U′ = 1. By Proposition 7.2, the solution of (7.50) will be uniformly close to the solution of (7.51) on the set where sup_{0≤t≤1} |W_t + 4π^2 t| is small. This set has positive probability, so there is positive probability that the solution of (7.51) will have τ < 1 and U′_τ > 0. But now, by uniqueness, the solutions of (7.51) and (7.48) are identical up to time τ, which means that the probability that τ < 1 and U′_τ > 0 is strictly positive for the latter, too, which is what we wanted to show.

Proof of Theorem 2.10
Proof. The problem has a unique solution if f is affine, by Theorem 2.1. On the other hand, we claim that the solution to (2.7) cannot be unique if f is not affine. Indeed, if f(x_0) = 0, then U ≡ x_0 is a global solution of the problem with ξ = η = x_0.
But by Theorem 2.9, if f is not affine, there exists a set Ω_0 of positive probability and a non-constant function Û which is a solution on Ω_0. But then the function Ũ, defined to equal Û on Ω_0 and to equal x_0 off Ω_0, is another global solution.

Non-Uniqueness
In this section, we will use U_t for the solution of the two-point boundary-value problem (8.52); we will use U_t(y) for a solution of the initial value problem (8.53); and we will use Û_t(y) for a solution of the deterministic initial value problem (8.54).

Let us consider some properties of the solutions of (8.54). Define T_m(y) to be the time of the m-th return of Û_t(y) to zero, and note that y > 0 =⇒ T_1(y) > 0. Then it is shown in [8] that

Theorem 8.1 (Schaaf) Suppose that f ∈ C^1 and that f′ is Lipschitz. Then if f(x) > 0 in some interval 0 < x < a, there exists b > 0 such that T_1(y) < ∞ for 0 < y < b.

Lemma 8.2 Let Û(y) be a solution of (8.54). Let τ > 0 and define Ũ_t(y) = Û_{τt}(y/τ). Then Ũ_t(y) is also a solution of (8.54) with f replaced by τ^2 f.
Lemma 8.3 Let f, a, and b be as above. If f is not linear on (0, a), there exists y ∈ (0, b) such that T_1′(y) ≠ 0.
Proof. By Theorem 4.2.5 of [8], there is an explicit formula for T_1 on (0, z). (The theorem is proved under additional hypotheses on f, but this formula holds without them.) From it one sees that if f is not linear, T_1 is not constant on (0, b), and the lemma follows.

Proof of Theorem 2.9
Our strategy is to first find a solution of the deterministic initial value problem which does about what we want, and then approximate it by a stochastic solution.
Proof. (i) If f is not affine, f′ is not constant, and we can find a point x_1 such that f′ is not constant in any neighborhood of x_1 and, moreover, f(x_1) ≠ 0. We will assume that x_1 > x_0, since the case x_1 < x_0 is treated almost identically, and by replacing f by −f if necessary we may fix a convenient sign. By replacing U_t by U_t − x_0 and f(u) by f(u + x_0), we may assume x_0 = 0 and f(0) = 0. Then by hypothesis, f is not affine on (0, x_1), so that by Lemma 8.3 there exist b > 0 and y ∈ (0, b) such that T′(y) ≠ 0. We assume T′(y) > 0. (The case T′(y) < 0 is similar.) As T is continuous, there is an interval around y on which this persists, and the deterministic solution Û_t has the required endpoint behavior. Now define Ũ_t(y) = Û_{τt}(y/τ). By Lemma 8.2, Ũ(y) satisfies Ũ″(τy) = −τ^2 f(Ũ(τy)). We will now approximate Ũ by a solution U of the stochastic initial-value problem; this U is a solution of the two-point boundary value problem (8.52). It is not identically zero, since U′_0 = Y ≠ 0. Thus we have found a non-constant local solution.
To find a global solution, note that U ≡ 0 is always a solution, so that the function which equals the local solution on Ω_0 and vanishes off Ω_0 is a global solution which is not identically zero. Since U ≡ 0 is also a global solution, this finishes the proof of (i).
(ii) As in (i), we begin by constructing solutions of the deterministic initial-value problem. This time, we want the solutions to return repeatedly to the origin. The key observation is that if a solution of the deterministic problem returns to the origin twice, it is actually periodic, and returns infinitely often. By rescaling, we can find solutions that have different numbers of zeros in [0, 1]. We then use the approximation theorem to find similar solutions to the stochastic problem.
By translating the problem, we may assume that x_0 = 0, so that f(0) = 0. By [8] there exists b > 0 such that if |y| < b, T_1(y) < ∞. Now T_2(y) = T_1(y) + T_1(−y) < ∞. It follows that the solution Û_t(y) of the deterministic initial-value problem (8.54) is periodic, so that T_m(y) < ∞ for all m. Now f is not affine on (−x_1, x_1), so that by Lemma 8.3, T_1(y) is not constant on (−b, b). There are two cases. If T_2(y) is not constant, then neither is T_{2m}(y) = mT_2(y). On the other hand, if T_2(y) is constant, then T_{2m+1}(y) = mT_2(y) + T_1(y) must be non-constant. We will do the first case in detail, and just indicate the modifications needed for the second case.

Let k ≥ 1 be an integer and choose τ > 0 so that there exist at least k consecutive integers m_1, ..., m_k and points c_1 < c_2 with T_2(c_1) < τ/m_j < T_2(c_2) for each j. But m_j T_2 = T_{2m_j}, so that T_{2m_j}(c_1) < τ < T_{2m_j}(c_2). Thus by the intermediate value theorem, there is a unique y_j ∈ (c_1, c_2) for which T_{2m_j}(y_j) = τ. Thus for small δ > 0, Û_τ(y_j + δ) < 0 < Û_τ(y_j − δ), for Û_τ(y_j) = 0 and Û′_τ(y_j) = y_j > 0, while T_2(y) increases with y.
Let Û^{k,j}_t(y) = Û_{τt}(y/τ). Then by Lemma 8.2, Û^{k,j}(τy) satisfies the initial-value problem with f replaced by τ^2 f. It follows from (8.58) that on Ω_{ρ,τ},

Û_1(y_j + δ) < −ε/2 < ε/2 < Û_1(y_j − δ),

so, by the intermediate value theorem, there exist random variables Y_j ∈ (τy_j − τδ, τy_j + τδ) such that U_1(Y_j) = 0. The Y_j are distinct because the intervals containing them are disjoint, so that the functions 0, U_t(Y_1), ..., U_t(Y_k) are k + 1 distinct (because their initial derivatives are unequal) solutions of the two-point boundary-value problem (8.52). This shows that there are at least k distinct local solutions of the problem on the set Ω_{ρ,τ}. It is easy to construct global solutions from these, since U ≡ 0 is a global solution: for each j, let U^{j,k}_t(ω) = U_t(Y_j, ω) for ω ∈ Ω_{ρ,τ} and U^{j,k}_t(ω) = 0 otherwise. For each j and k this is a global solution, and there are infinitely many of them.
In the remaining case, where T_2(y) is constant, T_1(y) must be non-constant. It follows that for each m, T_{2m+1}(y) is not constant, so we can simply use the above argument, replacing T_{2m} by T_{2m+1}.

Proof of Theorem 2.12
We will construct a deterministic equation whose coefficients are semi-linear approximations of f and g, and whose solutions have the desired behavior. We then compare the solutions of the stochastic and deterministic equations. Most of the work in the proof comes in handling the solutions for large values of the initial derivative, where we cannot use continuity arguments.
We begin with two lemmas. The first constructs the deterministic equation, and the second shows that when the initial derivative is large, the solution becomes large very quickly, regardless of the coefficients of the equation.
Let us first define semi-linear functions f_0(x) and g_0(x, y). Let … and consider the deterministic problem …. This is a second-order ODE, and we can solve it explicitly. Notice that if y > 0, U_t will be positive for small t, and if y < 0, it will be negative for small t. Then if A_±(θ) > 0, the solution is …, and if A_± < 0, the solution changes sign periodically: … and the solution is periodic with period π/γ_+ + π/γ_−. If y < 0, U_t is initially negative, so it is given by (8.60) and (8.61) with + and − interchanged.
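To make the piecewise description concrete, here is a sketch of the explicit solution in the oscillatory case A_± < 0 with y > 0. The names `u_explicit`, `Ap`, `Am` are ours, standing in for A_±(θ); this is an illustration of the structure, not the paper's formulas (8.60)–(8.61) verbatim. A positive sine arc of length π/γ_+ is followed by a negative one of length π/γ_−, and the pattern repeats with period π/γ_+ + π/γ_−.

```python
import math

def u_explicit(t, y, Ap, Am):
    """Sketch of the explicit solution in the oscillatory case Ap < 0,
    Am < 0, u(0) = 0, u'(0) = y > 0: Ap drives the equation u'' = Ap*u
    while u > 0, and Am drives u'' = Am*u while u < 0."""
    gp, gm = math.sqrt(-Ap), math.sqrt(-Am)
    period = math.pi / gp + math.pi / gm     # period claimed in the text
    s = t % period
    if s <= math.pi / gp:                    # positive arc, frequency gp
        return (y / gp) * math.sin(gp * s)
    # negative arc, frequency gm; the slopes match (-y) at the junction
    return -(y / gm) * math.sin(gm * (s - math.pi / gp))
```

With Ap = Am = −π² both frequencies equal π and the formula collapses to (y/π)·sin(πt), of period 2.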
Proof. This comes down to showing that we can choose θ so that U_1(y) has the same sign for all y. Equivalently, we will show that t = 1 falls in the intersection of some interval of positivity of both U_t(1) and U_t(−1), or in some interval of negativity, i.e. either 1 ∈ (t_{2k}, t_{2k+1}) ∩ (s_{2j−1}, s_{2j}) or 1 ∈ (t_{2k−1}, t_{2k}) ∩ (s_{2j}, s_{2j+1}) for some j and k.
Note that the A_±(θ) are linear functions of θ, and consider the two lines in the z–θ plane: …, these lines are parallel but unequal, so that there is a θ_0 such that either
Thus, let us first suppose that one of the two alternatives of (8.62) holds. Since the proof is virtually the same for the two, we will assume that A_−(θ_0) < −π² < A_+(θ_0) and show that U_1(y) > 0 for all y ≠ 0. (If the other alternative holds, we simply show that U_1(y) < 0 for y ≠ 0.) Now if y > 0, U^d_t(y) is either a positive constant times sinh(γ_+ t) > 0 (if A_+(θ_0) > 0), or a positive constant times sin(γ_+ t) (if A_+(θ_0) < 0), which is again positive for 0 < t ≤ 1 since γ_+ < π. In either case, U_1(y) > 0. On the other hand, if y < 0, the solution is initially equal to a negative constant times sin(γ_− t). This holds until t = π/γ_− < 1, when it changes sign and, depending on the sign of A_+(θ_0), equals a positive constant times either sin(γ_+(t − π/γ_−)) or sinh(γ_+(t − π/γ_−)). In both cases, U_t(y) will still be positive at time t = 1. This leaves the case where a_+ = a_−, a_+ and a_− have the same sign, and there exists θ_0 such that A_−(θ_0) = A_+(θ_0) = −π². The idea is nearly the same, but this time we use the fact that U is periodic, and wait for it to come around a second time. Assume both a_± are positive (the case where both are negative is similar) and, to be concrete, assume …. But now, recall the times s_j and t_j defined in Remark 8.4. It is easy to see from (8.63) that t_2 < s_3 < 1 < t_3 < s_4, so that 1 ∈ (t_2, t_3) ∩ (s_3, s_4), which is an interval of positivity of U_t(y) for both y > 0 and y < 0. In other words, for this value of θ, U_1(y) > 0 for all y ≠ 0, as claimed. This finishes the proof.
Proof. It is intuitively evident that if the initial velocity is large, the velocity will remain large for a certain time, regardless of f and g, so that T_M will be small. The work comes in verifying the uniformity in ŷ. We rescale: let Ū_t = U_t/ŷ and V̄_t = V_t/ŷ.

It remains to show (i).
To do this, integrate (3.9) by parts, setting x = 0: … for constants C and D. Integrating, we see that …. But T_M(ŷ) is the first time Ū_t(ŷ) − Ū_τ(ŷ) hits M/ŷ, and it clearly decreases to τ as ŷ → ∞. Thus there is some N > N_1 such that if ŷ > N, then C/ŷ + D(T_M(ŷ) − τ) < ρ, and (i) follows. This finishes the proof for the solution of the stochastic equation. Since the only thing we used about Brownian motion above was that |W_t| ≤ K, the argument also applies to the solution of (8.59) as soon as |θ| ≤ K.
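The intuitive statement at the start of this proof can be checked in a purely deterministic caricature; the bound C below is a hypothetical constant of our own, not one appearing in the paper. Suppose the total forcing satisfies |U″_t| ≤ C while U_t ≤ M, with U_0 = 0 and V_0 = ŷ > 0. Then

```latex
\begin{aligned}
V_t &\ge \hat y - Ct, \qquad
U_t \ge \hat y\,t - \tfrac{C}{2}\,t^2,\\[2pt]
U_{2M/\hat y} &\ge 2M - \frac{2CM^2}{\hat y^2} \ge M
\quad\text{as soon as } \hat y^2 \ge 2CM,
\end{aligned}
```

so T_M(ŷ) ≤ 2M/ŷ → 0 as ŷ → ∞, uniformly in the choice of forcing. This is the qualitative behaviour the lemma establishes for the stochastic equation.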
Proof. (Of Theorem 2.12.) We will show that there is a set Ω_δ and a r.v. Z such that the solution U_1(y) of the initial-value problem is either strictly greater than Z for all y, or strictly less than Z for all y. This implies that there is no global solution of the two-point problem. In order to handle the case of large initial derivatives y, we note that if y is large, the solution spends almost all of its time in the region |U_t| > M, and in this region f and f_0 are close.
The key step is to show that for δ small enough and N large enough, if ω ∈ Ω_δ, then either U_1(y) > 0, ∀|y| > N, or U_1(y) < 0, ∀|y| > N. (8.66) Once we have this, we simply note that y → U_1(y) is continuous, so max_{|y|≤N} |U_1(y)| is finite; hence U_1(y) is bounded either below or above, and we can take Z accordingly. Now consider the case y < 0. The proof is quite similar, with one extra step. Indeed, the idea is that during the time that the solutions are greater than M in absolute value, the coefficients of the deterministic and stochastic equations are very nearly the same, so the solutions are close. On the other hand, if y is large, the times when they are smaller than M in absolute value are so short that the solutions have to behave in approximately the same way.
The solution will initially be negative. The solution of the deterministic equation will return to zero at a time s_1 < 1, and will then stay positive from s_1 to 1. We want to show that the stochastic solution does the same thing. Let T_{−M}(y) and T^d_{−M}(y) be the first times that U_t(y) and U^d_t(y) respectively hit −M, and let T′_{−M}(y) and T′^d_{−M}(y) be the next times they (respectively) hit −M. Let τ_1 be the maximum of T_{−M}(y) and T^d_{−M}(y), and let τ_2 be the minimum of T′_{−M}(y) and T′^d_{−M}(y). By Lemma 8.6, τ_1 decreases to zero as y tends to −∞, so for |y| large enough, τ_1 < τ_2 < 1. Moreover, the deterministic solution is symmetric about its minimum, so that V^d …; it follows that V_{T′_{−M}(y)}(y) > (1 − 2ρ)|y|. Now apply Lemma 8.5 again with τ = T′_{−M}(y) and ŷ = V_τ(y) to see that U_t(y) hits zero again before time t = 1, and that at that time V ≥ (1 − 3ρ)|y|. The process then re-starts from U = 0, V ≥ (1 − 3ρ)|y|, so the argument for y > 0 applies again to show that U_t will still be positive at time t = 1. Thus, if we choose N large enough, then for ω ∈ Ω_δ and |y| > N we have U_1(y) ≥ 0. This finishes the proof.

Proposition 3.1
The equations (3.9) and (3.10) have the same set of solutions.