Level Sets of Multiparameter Brownian Motions

We use Girsanov's theorem to establish a conjecture of Khoshnevisan, Xiao and Zhong that φ(r) = r^{N-d/2}(log log(1/r))^{d/2} is the exact Hausdorff measure function for the zero level set of an N-parameter d-dimensional additive Brownian motion. We extend this result to a natural multiparameter version of Taylor and Wendel's theorem on the relationship between Brownian local time and the Hausdorff φ-measure of the zero set.


Introduction and results
Let X : R^N → R^d be a multiparameter additive Brownian motion, that is, X has the decomposition

X(t) = X_1(t_1) + X_2(t_2) + · · · + X_N(t_N),

where the X_i are independent, two-sided d-dimensional Brownian motions. The aim of this paper is to establish a conjecture of Khoshnevisan, Xiao and Zhong, cf. [5, Problem 6.3], that if 2N > d, then for any bounded interval I ⊂ R^N,

m_φ({t : X(t) = 0} ∩ I) < ∞ a.s.,

where φ(r) = r^{N-d/2}(log log(1/r))^{d/2} and m_φ denotes the Hausdorff φ-measure. Note that the question does not arise for 2N ≤ d since a.s. X(t) ≠ 0 for all t ≠ 0 (see e.g. [7, Proof of Theorem 1 (b), p. 15] or [8]).
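As a concrete illustration (not from the paper), the additive decomposition can be simulated directly: for d = 1 and t ≥ 0, X(t) is a sum of N independent Brownian values, so Var X(t) = t_1 + · · · + t_N. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def additive_bm_samples(t, rng, n_samples=20_000, n_steps=256):
    """Sample X(t) = X_1(t_1) + ... + X_N(t_N) for d = 1, t_i >= 0.

    Each X_i(t_i) is built from independent Gaussian increments,
    mimicking a Brownian path on [0, t_i]."""
    x = np.zeros(n_samples)
    for t_i in t:
        dt = t_i / n_steps
        x += rng.normal(0.0, np.sqrt(dt), (n_samples, n_steps)).sum(axis=1)
    return x

rng = np.random.default_rng(0)
x = additive_bm_samples((0.7, 0.8), rng)   # N = 2 parameters
print(x.mean(), x.var())                   # approximately 0.0 and 1.5
```

Since the X_i are independent, the empirical variance should match t_1 + t_2 = 1.5 up to Monte Carlo error.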
The conjecture followed the result of Xiao [14], as observed in [5], that m_φ({t : X(t) = 0} ∩ I) > 0 a.s. if L(I) > 0, for L the local time as defined below in (1.1), and so implies that φ is the exact Hausdorff measure function for the zero level set of X.
In one dimension the first result of this kind was due to Taylor and Wendel [13], who showed that if X is a one-dimensional Brownian motion, there exists a positive finite constant c such that m_φ({s : X(s) = 0, s ≤ t}) = c L(t) a.s., for all t > 0, where L is the local time at zero of X, that is,

L(t) = lim_{ε↓0} (1/2ε) |{s ≤ t : |X(s)| ≤ ε}|.

We adopt arguments found in Perkins's and Taylor and Wendel's articles, and ultimately prove the following result.

Theorem 1.1 Let X be an N-parameter d-dimensional additive Brownian motion with 2N > d. There exists a positive finite constant c such that a.s., for every bounded interval I ⊂ R^N,

m_φ({t ∈ I : X(t) = 0}) = c L(I),

where L is the local time at zero of X,

L(I) = lim_{ε↓0} (1/(c_d ε^d)) ∫_I 1{‖X(t)‖ ≤ ε} dt,     (1.1)

where c_d is the d-dimensional volume of the unit sphere and ‖·‖ denotes Euclidean norm in R^d.
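In the classical case N = d = 1, the occupation-density definition of L can be checked numerically: by Lévy's theorem the local time at 0 up to time 1 has the law of |X(1)|, so its mean is √(2/π) ≈ 0.798. A Monte Carlo sketch (not from the paper; the step count and ε are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, eps = 1000, 8000, 0.05
dt = 1.0 / n_steps

# Brownian paths on [0, 1] from cumulative Gaussian increments.
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

# Occupation-density estimate of the local time at 0 up to time 1:
#   L(1) ~ (1 / 2 eps) Leb{s <= 1 : |X(s)| <= eps}.
L = (np.abs(paths) <= eps).sum(axis=1) * dt / (2 * eps)

# By Levy's theorem L(1) has the law of |X(1)|, so E L(1) = sqrt(2/pi).
print(L.mean())   # approximately 0.80
```

The estimate carries an O(ε) bias and Monte Carlo error, so only rough agreement should be expected.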
Remark 1.2 It follows from [14] that the constant c is necessarily strictly positive.
For proofs of the existence of local times and their continuity properties in the multiparameter context, see [3] and [4].
Recall that by an exact Hausdorff measure function for a set E is meant a function ψ(r), defined for small r ≥ 0, vanishing at the origin, increasing and continuous, such that the Hausdorff ψ-measure of E,

m_ψ(E) = lim_{δ↓0} inf { Σ_i ψ(|I_i|) : E ⊂ ∪_i I_i, |I_i| ≤ δ },

is almost surely positive and finite, where |I_i| is the diameter of the set I_i. See [2] or [11]. Essentially, there is at most one correct function ψ for a given set E, in the sense that if m_{ψ_1}(E) ∈ (0, ∞) and m_{ψ_2}(E) ∈ (0, ∞), then ψ_1(r)/ψ_2(r) is bounded away from 0 and ∞ as r ↓ 0.

A natural covering of, say, the zero set of X in the interval [1, 2]^N is obtained by dividing the cube into 2^{Nn} subcubes of side length 2^{-n} and taking as a cover the collection of subcubes which intersect the zero set. Now, since the variation of X on such a cube is of order 2^{-n/2}, we can see that the probability that a given subcube intersects the zero set is of order 2^{-nd/2} and so, for the resulting cover {I_i^n},

E[ Σ_i ψ(|I_i^n|) ] ≤ K

for ψ(r) = r^{N-d/2} and some finite constant K not depending on n. By Fatou's lemma, a.s. there exists a sequence of covers of the zero set intersected with [1, 2]^N, {I_i^n}_{n≥1}, for which the maximal diameter tends to zero and for which Σ_i ψ(|I_i^n|) remains bounded. This implies that m_ψ({t : X(t) = 0} ∩ [1, 2]^N) < ∞ for ψ(r) = r^{N-d/2}. But it does not, conversely, show that m_ψ > 0: m_ψ is defined via optimal coverings rather than individual, given coverings. In fact there exist better coverings for which the lengths of the diameters vary, and ultimately it can be shown that m_ψ({t : X(t) = 0}) = 0.

For our purposes we note that, restricting to the planar axes P_i = {t : t_i = 0}, we have, for all R, m_ψ({t : X(t) = 0} ∩ P_i ∩ B_R) < ∞ a.s., for ψ(r) = r^{N-1-d/2} and B_R = {t : ‖t‖ ≤ R}. Finally, one can show the following result.

Lemma 1.3 For X a multiparameter additive Brownian motion from R^N to R^d and φ as previously defined, a.s., for each i = 1, ..., N and all R,

m_φ({t : X(t) = 0} ∩ P_i ∩ B_R) = 0.

A natural way to get good coverings of an awkward set such as the zero set of X is to exploit a kernel L : Ω × B(R^N) → R_+ which is a.s. supported by the zero set.
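In the simplest case N = d = 1, the claim that a side-2^{-n} subcube of [1, 2] meets the zero set with probability of order 2^{-nd/2} = 2^{-n/2} can be checked exactly: for a one-dimensional Brownian motion, P{∃ zero in (a, b)} = (2/π) arccos(√(a/b)). A small sketch (not from the paper) compares this with the predicted order:

```python
import math

def p_zero(a, b):
    """P{a one-dimensional Brownian motion has a zero in (a, b)},
    0 < a < b, by the classical formula (2/pi) arccos(sqrt(a/b))."""
    return (2 / math.pi) * math.acos(math.sqrt(a / b))

for n in (8, 12, 16, 20):
    ratio = p_zero(1.0, 1.0 + 2.0 ** -n) / 2.0 ** (-n / 2)
    print(n, ratio)   # the ratio tends to 2/pi as n grows
```

So the probability is asymptotically (2/π) 2^{-n/2}, matching the order 2^{-nd/2} used above.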
If we could choose a disjoint random cover {I_i} of, say, {t : X(t) = 0} ∩ [1, 2]^N for which a.s. φ(|I_i|) ≤ L(ω, I_i) for every i, then we would have, a.s.,

Σ_i φ(|I_i|) ≤ Σ_i L(ω, I_i) ≤ L(ω, [1, 2]^N) < ∞.
We propose to follow this heuristic with the kernel L given by multidimensional local time defined by (1.1).
A problem is that it may be unavoidable that a covering includes some intervals for which the inequality φ(|I_i|) ≤ L(ω, I_i) does not hold. We need a way of picking the covering so that Σ φ(|I_i|) is small, where the sum is over those I_i for which the relation φ(|I_i|) ≤ L(ω, I_i) fails.
For us this amounts to finding good probability bounds on the law of the iterated logarithm failing for local times in the multidimensional setting, and this represents the bulk of our work. Essentially we want to get a reasonable lower bound on the probability of the local time being large: we begin with product Wiener measure, P, over the space of R^d-valued continuous functions V_i, i = 1, ..., N, defined over [0, t]. We then consider the (equivalent) measure Q with respect to which the V_i are independent Ornstein-Uhlenbeck processes,

dV_i(r) = dB_i(r) − cV_i(r) dr,

where the B_i are independent Wiener processes under Q, and use the equality

P{A} = ∫_A (dP/dQ) dQ

to estimate the Wiener measure of an event A.

The paper is planned as follows. In Section Two we analyze certain additive Ornstein-Uhlenbeck processes and establish various "laws of large numbers". In terms of the above equation, this means finding good bounds on Q{A} for relevant A. In Section Three we use Girsanov's theorem to transform information from Section Two concerning Ornstein-Uhlenbeck processes into information on multiparameter Brownian motions. This results in good bounds for dP/dQ, which are then used to obtain a large deviations result for multiparameter Brownian motions and local times. The fourth section proves (see Corollary 4.2) that for X an additive Brownian motion, there exists a positive finite constant K such that a.s., for any interval I, m_φ({t ∈ I : X(t) = 0}) ≤ K L(I). Finally, in Section 5 we arrive at the proof of Theorem 1.1.
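The strategy around the identity P{A} = ∫_A (dP/dQ) dQ can be illustrated numerically for a single one-dimensional factor: simulate V under Q, where V is an Ornstein-Uhlenbeck process, weight each path by dP/dQ, and recover a Wiener-measure probability. The sketch below is not from the paper; the parameter values are illustrative, and dP/dQ = exp(c ∫ V dV + (c²/2) ∫ V² ds) is the standard one-factor Girsanov density.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
c, n_paths, n_steps = 0.5, 40_000, 256
dt = 1.0 / n_steps

# Simulate dV = dB - c V dr under Q (Euler-Maruyama), V(0) = 0.
v = np.zeros(n_paths)
int_v2 = np.zeros(n_paths)          # Riemann sum for int_0^1 V^2 ds
for _ in range(n_steps):
    int_v2 += v ** 2 * dt
    v += rng.normal(0.0, math.sqrt(dt), n_paths) - c * v * dt

# dP/dQ = exp(c int V dV + (c^2/2) int V^2 ds); by Ito's formula,
# int_0^1 V dV = (V(1)^2 - V(0)^2 - 1) / 2.
w = np.exp(c * (v ** 2 - 1.0) / 2 + (c ** 2 / 2) * int_v2)

# Reweighted OU samples recover Wiener measure: P{V(1) > 1/2}.
est = (w * (v > 0.5)).mean()
exact = 0.5 * math.erfc(0.5 / math.sqrt(2.0))
print(est, exact)   # both approximately 0.31
```

The weights have Q-mean 1, so `w.mean()` is a useful sanity check on the discretization.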
Notation: in this paper we use the standard analogues of one-dimensional terms; that is, an interval I ⊂ R^N is of the form I_1 × I_2 × · · · × I_N where the I_i are one-dimensional intervals, be they open, closed or half-open, half-closed. We say multi-time interval when we wish to emphasize the higher-dimensional aspect. Two points u = (u_1, u_2, ..., u_N), v = (v_1, v_2, ..., v_N) ∈ R^N satisfy the relation u < v (resp. u ≤ v) if and only if for every i = 1, 2, ..., N, u_i < v_i (resp. u_i ≤ v_i). For two such vectors, [u, v] will denote the interval [u_1, v_1] × · · · × [u_N, v_N]. Given a real number s and a given dimension, the vector s will denote the vector of the given dimension, all of whose components are equal to s in value. Given a vector u and a scalar s, u + s will denote the vector u + s.
As is common, throughout, c, C, k, K will denote constants whose specific value may change from line to line, or even from one side of an inequality to the other. For v a vector in R^d and Σ a positive definite d × d matrix, N(v, Σ) will denote the corresponding Gaussian distribution.

Local times for Ornstein-Uhlenbeck processes
Let (Ω, F, Q) be a complete probability space and let X^c : R_+^N → R^d be an additive Ornstein-Uhlenbeck process, that is,

X^c(t) = X^{c,1}(t_1) + · · · + X^{c,N}(t_N),

where the (X^{c,i}(r), r ≥ 0), i = 1, ..., N, are d-dimensional Ornstein-Uhlenbeck processes defined on (Ω, F, Q), each independent of the others, satisfying

dX^{c,i}(r) = dW_i(r) − cX^{c,i}(r) dr,

where the W_i are d-dimensional independent Brownian motions on (Ω, F, Q).
As is well known, we can write

X^{c,i}(r) = e^{-cr} ( X^{c,i}(0) + B_i((e^{2cr} − 1)/(2c)) ),

for suitable d-dimensional Brownian motions B_i. We consider the local time at 0 of X^c in the time interval [0, t]^N, defined as in (1.1), that is,

L^c([0, t]^N) = lim_{ε↓0} (1/(c_d ε^d)) ∫_{[0,t]^N} 1{‖X^c(r)‖ ≤ ε} dr.

We start by proving the following result.
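For a single one-dimensional factor started at 0, the dynamics dX^c = dW − cX^c dr give Var X^c(r) = (1 − e^{-2cr})/(2c), which is what the time-changed Brownian representation predicts. A quick Euler check (not from the paper; the parameters are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
c, r, n_paths, n_steps = 1.0, 1.0, 40_000, 512
dt = r / n_steps

# Euler scheme for one factor dX = dW - c X ds, X(0) = 0, d = 1.
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += rng.normal(0.0, math.sqrt(dt), n_paths) - c * x * dt

# The time-changed representation predicts Var X(r) = (1 - e^{-2cr})/(2c).
print(x.var(), (1.0 - math.exp(-2 * c * r)) / (2 * c))   # both approx 0.43
```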

where for f, g real functions, f = O(g) means that there exists a finite positive constant K such that |f | ≤ K|g|.
Proof. We start by proving (2.2). One can easily check that

E_Q[ L^c([0, t]^N) ] = ∫_{[0,t]^N} p_c(0, r) dr,

where p_c(0, r) is the conditional density at 0 of X^c(r) given X^c(0). Note that by (2.1),

Then, if r ∈ H, using the condition on X^c(0), it is easily checked that, for t ≪ 1,

Therefore,

where K is a positive finite constant not depending on t. Hence,

and this concludes the proof of (2.2). In order to prove (2.3) we write

E_Q[ L^c([0, t]^N)² ] = ∫_{[0,t]^N} ∫_{[0,t]^N} p_c(0, 0, r, r′) dr dr′,

where p_c(0, 0, r, r′) denotes the conditional joint density at (0, 0) of the random vector (X^c(r), X^c(r′)) given X^c(0). Note that, by (2.1), p_c(0, 0, r, r′) can be written as the product of p_c(0, r′) and the conditional density of X^c(r), given X^c(r′) and X^c(0), at the corresponding random point. The latter is a Gaussian density with mean and covariance matrix given by

Then, the proof of (2.3) follows along the same lines as the proof of (2.2) by considering the different cases of r, r′ being in H and H^c ∩ [0, t]^N, and using the estimates of ρ(r, r′) obtained in [1, Section 3] and the condition on X^c(0). ♣ Using Lemma 2.1 one easily deduces the following result.
Proposition 2.2 Let X^c be an additive Ornstein-Uhlenbeck process defined as above and such that X^c(0) satisfies the condition above. Then, as t tends down to 0,

Finally, we shall need the following additional result.
Proposition 2.3 Let X^{c,i}, i = 1, ..., N, be independent Ornstein-Uhlenbeck processes defined as above and such that X^c(0) satisfies the condition above, where h is a positive finite constant. Then, as t tends down to zero,

Therefore,

A similar calculation shows that the remaining term is o(1).
Therefore, as t tends down to 0, and the desired result follows. ♣

Girsanov's theorem
For the following, we refer the reader to [10, Chapter VIII]. Let X : R^N → R^d be an additive Brownian motion on the standard d-dimensional Wiener space (Ω, F, P), that is,

X(t) = X_1(t_1) + · · · + X_N(t_N),

where the X_i are independent d-dimensional Brownian motions. We define a probability measure Q on (Ω, F) by

dQ/dP |_{F_t} = Π_{i=1}^N exp( −c ∫_0^t X_i(s) · dX_i(s) − (c²/2) ∫_0^t ‖X_i(s)‖² ds ),     (3.1)

where {F_t}_{t≥0} denotes the natural filtration of the Brownian motions. By Girsanov's theorem, under Q, the processes (X_i(s), 0 ≤ s ≤ t) are independent d-dimensional Ornstein-Uhlenbeck processes, for i = 1, ..., N, and the processes

B_i(s) = X_i(s) − X_i(0) + c ∫_0^s X_i(r) dr

are martingales, and in fact Brownian motions. Then, we have the following result.
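For a single factor, the Radon-Nikodym density defining Q is the standard Girsanov exponential; using Itô's formula, ∫_0^t X_i · dX_i = (‖X_i(t)‖² − ‖X_i(0)‖² − td)/2, it admits the closed form below (a routine computation, not specific to this paper):

```latex
\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t}
  = \exp\Bigl(-c\int_0^t X_i(s)\cdot dX_i(s)
              - \frac{c^2}{2}\int_0^t \|X_i(s)\|^2\,ds\Bigr)
  = \exp\Bigl(-\frac{c}{2}\bigl(\|X_i(t)\|^2-\|X_i(0)\|^2-td\bigr)
              - \frac{c^2}{2}\int_0^t \|X_i(s)\|^2\,ds\Bigr).
```

In particular, on events where ‖X_i(t)‖ and ∫_0^t ‖X_i(s)‖² ds are controlled, dP/dQ is bounded below explicitly, which is how the Ornstein-Uhlenbeck estimates of Section Two transfer to Wiener measure.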
Lemma 3.1 Let X be an additive Brownian motion on (Ω, F, P) defined as above and such that ‖X(0)‖ ≤ t/(h log log(1/t)), where h is a positive finite constant not depending on t. Consider the event

Then, as t tends down to 0, Q{A_t} tends to 1.
Proof. Fix ε > 0 and consider the event

where τ_g = inf{s :

which, by the reflection principle, has Q-probability equal to

which tends to 1 as t tends down to 0. Now, consider the events

Under Q, the processes X_i are Ornstein-Uhlenbeck processes with drift indexed by c and so, by the argument above and Proposition 2.3, we have

Thus, given ε fixed to be less than 1/2, we have

♣

We finally arrive at a lower bound for a local time large deviation of the process X.
Proposition 3.2 Let X be an additive Brownian motion on (Ω, F, P) defined as above. Then, for h fixed and sufficiently small and for all t sufficiently small, we have, on the event {‖X(0)‖ ≤ t/(h log log(1/t))},

Proof. Note that by linearity it makes no difference if we assume that

Let Q be the probability measure on (Ω, F) defined in (3.1) with c = h log log(1/t)/t, and consider the event

By Lemma 2.1 and Proposition 2.2, as t tends down to 0, Q{D_t} tends to 1, uniformly over {‖X(0)‖ ≤ t/(h log log(1/t))}. On the other hand, by Lemma 3.1, as t tends down to 0,

♣
We will use Proposition 3.2 in the following way. Fix i ∈ Z^N and consider the cube of side 2^{-2n}

We note that, as for large n,

2^{-2n} ≤ n^{2r} 2^{-2n} + n^{2r+1/4} 2^{-2n} ≪ 2^{-n}, for all r = 1, ..., √n,

we have that

Fix h ∈ (0, 1) and consider the event A(r, h) := A(i, r, h) defined by

Proof. Note that by the strong Markov property, the process (Σ_{j=1}^N X_j(T_j^r + s_j) : s ≥ 0), conditioned on F_r, is equal in law to the process (Σ_{j=1}^N X_j(s_j) : s ≥ 0) started at Σ_{j=1}^N X_j(T_j^r). Then, for n sufficiently large and h sufficiently small, and uniformly over 1 ≤ r ≤ √n,

Therefore, using Proposition 3.2 we obtain the desired result for n large. ♣

We now fix h > 0 such that Lemma 3.3 holds for all n sufficiently large, and let A(r) denote the event A(r, h) for this h fixed.

Proof. Consider the event

By the independent-increments property, scaling and standard Brownian bounds, we have

Finally, by Lemma 3.3,

Corollary 3.5
For h fixed sufficiently small, i > c > 0, and n sufficiently large, the probability that

is bounded by k n^{d+1/2} 2^{-nd} e^{-n^{1/8}/(4N)}, for k not depending on n.

The Hausdorff measure function
In this section, we suppose that h has been fixed at a strictly positive value small enough so that Corollary 3.5 holds. We prove the following result.

We classify the cubes of side 2^{-2n} as follows:
(i) the cube contains no point of {t : X(t) = 0};
(ii) it contains a point of {t : X(t) = 0} but the variation of X in the cube is greater than n2^{-n};
(iii) (i) & (ii) above do not apply, but there is no 2^{-2n} ≤ s ≤ 2^{-n} so that
(iv) (i), (ii) & (iii) do not apply.
We now proceed to the construction of the covering of {t : X(t) = 0} ∩ I. We denote by ∆_i the cube [i 2^{-2n}, (i + 1) 2^{-2n}]. Given a cube ∆_i satisfying (i), let C_i be ∅. For cubes satisfying (ii) or (iii), let C_i = [i 2^{-2n}, (i + 1) 2^{-2n}]. Finally, for cubes satisfying (iv), let C_i = [i 2^{-2n}, i 2^{-2n} + s], where s is the largest s ≤ 2^{-n} such that

By the Vitali covering theorem (see e.g. [2]), we can find a subcollection E of the set of i satisfying (iv), such that the C_i, i ∈ E, are disjoint and the corresponding D_i cover the remaining cubes, where D_i is the cube with the same centre as C_i but 5 times the side length.
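The Vitali selection used here can be sketched greedily: keep the largest cube disjoint from those already kept and repeat; every discarded cube then meets a kept cube at least as large, and so lies in its 5-fold dilate. A minimal one-dimensional illustration (not from the paper; the interval data is arbitrary):

```python
def vitali_select(intervals):
    """Greedy 5r-covering selection for closed intervals (a, b), a <= b.
    Returns a disjoint subcollection S such that every input interval
    is contained in the 5-fold dilate of some member of S."""
    chosen = []
    for a, b in sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True):
        if all(b < c or a > d for c, d in chosen):   # disjoint from chosen
            chosen.append((a, b))
    return chosen

def dilate5(iv):
    """Interval with the same centre and 5 times the length."""
    a, b = iv
    centre, r = (a + b) / 2, (b - a) / 2
    return (centre - 5 * r, centre + 5 * r)

ivs = [(0, 1), (0.5, 1.2), (0.9, 1.0), (3, 3.5), (3.4, 3.6)]
S = vitali_select(ivs)
# Members of S are disjoint, and their 5-fold dilates cover all inputs.
covered = all(any(dilate5(s)[0] <= a and b <= dilate5(s)[1] for s in S)
              for a, b in ivs)
print(S, covered)
```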
Consequently, we consider as a covering of {t : X(t) = 0} ∩ I the cubes C_i constructed for cases (ii) and (iii) together with the cubes D_i, i ∈ E. Now, by definition of the D_i and condition (iv), for i ∈ E,

where, as before, φ(r) = r^{N-d/2}(log log(1/r))^{d/2}. Therefore, since the C_i are disjoint for i ∈ E,

Proof. We write I as the disjoint union of intersections of I with the planar axes P_i, and open rectangles which are disjoint from the planar axes but whose closures are not. For the first case, by Lemma 1.3,

For the second case, without loss of generality we consider a rectangle R contained in the "quadrant" (0, ∞)^N. Then, by Proposition 4.1 and the a.s. continuity of L(Π_{i=1}^N [x_i, y_i]) with respect to the x_i, y_i, we have, a.s.,

The final result follows from the additivity of the local time and of the Hausdorff measure. ♣

Proof of Theorem 1.1

As a prelude to the proof of Theorem 1.1, we show the result for a fixed interval I bounded away from the axes. The extension to the ultimate result will then be standard.

where φ(r) = r^{N-d/2}(log log(1/r))^{d/2}.

Proof. In order to simplify the notation we assume I = [1, 2]^N. It will be clear that the proof covers all the cases claimed.
We will construct a set of random variables {V_M^n : n, M ≥ 0} such that (i) for all δ > 0 there exists M_0 such that for all n and M ≥ M_0,

for some constants c_M → c ∈ (0, ∞) as M → ∞. This will imply the desired result. Let D_n denote the set of time points in R_+^N of the form (i_1 2^{-2n}, i_2 2^{-2n}, · · · , i_N 2^{-2n}), where the i_j are integers. For every n and x ∈ D_n ∩ [1, 2]^N, we write

Note that by Proposition 4.1, there exists a finite, non-random constant k such that

We need some preliminary lemmas.
Lemma 5.2 There exist two positive finite constants c_1, c_2, not depending on n, such that uniformly over x, y ∈ D_n ∩ [1, 2]^N,

Proof. Let g_t(·) denote the density of X(t). Then, for x ≥ 1,

This proves (a). In order to prove (b), let g_{s,t}(·, ·) denote the joint density of the random vector (X(s), X(t)). Then,

g_{s,t}(0, 0) ds dt.
As is easily seen, for s, t ≥ 1, g_{s,t}(0, 0) ≤ c ‖t − s‖^{-d/2} for c not depending on s or t, from which the desired result follows. ♣

This will ensure that ∫_{R^d} f_M(z) dz → c as M tends to infinity. This limit must be finite by inequality (5.3), while, as noted in Remark 1.2, it is necessarily strictly positive.
Proof of Proposition 5.4 We will detail the proof of (5.5); the proof of (5.4) follows along the same lines and is left to the reader. In order to simplify the notation we treat only the case N = 2, but the approach extends to all time dimensions. The approach has some similarities with that of [6].
In order to prove (5.5), we write

We consider two different cases:

by bounds for g_{x,y}(0, 0) and Lemma 5.2.
The sum of these terms over x, y satisfying (A) tends to zero as n tends to infinity.
We need to consider different cases. We first assume that y_1 > x_1 (and so y_1 > x_1 + 2^{-n/2}) and y_2 > x_2 (and so y_2 > x_2 + 2^{-n/2}). Then, by the definition of f_M and the Markov properties of X, we have

Therefore,

The case x_1 > y_1 and x_2 > y_2 follows similarly.
In order to treat the other cases we shall need some basic lemmas.
Proof. It is sufficient to show the corresponding inequality for L_y^n. We have

where u = X(y), v = X(y + (0, 2^{-2n})) and g_{s,t}^{2^{-2n}}(0 | u, v) is the density at 0 of an independent N(0, sI_d) in R^d convoluted with a d-dimensional Brownian bridge at time t going from u at time 0 to v at time 2^{-2n}.
Proof. By basic inequalities for the joint densities, we have

On the other hand, it is easy to see that

Therefore,

and the desired result follows. ♣

Lemma 5.8 Let (B(t), t ≥ 0) be a Brownian motion in R^d with B(0) = 0. Let g_{2^{-2n}}(y | v, x) denote the conditional density at y of B(2^{-2n}) given B(v) = x. Then, for ‖x‖ ≤ 3n² and v ∈ [2^{-n/2}, 1], we have

where K does not depend on n, y, v or x.
Proof. By the independence of the increments of the Brownian motion, we have

As ‖x‖ ≤ 3n², v ∈ [2^{-n/2}, 1] and ‖y‖ ≤ n 2^{-n}, we have

Therefore, for ‖x‖ ≤ 3n², v ∈ [2^{-n/2}, 1], ‖y‖ ≤ n 2^{-n}, and n sufficiently large,

It now suffices to choose K sufficiently large in order to cover the remaining finite number of n's. This proves (i). Equally, with the same hypotheses on v and x but with ‖y‖ ≥ n 2^{-n}, it holds that

where K does not depend on n, y or x.
In the following we need only consider X(x) such that f_M(2^n X(x)) > 0. Now let us define f(x, z) by

(z | x_2 − y_2, X_2(x_2) − X_2(y_2)) − g_{2^{-2n}}(z) f(2^n X(y), 2^n z) dz |,

on {‖X(y)‖ ≤ M 2^{-n}}, and equal to zero on {‖X(y)‖ > M 2^{-n}}. By Lemmas 5.5 and 5.8 this is bounded by K 2^{-2Nn+nd} 2^{-n/3} for K not depending on n. Thus, since the conditional probability that ‖X(y)‖ ≤ M 2^{-n}, given Y_x^{n,M} and f_M(2^n X(x)), is bounded by K 2^{-nd} ‖x − y‖^{-d/2}, we obtain the claimed bound.

for c = lim_{M→∞} ∫_{R^d} f_M(x) dx. Thus, we have that outside a null set this relation holds for all such intervals I with rational endpoints. By the a.s. continuity of L(Π_{i=1}^N [x_i, y_i]) with respect to the x_i, y_i, we deduce the relation simultaneously for all intervals not intersecting the planar axes. Using continuity again and Lemma 1.3, we extend the relation to all I contained in a closed orthant, as in the proof of Corollary 4.2. The final result follows from the additivity of the local time and of the Hausdorff measure, and the fact that any interval I can be split up into disjoint subintervals contained in orthants. ♣