Exceptional points of discrete-time random walks in planar domains

Given a sequence of lattice approximations $D_N\subset\mathbb Z^2$ of a bounded continuum domain $D\subset\mathbb R^2$ with the vertices outside $D_N$ fused together into one boundary vertex $\varrho$, we consider discrete-time simple random walks in $D_N\cup\{\varrho\}$ run for a time proportional to the expected cover time and describe the scaling limit of the exceptional level sets of the thick, thin, light and avoided points. We show that these are distributed, up to a spatially-dependent log-normal factor, as the zero-average Liouville Quantum Gravity measures in $D$. The limit law of the local time configuration at, and nearby, the exceptional points is determined as well. The results extend earlier work by the first two authors who analyzed the continuous-time problem in the parametrization by the local time at $\varrho$. A novel uniqueness result concerning divisible random measures and, in particular, Gaussian Multiplicative Chaos, is derived as part of the proofs.


INTRODUCTION
This note is a continuation of earlier work by the first two authors who in [1] studied various exceptional level sets associated with the local time of random walks in lattice versions D_N ⊂ Z^2 of bounded open domains D ⊂ R^2, at times proportional to the cover time of D_N. The walks in [1] move as the ordinary constant-speed continuous-time simple symmetric random walk on D_N and, upon exit from D_N, re-enter D_N through a uniformly-chosen boundary edge. The re-entrance mechanism is conveniently realized by adding to D_N a boundary vertex ϱ, with all edges emanating out of D_N in Z^2 now ending in ϱ. See Fig. 1 for an example.
In [1], the local time was parametrized by the time spent at ϱ. Through the use of the Second Ray-Knight Theorem (Eisenbaum, Kaspi, Marcus, Rosen and Shi [15]), this enabled a connection to the level sets of the Discrete Gaussian Free Field (DGFF) studied earlier by the second author and O. Louidor [9]. The goal of the present paper is to extend the results of [1] to the more natural setting of a discrete-time random walk parametrized by its actual time. As we shall see, a close connection to the DGFF still persists, albeit now to the DGFF conditioned on vanishing arithmetic mean over D_N. As no version of the Second Ray-Knight Theorem seems available for this specific setting, we have to proceed by suitable, and sometimes tedious, approximations. A key point is to control the fluctuations of the total time of the random walk at a given occupation time of the boundary vertex.
© 2019 Y. Abe, M. Biskup and S. Lee. Reproduction, by any means, of the entire article for noncommercial purposes is permitted without charge.
In order to give the precise setting of our problem, we first consider a general finite, unoriented, connected graph G = (V ∪ {ϱ}, E), where ϱ is a distinguished vertex (not belonging to V). Let X denote a sample path of the simple random walk on G; i.e., a discrete-time Markov chain on V ∪ {ϱ} with the transition probabilities

P(u, v) := (1/deg(u)) 1_{{u,v} ∈ E},   (1.1)

where deg(u) is the degree of u. As usual, we will write P^u to denote the law of X subject to the initial condition P^u(X_0 = u) = 1. Given a path X of the chain, the local time at v ∈ V ∪ {ϱ} at time n is then given by

L_n(v) := (1/deg(v)) ∑_{k=0}^{n} 1_{{X_k = v}}.   (1.2)

Our aim is to observe the Markov chain at times when most, or even all, of the vertices have already been visited. This requires looking at the chain at times (at least) proportional to the total degree deg(V) := ∑_{v ∈ V ∪ {ϱ}} deg(v). To simplify our later notations, we thus abbreviate, for any t > 0,

L^V_t(v) := L_{⌊t deg(V)⌋}(v).   (1.3)

In this parametrization, we have L^V_t(v) = t + o(t) with high probability as t → ∞. Our derivations will make heavy use of the connection between the above Markov chain and an instance of the Discrete Gaussian Free Field (DGFF). Denoting by

H_v := inf{n ≥ 0 : X_n = v}   (1.4)

the first hitting time of vertex v, this DGFF is the centered Gaussian process {h^V_v : v ∈ V} with covariances given by

E(h^V_u h^V_v) = G^V(u, v),   (1.5)

where E is the expectation with respect to the law of h^V and G^V(u, v) := (1/deg(v)) E^u[∑_{n=0}^{H_ϱ − 1} 1_{{X_n = v}}] is the Green function of the walk killed upon hitting ϱ. The field naturally extends to ϱ by h^V_ϱ := 0. We will apply the above to V ranging through a sequence of lattice approximations of a well-behaved continuum domain. The following definitions are taken from [7]:

Definition 1.1 An admissible domain is a bounded open subset of R^2 that consists of a finite number of connected components and whose boundary is composed of a finite number of connected sets, each of which has positive Euclidean diameter.
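To illustrate the setup just described, the following self-contained sketch (our own construction, not from the paper; the 5×5 grid standing in for D_N, the seed, and the walk length are arbitrary choices) simulates the walk on D_N ∪ {ϱ} with the re-entrance mechanism and records the degree-normalized local time of (1.2). Since every step deposits exactly one visit, summing deg(v)·L_n(v) over all vertices must recover the elapsed time n+1, which is the bookkeeping behind normalizing time by deg(V) in (1.3).

```python
# Illustrative sketch: SRW on a small grid with edges leaving the grid
# fused into a single boundary vertex 'rho'; local time is normalized by
# the degree, ell_n(v) = (1/deg(v)) * #{k <= n : X_k = v}.
import random

def build_graph(n):
    """Adjacency lists for the n x n grid; edges leaving the grid end in 'rho'."""
    adj = {}
    for i in range(n):
        for j in range(n):
            nbrs = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = i + di, j + dj
                nbrs.append((x, y) if 0 <= x < n and 0 <= y < n else 'rho')
            adj[(i, j)] = nbrs
    # 'rho' keeps one endpoint per fused edge, so re-entry is uniform
    # over the boundary edges (corners appear twice, as they should)
    adj['rho'] = [v for v, nbrs in adj.items() for u in nbrs if u == 'rho']
    return adj

random.seed(0)
adj = build_graph(5)
deg = {v: len(nbrs) for v, nbrs in adj.items()}
visits = {v: 0 for v in adj}
X = 'rho'
steps = 20000
visits[X] += 1                      # count X_0
for _ in range(steps):
    X = random.choice(adj[X])
    visits[X] += 1
ell = {v: visits[v] / deg[v] for v in adj}   # degree-normalized local time

# deterministic identity: sum_v deg(v) * ell_n(v) = n + 1
assert abs(sum(deg[v] * ell[v] for v in adj) - (steps + 1)) < 1e-6
```

The identity checked at the end holds path-by-path and explains why running the walk for t·deg(V) steps makes the local time at a typical vertex comparable to t.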
We will write D to denote the family of all admissible domains and let d_∞(·,·) denote the ℓ^∞-distance on R^2. The lattice domains are then assumed to obey:

Definition 1.2 An admissible lattice approximation of D ∈ D is a sequence {D_N}_{N≥1} of sets D_N ⊂ Z^2 such that the following holds: There is N_0 ∈ N such that for all N ≥ N_0 we have

D_N ⊆ {x ∈ Z^2 : d_∞(x/N, D^c) > 1/N}   (1.6)

and, for any δ > 0, there is also N_1 ∈ N such that for all N ≥ N_1,

D_N ⊇ {x ∈ Z^2 : d_∞(x/N, D^c) > δ}.   (1.7)

As shown in [7, Appendix A], the conditions (1.6-1.7) ensure that the discrete harmonic measure on D_N tends, under scaling of space by N, weakly to the harmonic measure on D. This yields a precise asymptotic expansion of the associated Green function; see [4, Chapter 1]. In particular, we have

G^{D_N}(x, x) = g log N + O(1)  with  g := 1/(2π)   (1.8)

whenever x is deep inside D_N. (This is by a factor 4 smaller than the corresponding constant in [4, 7] due to a different normalization of the Green function.)
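The logarithmic growth in (1.8) can be sanity-checked numerically. The sketch below (our own construction, not from the paper; box sizes and iteration counts are arbitrary) computes the degree-normalized Green function at the center of an n × n box by iterating u ↦ Pu + δ, and compares the growth in n against g = 1/(2π). Using two boxes of the same shape makes the O(1) terms cancel in the difference.

```python
import math

def green_diag(n, iters=1500):
    """G(x,x) = E_x[# visits to x]/4 at the center of an n x n box, for SRW
    on Z^2 killed upon exiting the box; computed by iterating u <- P u + delta,
    whose fixed point is the Green function (power-series summation)."""
    c = n // 2
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                s = 0.0
                if i > 0: s += u[i - 1][j]
                if i < n - 1: s += u[i + 1][j]
                if j > 0: s += u[i][j - 1]
                if j < n - 1: s += u[i][j + 1]
                new[i][j] = s / 4.0
        new[c][c] += 1.0          # the delta at the center
        u = new
    return u[c][c] / 4.0          # divide by deg = 4 (paper's normalization)

# G(x,x) = g*log N + O(1): same shape at two sizes, so the O(1) cancels
g_est = (green_diag(25) - green_diag(13)) / math.log(25.0 / 13.0)
assert abs(g_est - 1.0 / (2.0 * math.pi)) < 0.03
```

The estimate typically lands within a few percent of 1/(2π) ≈ 0.159 already at these small box sizes.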

MAIN RESULTS
Let us move on to our main results. We pick an admissible domain D ∈ D and a sequence {D_N}_{N≥1} of admissible lattice approximations and consider these fixed throughout the rest of the derivations.

Setting the scales.
We begin by setting the scales for the time for which the random walk is observed and determining the range of values taken by the local time:

Theorem 2.1 Let {t_N}_{N≥1} be a positive sequence such that, for some θ > 0,

lim_{N→∞} t_N / (log N)^2 = 2gθ.   (2.1)

Then for any choices of x_N ∈ D_N, the following limits hold in P^{x_N}-probability:

lim_{N→∞} max_{x∈D_N} L^{D_N}_{t_N}(x) / (log N)^2 = 2g(√θ + 1)^2   (2.2)

and

lim_{N→∞} min_{x∈D_N} L^{D_N}_{t_N}(x) / (log N)^2 = 2g[(√θ − 1) ∨ 0]^2.   (2.3)

The conclusion (2.3) indicates (and our later results on avoided points prove) that the choice θ := 1 identifies the leading order of the cover time of D_N, defined as the first time that every vertex of the graph has been visited. The cover time is random but it is typically concentrated (more precisely, whenever the maximal hitting time is much smaller than the expected cover time; see Aldous [3]). The scaling (2.1) thus corresponds to the walk run for a θ-multiple of the cover time.
As it turns out, under (2.1), the asymptotic value [2gθ + o(1)](log N)^2 marks the value of L^{D_N}_{t_N} at all but a vanishing fraction of the vertices in D_N. In light of (2.2-2.3), this suggests that we call x ∈ D_N a λ-thick point if

L^{D_N}_{t_N}(x) ≥ 2g(√θ + λ)^2 (log N)^2  (for λ ∈ [0, 1])   (2.4)

and a λ-thin point if

L^{D_N}_{t_N}(x) ≤ 2g(√θ − λ)^2 (log N)^2  (for λ ∈ [0, √θ)).   (2.5)

One of our goals is to describe the scaling limit of the sets of thick and thin points. This is best done via random measures of the form

ζ^{D_N} := (1/W_N) ∑_{x∈D_N} δ_{x/N} ⊗ δ_{(L^{D_N}_{t_N}(x) − a_N)/√(2a_N)},   (2.6)

where a_N is a sequence with the same asymptotic growth as the right-hand side of (2.4-2.5) and W_N is a normalizing sequence. The specific choice of the normalization by √(2a_N) reflects the natural fluctuations of L^{D_N}_{t_N}(x) (which turn out to be of order log N even between nearest neighbors) and best captures the connection to the corresponding object for the DGFF, to be discussed next.
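For concreteness, here is a small numeric sketch of the scales involved, assuming the time scale of (2.1) and reading the thick/thin thresholds as 2g(√θ ± λ)^2 (log N)^2 per (2.4-2.5); all parameter values below are arbitrary choices of ours.

```python
import math

g = 1.0 / (2.0 * math.pi)

def scales(N, theta, lam):
    """Leading-order time scale t_N from (2.1) and the thick/thin local-time
    thresholds read as 2*g*(sqrt(theta) +/- lam)^2 * (log N)^2 per (2.4-2.5)."""
    L2 = math.log(N) ** 2
    t_N = 2.0 * g * theta * L2
    thick = 2.0 * g * (math.sqrt(theta) + lam) ** 2 * L2
    thin = 2.0 * g * (math.sqrt(theta) - lam) ** 2 * L2
    return t_N, thick, thin

t_N, thick, thin = scales(10 ** 4, 1.0, 0.5)
# the typical local-time value 2*g*theta*(log N)^2 sits between the thresholds
assert thin < t_N < thick
```

At λ = 0 both thresholds collapse onto the typical value, consistent with thick and thin points being deviations above and below it.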

Level sets of zero-average DGFF.
Recall that h^{D_N} denotes a sample of the DGFF in D_N. As shown by Bolthausen, Deuschel and Giacomin [10], the maximum of h^{D_N} is asymptotic to 2√g log N and so the λ-thick points are naturally defined as those where the field exceeds 2λ√g log N. Allowing for sub-leading corrections, these are best captured by the random measure

η^{D_N} := (1/K_N) ∑_{x∈D_N} δ_{x/N} ⊗ δ_{h^{D_N}_x − â_N},   (2.7)

where {â_N} is a centering sequence with the asymptotics â_N ∼ 2λ√g log N and

K_N := (N^2/√(log N)) e^{−â_N^2 / (2g log N)}.   (2.8)

In [9, Theorem 2.1] it was shown that for each λ ∈ (0, 1), there is a constant c(λ) > 0 (independent of D or the approximating sequence {D_N}_{N≥1}) such that, relative to the topology of vague convergence of measures on D × (R ∪ {+∞}),

η^{D_N} → c(λ) Z^D_λ(dx) ⊗ e^{−αλh} dh,   (2.9)

where

α := 2/√g   (2.10)

and where Z^D_λ is a random a.s.-finite Borel measure on D called the Liouville Quantum Gravity (LQG) measure at parameter λ-times critical. The measure Z^D_λ is normalized so that, for each Borel set A ⊆ D,

E[Z^D_λ(A)] = ∫_A r_D(x)^{2λ^2} dx,   (2.11)

where r_D is an explicit bounded, continuous function supported on D that, for D simply connected, is the conformal radius; see [9, (2.10)].
As was shown in [1], the measures {Z^D_λ : λ ∈ (0, 1)} are quite relevant for the exceptional level sets associated with the continuous-time random walk in the parametrization by the local time spent at the "boundary vertex" ϱ. Somewhat different measures will arise for the discrete-time random walk. Let Π^D(x, ·) be the harmonic measure in D defined, e.g., as the exit distribution from D of a Brownian motion started at x. The continuum Green function in D with Dirichlet boundary condition is then given by

G^D(x, y) := −g log |x − y| + g ∫_{∂D} Π^D(x, dz) log |y − z|.   (2.12)

Writing Leb for the Lebesgue measure on R^2, let d : R^2 → R be defined by

d(x) := Leb(D) ( ∫_D dy G^D(x, y) ) / ( ∫_{D×D} dz dy G^D(z, y) ).   (2.13)

As is readily checked, d is bounded and continuous, vanishes outside D and integrates to Leb(D) over D. (We also have d ≥ 0 because G^D ≥ 0, and the Laplacian of d is constant on D, but that is of no consequence in the sequel.) See Fig. 2. We claim:

Theorem 2.2 For each λ ∈ (0, 1) and each D ∈ D, there is a unique random measure Z^{D,0}_λ on D such that, for any sequence {D_N}_{N≥1} of admissible approximations of D and any centering sequence {â_N}_{N≥1} satisfying â_N ∼ 2λ√g log N as N → ∞, the analogue of (2.9) for the DGFF conditioned on zero average over D_N holds with Z^D_λ replaced by Z^{D,0}_λ,   (2.14)

where c(λ) is as in (2.9). Moreover, if Y is a normal random variable with mean zero and variance

σ^2_D := Leb(D)^{−2} ∫_{D×D} dx dy G^D(x, y),   (2.15)

then the measure from (2.9-2.11) obeys

Z^D_λ(dx)  law=  e^{αλ d(x) Y} Z^{D,0}_λ(dx)  with  Y ⊥⊥ Z^{D,0}_λ.   (2.16)

The law of Z^{D,0}_λ is determined uniquely by (2.16). The existence of a random measure Z^{D,0}_λ satisfying (2.16) is part of the proof of (2.14). The uniqueness of the decomposition (2.16) holds quite generally and constitutes the main technical ingredient of the proof; see Theorem 3.1, which is of independent interest. The known properties of Z^D_λ (see [9, Theorem 2.3]) imply that Z^{D,0}_λ is a.s.-finite and charges every non-empty open subset of D a.s.

Exceptional local-time sets.
We are now well equipped to state our results concerning the limits of the random measures (2.6) for a given centering sequence {a_N}_{N≥1} growing as the right-hand sides of (2.4-2.5), with the normalizing sequence W_N given by (2.17). For the thick points we then get:

Theorem 2.3 (Thick points) Suppose {t_N}_{N≥1} and {a_N}_{N≥1} are positive sequences such that, for some θ > 0 and some λ ∈ (0, 1), (2.1) and (2.18) hold true. Then for any x_N ∈ D_N and for X sampled from P^{x_N}, the measures ζ^{D_N} in (2.6) with W_N as in (2.17) converge as stated in (2.19), in the sense of vague convergence of measures on D × (R ∪ {+∞}), where Y = N(0, σ^2_D) and Z^{D,0}_λ are independent and c(λ) is as in (2.9).
For the thin points, we similarly obtain:

Theorem 2.4 (Thin points) Suppose {t_N}_{N≥1} and {a_N}_{N≥1} are positive sequences such that, for some θ > 0 and some λ ∈ (0, √θ ∧ 1), (2.1) and (2.20) hold true. Then for any x_N ∈ D_N and for X sampled from P^{x_N}, the measures ζ^{D_N} in (2.6) with W_N as in (2.17) obey the convergence (2.21), where Y = N(0, σ^2_D) and Z^{D,0}_λ are independent and c(λ) is as in (2.9).
The limiting spatial distribution of the λ-thick and λ-thin points (as well as the distribution of the total number of these points) is governed by the measure in (2.22). In light of (2.16), this measure sits somewhere between the zero-average LQG Z^{D,0}_λ and the "ordinary" LQG Z^D_λ, which appeared in the limit for the parametrization by the local time at ϱ. The second component of the measure on the right of (2.19) and (2.21) is exactly as that for the DGFF (2.9). This is due to the judicious scaling of the second component of ζ^{D_N} by √(2a_N), rather than just log N as was done in [1].
Apart from the thick and thin points, [1] also studied the sets of points where the local time is of order unity, called the light points, and the points where the local time vanishes, called the avoided points. In both cases, the LQG measure that appears is for parameter λ := √θ (and θ ∈ (0, 1)). The control extends to the discrete-time problem parametrized by the total time as well. We start with the light points:

Theorem 2.5 (Light points) Suppose {t_N}_{N≥1} is a positive sequence such that (2.1) holds for some θ ∈ (0, 1). For any x_N ∈ D_N and for X sampled from P^{x_N}, consider the measure in (2.23) with Ŵ_N as in (2.24). Then, in the sense of vague convergence of measures on D × [0, ∞), the convergence (2.25) holds, where c(λ) is as in (2.9), Y = N(0, σ^2_D) and Z^{D,0}_{√θ} are independent, and µ := ∑_{n≥0} q_n δ_{n/4} for a sequence {q_n : n ≥ 0} of non-negative numbers determined uniquely by (2.26), jointly for all n ≥ 0.

That µ is supported on (1/4)N_0 reflects the fact that the degree-normalized local time at a vertex of degree 4 takes values in (1/4)N_0. Noting that q_0 = 1, straightforward limit considerations show:

Theorem 2.6 (Avoided points) Suppose {t_N}_{N≥1} is a sequence such that (2.1) holds for some θ ∈ (0, 1). For any x_N ∈ D_N and for X sampled from P^{x_N}, consider the measure in (2.27), where Ŵ_N is as in (2.24). Then, in the sense of vague convergence of measures on D, the convergence (2.28) holds, where Y = N(0, σ^2_D) and Z^{D,0}_{√θ} are independent and c(λ) is as in (2.9).
The above theorems will be deduced from the corresponding statements for a continuous-time variant of X observed for a fixed time of order N^2(log N)^2 (see Propositions 5.5, 5.9, 5.10 and 5.11). These statements are nearly identical to Theorems 2.3-2.6 above, respectively, except for the term e^{−α^2λ^2/16} in (2.19) and (2.21), which arises from the fluctuations of the (continuous-time) local time at points where the discrete-time local time is large, and the measure µ in (2.25), which gets replaced (in Proposition 5.10) by a continuous, and quite explicit, counterpart.
The fixed-time results for the continuous-time random walk will be inferred from the corresponding results in [1] for the parametrization by the local time at ϱ. The main difference is that the measure (2.22) gets replaced by the "pure" LQG measure Z^D_λ.

Local structure.
Similarly as in [1], we are also able to control the local structure of the above exceptional sets. For the thick and thin points, this is achieved by considering the measures in (2.31), defined on D × (R ∪ {+∞}) × R^{Z^2}, where the third coordinate captures the "shape" of the local-time configuration near every exceptional point. In the parametrization by the local time at the boundary vertex, the asymptotic "law" of the third component in (2.31) turned out to be that of the pinned DGFF (i.e., the DGFF on Z^2 ∖ {0}) reduced by a multiple of the potential kernel a. Here we note that, in our normalization, a is the unique non-negative function on Z^2 that is discrete harmonic on Z^2 ∖ {0} and obeys a(0) = 0 and a(x) = g log |x| + O(1) as |x| → ∞. The pinned DGFF φ then has the covariance structure

Cov(φ_x, φ_y) = a(x) + a(y) − a(x − y).   (2.32)

As it turns out, a different (albeit closely related) Gaussian process arises for the discrete-time walk parametrized by its total time: Theorem 2.7 states that the measures in (2.31) converge to ζ^D ⊗ ν_λ, where ζ^D is the measure on the right of (2.19) and ν_λ is the law of φ̂ + αλa − (1/8)αλ 1_{{0}^c}, for φ̂ a centered Gaussian process on Z^2 with covariances given in (2.34). The same statement (relative to the vague topology on D × (R ∪ {−∞}) × R^{Z^2}) holds for the setting of Theorem 2.4, except that ν_λ is then the law of φ̂ − αλa with the analogous correction at the origin. To demonstrate that φ̂ is indeed closely related to the pinned DGFF φ, we note that φ̂ differs from φ by an independent field of i.i.d. normals. We will verify this relation, along with the fact that (2.34) is positive semidefinite and thus the covariance of a Gaussian process, in Lemma 8.4. The i.i.d. normals appear during a conversion from the continuous-time walk to its discrete-time counterpart. They represent the scaling limit of the fluctuations of the local time due to the random (i.i.d. exponential) nature of the jump times.
We will also address the local-time structure in the vicinity of the avoided points. This is done by considering the natural measure on D × [0, ∞)^{Z^2} recording the local-time configuration around each avoided point. For reasons explained earlier, this measure is concentrated on D × ((1/4)N_0)^{Z^2}. Recall from [1, Theorem 2.8] that, for the continuous-time random walk parametrized by the local time at the boundary vertex and observed at the time corresponding to a θ-multiple of the cover time, the limit distribution of the local configuration is described by the law ν^{RI}_θ of the occupation-time field of random interlacements at level u := πθ. This measure was constructed by Rodriguez [22, Theorems 3.3 and 4.2] (see [1, Section 2.6] for a summary of the construction). For the discrete-time random walk parametrized by its total time we get a discrete-time counterpart of ν^{RI}_θ:

Theorem 2.8 (Local structure of avoided points) For each u > 0, there is a unique Borel measure ν^{RI,dis}_u on [0, ∞)^{Z^2} that is supported on ((1/4)N_0)^{Z^2} and obeys the stated characterization. For the setting and under the conditions of Theorem 2.6, for each θ ∈ (0, 1) we then have convergence of the above measure to the corresponding product with ν^{RI,dis}, where κ^D is the measure on the right of (2.30).
Similarly as in [1], we will not attempt to make statements concerning the local structure of the light points as that would require developing the corresponding extension of the above occupation-time measure to the situation when the local time at the origin does not vanish.

Remarks.
We proceed with a couple of remarks. First note that, along with (2.3) and the fact that Z^{D,0}_{√θ} is supported on all of D a.s., Theorem 2.6 implies that the cover time is indeed marked by the choice θ := 1. Second, note that an explicit formula for q_n can be extracted from (2.26). This is achieved using the identity

∑_{n≥1} t^n / (n!(n−1)!) = √t I_1(2√t),

where I_1(z) := ∑_{n≥0} (1/(n!(n+1)!)) (z/2)^{2n+1} is a modified Bessel function. Expanding e^t and (1/√t) I_1(2√t) into power series in t and scaling t by (1 + s/4) then readily yields an explicit formula for q_n for each n ≥ 0. See also (4.40) for the corresponding formulas in continuous time. Third, as we will see in the proofs, the random variable Y in the measure (2.22) represents the limit of normalized fluctuations of the local time at the boundary vertex for the first t_N deg(D_N) steps of the random walk (see Lemma 4.2). A key point is that this becomes statistically independent of the level-set statistics in the limit. Incidentally, through (2.28), the total mass of the measure (2.22) describes the limit law of the normalized total number of uncovered vertices at times proportional to a λ^2-multiple of the cover time.
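The series defining I_1 above can be sanity-checked against the classical integral representation I_1(z) = (1/π) ∫_0^π e^{z cos t} cos t dt (a standard Bessel-function identity, not taken from the paper); a minimal sketch:

```python
import math

def I1_series(z, terms=40):
    """Modified Bessel I_1 via its series: sum_{n>=0} (z/2)^(2n+1)/(n!(n+1)!)."""
    return sum((z / 2.0) ** (2 * n + 1) / (math.factorial(n) * math.factorial(n + 1))
               for n in range(terms))

def I1_integral(z, steps=200000):
    """Classical representation: I_1(z) = (1/pi) * int_0^pi e^{z cos t} cos t dt,
    evaluated by the midpoint rule."""
    h = math.pi / steps
    s = 0.0
    for k in range(steps):
        cu = math.cos((k + 0.5) * h)
        s += math.exp(z * cu) * cu
    return s * h / math.pi

# known reference value I_1(1) = 0.56515910...
assert abs(I1_series(1.0) - 0.56515910) < 1e-7
for z in (0.5, 1.0, 2.0, 5.0):
    assert abs(I1_series(z) - I1_integral(z)) < 1e-5
```

Both evaluations agree to high accuracy on moderate arguments, confirming the series used in the remark above.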
Fourth, the reader may wonder why we had to include the degree of ϱ in the normalization of the local time (1.3) by deg(V). This is because, although deg(ϱ) = o(|D_N|) under (1.6-1.7) (see Lemma 5.8), the ratio deg(ϱ)/|D_N| may exceed 1/log N under (1.6-1.7), in which case removing deg(ϱ) from the normalization would change the scaling of the normalization constants W_N and Ŵ_N with N.
Fifth, as in [1], the above statements deliberately avoid various boundary values of the parameters; i.e., λ = 1 for the thick points, λ = √θ ∧ 1 for the thin points and θ = 1 for the light and avoided points. All of these are closely related to the statistics of nearly-maximal DGFF values, which is a regime different from the one described in Theorem 2.2. While the nearly-maximal DGFF values are now well understood thanks to the work of the second author with O. Louidor [6-8] and with S. Gufler and O. Louidor [5], the recent work of Cortines, Louidor and Saglietti [11] shows that the connection between the avoided points at θ = 1 (i.e., the time scale of the cover time) and the DGFF extrema is considerably more subtle.
Sixth, a natural setting for the above problem is the random walk on the lattice torus (Z/NZ)^2 started from any given vertex. As our work in progress shows [2], the scaling of the corresponding measures is then more complicated; in particular, the scaling sequences W_N and Ŵ_N have to be taken random. This is related to the fact that, for random walks of time-length of order N^2(log N)^2, the local time at the starting point of the walk exhibits fluctuations of order (log N)^{3/2} on the torus, while these are only of order log N at the boundary vertex in our planar domains.
Seventh, we note the recent preprints of Jego [16, 17], where measures of the kind (2.6) associated with the thick points of planar Brownian motion, run until the first exit from a bounded domain, are shown to admit a non-trivial scaling limit that is identified with the limit of multiplicative-chaos measures associated with the square root of the local time. In [17] the limit measure is shown to obey a list of natural properties that characterize it uniquely. It remains to be seen whether the limit measure bears any connection to the Gaussian Free Field and/or Liouville Quantum Gravity.
Finally, we note that Dembo, Peres, Rosen and Zeitouni [13, 14] and Okada [19-21] analyzed the fractal nature and clustering of the sets of thick points and avoided points in the setting of a random walk killed on exit from D_N (for the thick points) and on the two-dimensional torus (for the avoided points). In particular, for 0 < β < 1, the growth exponents have been obtained for such sets, with a parameter s > 0, as well as for the sets where "min" and "max" are swapped, which amounts to changing from the behavior near a typical point of the level set to that near a typical point of D_N. These conclusions cannot be gleaned from our results because N^{−1+β} vanishes as N → ∞. Notwithstanding, the exponents obtained coincide with those for the DGFF thick points computed by Daviaud [12] and thus affirm the universality of the DGFF.

Outline.
The rest of this paper is organized as follows. In Section 3 we derive the scaling limit of the level sets of the zero-average DGFF. Section 4 extends the conclusions of [1] on the local time parametrized by its value at ϱ to include information on the fluctuations of the total time of the walk. This naturally feeds into Section 5, where we establish the scaling limit of the exceptional points for the local time of the continuous-time random walk in the parametrization by the total time. Section 6 then controls the effect of starting the walk at an arbitrary point. In Section 7 we prove our main theorems above concerning the discrete-time walk, except for the local behavior, which is deferred to Section 8.

ZERO AVERAGE DGFF LEVEL SETS
We are now ready to commence the proofs. As our first item of business, we address Theorem 2.2 on the level sets of the zero-average DGFF. Our strategy is to derive the statement from the unconditional convergence (2.9). This leads to a convolution identity whose resolution requires a uniqueness statement, Theorem 3.1, that pertains to the whole class of Gaussian Multiplicative Chaos measures. We remark that for the needs of the present paper it would suffice to treat the case when the sum in (3.1) consists of only one non-zero term. However, this still constitutes the bulk of the proof and so we include the more general case, as it is interesting in its own right. The result extends (with suitable modifications) even to the case when Φ is a generalized Gaussian field; the statement thus "reverse engineers" the base measure from the associated Gaussian Multiplicative Chaos. Our setting goes even somewhat beyond that of, e.g., Shamov [24], as we make no moment assumptions on the two chaos measures involved.
The proof of Theorem 3.1 hinges on the following technical observation:

Lemma 3.2 Let φ(·, 0) : R → R be bounded and continuous and, for Y = N(0, 1), set

φ(y, t) := E[φ(y + √t Y, 0)],  y ∈ R, t ≥ 0.

Then φ is continuous on its domain and smooth on the interior thereof. Moreover, φ satisfies the heat equation,

∂φ/∂t = (1/2) ∂^2φ/∂y^2.   (3.4)

Proof. The continuity of φ on R × [0, ∞) follows by the Bounded Convergence Theorem. Using that √t Y = N(0, t) and invoking Tonelli's Theorem, we may write φ(·, t) as the convolution of φ(·, 0) with the density of N(0, t). As y ↦ φ(y, 0) is bounded, φ is continuously differentiable on R × (0, ∞). Since the density of N(0, t) solves the heat equation (3.4), the Dominated Convergence Theorem ensures that so does φ.
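Lemma 3.2 can be probed numerically. The sketch below (our own; the quadrature grid, the test point and the bounded initial condition tanh are arbitrary choices) approximates φ(y, t) = E[φ(y + √t Y, 0)] by midpoint quadrature against the standard normal density and checks the heat equation ∂_t φ = (1/2) ∂^2_y φ by finite differences.

```python
import math

def phi0(y):
    # bounded, smooth initial condition (arbitrary choice for illustration)
    return math.tanh(y)

def phi(y, t, steps=8000, cutoff=8.0):
    """phi(y,t) = E[phi0(y + sqrt(t)*Y)] for Y ~ N(0,1), via midpoint
    quadrature of the Gaussian integral on [-cutoff, cutoff]."""
    h = 2.0 * cutoff / steps
    rt = math.sqrt(t)
    s = 0.0
    for k in range(steps):
        z = -cutoff + (k + 0.5) * h
        s += phi0(y + rt * z) * math.exp(-0.5 * z * z)
    return s * h / math.sqrt(2.0 * math.pi)

# finite-difference check of d_t phi = (1/2) d_yy phi at (y,t) = (0.3, 1.0)
y, t, d = 0.3, 1.0, 1e-3
dt = (phi(y, t + d) - phi(y, t - d)) / (2.0 * d)
dyy = (phi(y + d, t) - 2.0 * phi(y, t) + phi(y - d, t)) / (d * d)
assert abs(dt - 0.5 * dyy) < 1e-4
```

The residual is dominated by the quadrature and differencing errors; with the grid above it is several orders of magnitude below the tolerance.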
We are now ready to give:

Proof of Theorem 3.1. Let us first assume that Φ takes the form h(x)Y for some bounded measurable h : D → R and Y = N(0, 1) independent of the two chaos measures. Assume (3.6). Given any bounded and measurable f for which the relevant integrand is non-negative and measurable, from (3.6) we then obtain a convolution identity of the type appearing in Lemma 3.2. In light of Lemma 3.2, the difference of the two functions so obtained is a bounded solution to the heat equation. A key point is that the heat equation is known to exhibit backward uniqueness. More precisely, Seregin and Šverák [23, Theorem 4.1] implies that every bounded solution to (3.4) that vanishes at a given positive time vanishes everywhere. Since (3.7) implies that the difference vanishes at "time" t = 1, it vanishes identically. Since f was arbitrary, the claim thus holds for any Φ of the form h(·)Y.
To address the general case, we proceed as in Kahane [18] (see [4, Section 5.2] for a review). First note that by (3.1) we may decompose Φ into a sum in which (Y_0, ..., Y_n) are i.i.d. standard normals and Φ_n is an independent centered Gaussian field with the corresponding covariance. The argument for Φ of the form h(·)Y then shows, inductively, that (3.2) implies the analogous identity with Φ replaced by Φ_n. Letting f : D → [0, ∞) be measurable and supported in a compact set A ⊂ D, the assumption of locally-uniform convergence in (3.1) implies that, given ε > 0, there is n ∈ N such that Var(Φ_n(x)) ≤ ε for all x ∈ A. This also gives Cov(Φ_n(x), Φ_n(y)) ≤ ε for all x, y ∈ A, and so Kahane's convexity inequality along with Jensen's inequality yield the required comparison of Laplace transforms. Taking ε ↓ 0, and noting that this forces Φ_n → 0 in probability, then shows, with the help of the Bounded Convergence Theorem, that the Laplace transforms agree. By symmetry, equality must hold for all f as above and so the two chaos measures are equal in law, as desired.
Equipped with Theorem 3.1, we are ready to give:

Proof of Theorem 2.2. Abbreviate by Y_N the average of the DGFF h^{D_N} over D_N. Then Y_N is normal with mean zero and variance given in (3.19). The field h^{D_N} − Y_N d_N, with d_N the discrete counterpart of d, has zero average over D_N. Hence, defining the zero-average variant η^{D,0}_N of η^{D_N} by replacing h^{D_N} with this field, we may and will henceforth focus on the limit of η^{D,0}_N.
Next we note that we may realize (3.23) as an a.s. equality. This is because (3.23), combined with the fact that equality in law to a constant implies equality a.e., yields the corresponding identity for any measurable set. We conclude that the resulting measure is equidistributed with Z^D_λ. Replacing Z^D_λ by this measure then gives us equality a.s.
Once we have (3.23) as an a.s. equality, and Z^D_λ thus expressed as a measurable function of η^{D,0} and Y, we apply a routine change of variables to get Z^{D,0}_λ ⊥⊥ Y, which proves the existence of the decomposition (2.16). Since the decomposition is unique by Theorem 3.1 and the fact that d is bounded and continuous, the law of Z^{D,0}_λ does not depend on the subsequential limit η^{D,0}. It follows that all subsequential limits of {η^{D,0}_N : N ≥ 1} are equal in law, and so we get the convergence statement (2.14) as well.
Our use of Theorem 2.2 will invariably come through Corollary 3.3, which combines (2.14) with the convergence of the averages.

Proof. By (3.19), the fact that Y_N → Y in law, and the uniform convergence d_N → d, the stated convergence follows, where η^{D,0} is as in (3.26) and obeys Y ⊥⊥ η^{D,0}. Invoking (3.27), the claim follows by a routine change of variables.

AUGMENTED BOUNDARY VERTEX MEASURES
We will now move to the discussion of the local-time level sets. Our proofs build on the conclusions derived in [1] for the local time parametrized by its value at the boundary vertex ϱ. In order to transfer these conclusions to the setting of a fixed total time, we will need to control the fluctuations of the total local time at a fixed local time at ϱ. Our first step is thus to augment the results of [1] by information about these fluctuations. We will again introduce the corresponding quantities on a general finite connected graph with vertex set V ∪ {ϱ}. Consider a joint law of paths X of the discrete-time random walk on V ∪ {ϱ} and an independent sample t ↦ N(t) of a rate-1 Poisson process. The continuous-time walk is then defined as

X̃_t := X_{N(t)},  t ≥ 0.

The local time naturally associated with X̃ is given by

L̃_t(v) := (1/deg(v)) ∫_0^t 1_{{X̃_s = v}} ds.

Denoting τ̂(t) := inf{s ≥ 0 : L̃_s(ϱ) ≥ t}, the local time parametrized by its value at ϱ is defined as

L̂^V_t(v) := L̃_{τ̂(t)}(v).

Note that, in particular, we have L̂^V_t(ϱ) = t for all t ≥ 0. The same is true about the expected value at any vertex; i.e., E L̂^V_t(v) = t for all v ∈ V. At a given t ≥ 0, the total (continuous) local time of the walk is computed by adding deg(v) L̂^V_t(v) over all vertices; the quantity in (4.4) then denotes the normalized (empirical) fluctuation of the total local time. (Note that v = ϱ can be freely added to the sum as L̂^V_t(ϱ) = t.) To explain the specific choice of the normalization, we recall the following result from Eisenbaum, Kaspi, Marcus, Rosen and Shi [15] (with improvements by Zhai [25, Section 5.4]): there is a coupling of L̂^V_t and an independent DGFF h^V (4.5) with another DGFF h̃^V such that

L̂^V_t(v) + (1/2)(h^V_v)^2 = (1/2)(h̃^V_v + √(2t))^2,  v ∈ V.   (4.6)

Using the stated coupling, we readily compute the decomposition of the fluctuation in question. Note that the first term thereof is the average of the field h̃^V.
In what follows, the role of V will be taken by the sets D_N, with ϱ the "boundary vertex." We let h^{D_N} be the DGFF on D_N and, given a sequence {t_N}_{N≥1} and for the continuous-time random walk started at ϱ, let h̃^{D_N} be the DGFF such that (4.5-4.6) with t := t_N hold. We then set the quantities in (4.7) and denote T_N as in (4.8). We start by noting the convergence of these fluctuations (Lemma 4.2); in particular, T_N converges in law to Y = N(0, σ^2_D), where σ^2_D is as in (2.15).
We are now ready to state and prove convergence theorems for the processes associated with exceptional level sets of the boundary-vertex local time L^{D_N}_{t_N}, augmented by the information about T_N. Starting with the thick and thin points, given positive sequences {t_N}_{N≥1} and {a_N}_{N≥1}, define the measures as in (4.14), where W_N is as in (2.17). For the thick points of L^{D_N}_{t_N}, we then have:

Proposition 4.3 (Thick points) Suppose that {t_N}_{N≥1} and {a_N}_{N≥1} are such that (2.1) and (2.18) hold for some θ > 0 and λ ∈ (0, 1). Then for X sampled from P^ϱ, the stated convergence holds relative to the vague convergence of measures on D × (R ∪ {+∞}) × R.

We will rely heavily on the proof of [1, Theorem 2.2] but, due to a different normalization of the second coordinate in (4.14) and also the fact that the limit measure is different from that in [1], we need to recount the main steps of the proof. Throughout we will assume (for each N ≥ 1 and each t := t_N) a coupling of L^{D_N}_{t_N} and an independent DGFF h^{D_N} to a DGFF h̃^{D_N} satisfying (4.6).
The measures in (4.16), where we now normalize the third coordinate differently than in [1], obey the tightness bounds needed for subsequential convergence. Let η̃^{D_N} be the process (2.7) associated with the field h̃^{D_N} and a scale function that, by (2.1) and (2.18), obeys the required asymptotics, where o(1) → 0 as N → ∞ in probability. The calculation in the proof of [1, Lemma 5.4] (enabled by the fact that the field h^{D_N} will be typical at most points contributing to ζ^{D_N}, as shown in [1, Lemma 5.2]) then applies. Using Corollary 3.3 on the left-hand side of (4.20), from (4.21) and (4.18) and, one more time, [1, Lemma 5.2], we conclude that every subsequential limit ξ of the measures in (4.16) satisfies the convolution-type identity (4.23), jointly for all f ∈ C_c(D × R × R). It remains to "solve" (4.23) for ξ. First we note that the Monotone Convergence Theorem extends (4.23) to all f of the required form; a calculation then yields the corresponding identity. The identity (4.23) also implies that ⟨ξ^{A,b}, 1_{[0,∞)}⟩ < ∞ a.s. and gives (4.27), where the equality now holds pointwise a.s. because, once ⟨ξ^{A,b}, 1_{[0,∞)} ∗ e⟩ > 0 (which is necessary for the left-hand side to be non-zero), the ratio is equal in law, and thus pointwise, to the integral on the right. Denoting µ_λ(dh) := e^{−αλh} dh, a routine change of variables rewrites (4.27) as (4.28), where C is a random constant that is finite thanks to β > αλ. By [1, Lemma 5.5], there is at most one Borel measure ξ^{A,b} on R satisfying (4.28); it is, in fact, explicit and, by plugging it in (4.23), the integral in question is computed as the root of the associated equation. The claim follows.

We proceed with the corresponding result, Proposition 4.4, for the thin points.

Proof. The proof is very similar to that of Proposition 4.3, so we indicate only the needed changes. We will again rely on the coupling of L^{D_N}_{t_N} and two DGFFs h^{D_N} and h̃^{D_N} such that (4.5-4.6) hold for t := t_N. Let η̃^{D_N} denote the process associated with h̃^{D_N} and the centering sequence −â_N. The argument now proceeds very much like for the thick points.
We consider the extended measures (4.17), which are tight by [1, Corollary 4.8], and show, with the help of [1, Lemmas 6.1, 6.2] and (4.33), that every subsequential limit ξ thereof obeys (4.34), where f ∗ g is still defined via (4.24) but with the appropriately modified kernel. The identity (4.34) readily extends to all f of the required form, and the integral in question is again computed as the root of the associated equation. Next we move to the discussion of the light and avoided points. Starting with the light points, we define the relevant measures, with Ŵ_N as in (2.24). We then get the corresponding limit statement:

Proof. Assuming again the coupling from (4.5-4.6), we set the measures ξ_N accordingly. The family {ξ_N : N ≥ 1} is tight by [1, Corollary 4.6], and so we may consider a subsequential limit ξ thereof. By [1, Lemma 7.1], the extended measure converges to Leb along the same subsequence. We now pick a test function and observe the identity implied by (4.6). Writing this in terms of the above measures, Lemma 4.2 gives the corresponding convolution identity, where η^{D_N} is the DGFF process associated with the scale sequence â_N. By the Monotone Convergence Theorem, this extends to all f of the required form. Since the Laplace transform of a measure, if it exists, determines the measure uniquely, this proves that ξ takes the product form. A calculation shows that the measure (4.40) has this property.
A direct consequence of our control of the light points is: Proposition 4.6 (Avoided points) Suppose {t N } N≥1 is such that (2.1) holds for some θ ∈ (0, 1) and let Then, for the random walk distributed according to P ϱ , in the sense of vague convergence of measures on D × R, Proof. The proof of [1, Theorem 2.5] carries over essentially verbatim.

FIXED TOTAL TIME
Equipped with the enhanced limit results that include the limit value of suitably-normalized fluctuations of the total local time, we now proceed to derive from these the corresponding conclusions for a fixed total time. We keep working with the random walk started at the boundary vertex ϱ; general starting points will be dealt with in Section 6.

Time conversions.
The transition from a fixed local time at ϱ to a fixed total time is based on a simple inversion formula. Recall that, in our context, . Besides their approximate nature, any use of these identifications is complicated by the appearance of the random time t N for which we have no better formula than (5.2). We will thus base the time conversion on a slightly different (still random) quantity that will turn out to be better adapted to our needs.
Recall the definition of T N from (4.8). We note that this actually coincides with the value of T N (t N ), where (in accord with (4.4)) we set We then have: Then there exists a constant c 1 > 0 such that and thus, in particular, hold true with P ϱ -probability at least 1 − c 1 b −1 N . The proof will be split into several intermediate results, some of which will be useful later as well. The first item to note is the "stability" (or slow variation) of the fluctuation of the total local time: There exists a constant c 2 > 0 such that for all s, t ≥ 0 and all r > 0, Proof. Note that U N is a compensated compound Poisson process. In view of stationarity, it suffices to consider the case s = 0. Moreover, since U N is a martingale, Doob's maximal inequality is applicable and hence It suffices to show that Var P (U N (t)) is bounded by Ct for some C > 0. To this end, we note that t → (U N (t) + t) is a compound Poisson process with rate deg(ϱ) and jump size distributed as ∑ x∈D N ℓ(x)/|D N |, where ℓ(·) is the local time for a single excursion. Hence, (5.9) The last expectation can be computed via the Kac moment formula. The uniform bound G D N (x, y) ≤ g log(N/(|x−y|+1)) + c shows that the sum is at most a constant times |D N | 2 , uniformly in N ≥ 1.
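The only input the Doob step above needs is that the variance of a compensated compound Poisson process grows linearly in t, with slope equal to the jump rate times the second moment of the jump law. A minimal Monte Carlo sketch of this fact, with a stand-in rate and an Exponential(1) stand-in jump distribution rather than the excursion local times of the proof:

```python
import random

random.seed(7)

RATE, T = 3.0, 5.0   # stand-ins for the jump rate deg(rho) and the time horizon

def sample_U():
    """One sample of a compensated compound Poisson process at time T:
    Poisson(RATE*T) many jumps, here with Exp(1) jump sizes (mean 1)."""
    s, total = 0.0, 0.0
    while True:
        s += random.expovariate(RATE)   # next jump time
        if s > T:
            break
        total += random.expovariate(1.0)
    return total - RATE * T             # subtract the compensator RATE*T*E[jump]

samples = [sample_U() for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Var U(T) = RATE * T * E[jump^2] = 3 * 5 * 2 = 30 for Exp(1) jumps
assert abs(mean) < 0.2
assert abs(var - 30.0) < 2.0
```

In the proof the same computation, with the excursion jump law, gives the bound Var(U N (t)) ≤ Ct that feeds into Doob's maximal inequality.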
The next lemma quantifies the difference between τ (t N ) and deg(D N )t N : be as in the statement of Proposition 5.1. Then there exists a constant c 3 > 0 such that . The proof is a straightforward application of Chebyshev's inequality together with some variance estimates. We begin by noting that τ (t N ) − deg(D N )t N is the first time to hit ϱ starting from the point X deg(D N )t N . Writing H ϱ for the first hitting time of ϱ, the Markov property tells As in the proof of the previous lemma, applying the Kac moment formula shows for some absolute constant c 4 > 0. (This also conforms to the knowledge that the length of a typical excursion on D N is comparable to the volume of D N .) Then by the Chebyshev inequality, where the last step follows from deg(D N ) = deg(ϱ) + 4|D N |. Also, by a computation similar to the previous proof, we get for some constant c 5 > 0. So again, by Chebyshev's inequality, (5.17) Combining (5.14), (5.16), and (5.17) we find that there exists a constant c 3 > 0, depending only on (t N ) N≥1 and (b N ) N≥1 , such that all of By the monotonicity of τ , these altogether imply (5.11) as required.
Next we will quantify the difference between t N and t • N : Lemma 5.4 Assume t N ≥ 1 and let (b N ) N≥1 be as in the statement of Proposition 5.1. Then there exists a constant c 6 > 0 such that holds with P -probability at least 1 − c 6 b −1 N .
Proof. We note that, by (4.3) and the fact that deg(x) = 4 for x ∈ D N , Rearranging the identity in terms of t, we get This will be used to prove the desired bound. Plugging t := t N , we notice that the right-hand side of (5.22) almost looks like the definition (5.4) of t • N , except that we need t N in place of τ (t N )/ deg(D N ) and U N (t N ) in place of U N (t N ). This amounts to estimating their respective differences, and this is where the previous lemmas come in handy.
First, we plug s : Combining this with Lemma 5.3, we can find c 7 > 0 such that both (5.11) and hold with P ϱ -probability at least 1 − c 7 b −1 N . Moreover, given (5.11) and (5.24), we also get Putting this together, we get Although this bound is slightly larger than that appearing in the statement, we can repeat the entire argument above with {b N /2} N≥1 in place of {b N } N≥1 ; the desired claim then follows with c 6 = 2c 7 .
We are now ready to prove the main statement: Proof of Proposition 5.1. Let (b N ) N≥1 be as in the statement. Then by the definition of t N and Lemma 5. and are satisfied with P -probability at least 1 − O(b −1 N ). Then using (5.24) and repeating the argument as in the previous proof, we can bound (

Continuous-time exceptional level sets.
We are now ready to adapt the convergence theorems for the exceptional level-set measures for the boundary-vertex local times L D N to those associated with the local time L D N of the continuous-time walk X̃ run for a fixed time of order N 2 (log N) 2 . We begin with the thick points; the arguments will be readily adapted to the other families of exceptional points as well. Given two positive sequences {t N } N≥1 and {a N } N≥1 as before, define where W N is the same as in the case of ζ D N . Then Proposition 5.5 (Continuous-time thick points) Under the setting and notation of Theorem 2.3 and for the walk started at the "boundary vertex," we have We then have: There is a constant c 7 > 0 such that the following holds for all N ≥ 1: Proof. The bound (5.32) follows from Proposition 5.1, Lemma 5.2 and the fact that T N has asymptotically a Gaussian tail. To get (5.33), note that for |u| ≤ b N √ t N , is bounded, this is at most order b N /t 1/4 N . The argument to follow will be based on decomposing the event E N according to the values of T N . For this we fix an ε > 0, and let {ρ k } k∈Z be a family of continuous functions such that We also define two auxiliary time sequences {t + N,k } N≥1 and {t − N,k } N≥1 by We then have:

Lemma 5.7
For each M > 0 there is N 0 ∈ N such that for all N ≥ N 0 and all k ∈ Z with |k| ≤ M, the following holds on E N ∩ {T N ∈ supp(ρ k )}: The bound (5.37) is then implied by (5.33). For (5.38) we note that, on The bound (5.38) then follows from the inequalities in (5.31) and the monotonicity of t → L D N t (·). The inequalities (5.38) thus naturally make us consider the level-set measures ζ D N along different choices of time sequences than the base sequence {t N } N≥1 . We will explicate the dependence on the time sequence by writing ζ D N (s N ) whenever the measure is defined along a general sequence {s N } N≥1 rather than the base sequence {t N } N≥1 , and likewise, we will write W N (s N ) for the normalizing constants along {s N } N≥1 . Next we note: Lemma 5.8 We have deg(ϱ)/ deg(D N ) → 0 as N → ∞. In particular, for each k ∈ Z, With deg(ϱ)/ deg(D N ) → 0 settled, the asymptotic (5.41) is now checked readily from the definition of t ± N,k . The bounds in (5.42) follow similarly from the explicit formula for W N and some routine estimates.
We are now ready for: Proof of Proposition 5.5. Here the o(1) term is a deterministic sequence tending to zero uniformly in k ∈ Z with |k| ≤ M. Relying first on the lower bound of (5.44), we now estimate where in the last step we used (5.37). The key point is that, dropping the indicator of E N , the k-th term in the sum is now a continuous function of the process ζ D N (t − N,k ) and the time T N (t − N,k ). In light of (5.41), Proposition 4.3 gives with T ∼ N (0, σ 2 D ) independent of Z D,0 λ . Dropping the restriction to |k| ≤ M, the N → ∞ limes superior of the sum on the extreme right of (5.46) is then at most E(e −e −2αλ e −αλT ζ D , f ). Since osc M,ε (r) → 0 as r ↓ 0, taking N → ∞ followed by M → ∞ and ε ↓ 0 shows where the two "error" terms on the left-hand side of (5.46) tend to zero in the stated limits thanks to Lemma 5.6 and the Gaussian (asymptotic) tail of T N . The argument for a corresponding lower bound is very similar; we need to work with t + N,k instead of t − N,k and use explicit estimates to get rid of the indicator 1 E N and the restriction to the range of k in the sum. In conclusion, we get for any function f as above. This is sufficient to give ζ D N law −→ e −αλT ζ D , as desired.
For the thin points we now get: Proposition 5.9 (Continuous-time thin points) Under the setting and notation of Theorem 2.4 and for the walk started at the "boundary vertex," we have where T and Z D,0 λ are independent with T ∼ N (0, σ 2 D ).
Proof. The argument is similar to that for the thick points: We need to work with compactly-supported, continuous test functions f : D × (R ∪ {−∞}) → [0, ∞) that are non-increasing in the second coordinate. The change in monotonicity effectively swaps the inequalities in (5.44) and, due to a sign change in (5.42), also that in the exponent of e −αλT N (t ± N,k ) . We also need to rely on Proposition 4.4 instead of Proposition 4.3. We leave further details to the reader.
Moving to the light points, we define and state:

Proposition 5.10 (Continuous-time light points)
Under the setting and assumptions of Theorem 2.5 and for the walk started at the "boundary vertex," we have andμ is the measure in (4.40).
Proof. Relying on our convention concerning different time sequences, we start by noting The rest of the argument for the thick points (with Proposition 4.5 instead of Proposition 4.3) can now be applied to get The claim now follows by a density argument.
Finally, for the avoided points we set and state: Proposition 5.11 (Continuous-time avoided points) Under the setting and assumptions of Theorem 2.5 and for the walk started at the "boundary vertex," we have Proof. Given a continuous f : D → R, the identity (5.55) applies with ϑ D N , resp., ϑ D N replaced by κ D N , resp., κ D N . The argument then proceeds as for Proposition 5.10.

ARBITRARY STARTING POINTS
As our next item of business, we augment the continuous-time conclusions from the previous section to allow the random walk to start at an arbitrary point of D N . The formal statement is the content of: Theorem 6.1 (Arbitrary starting points) The statements of Propositions 5.5, 5.9, 5.10 and 5.11 apply for random walk starting from an arbitrary point x N ∈ D N .
We will start with the thick points as that is the hardest case. Assume that {a N } N≥1 and {t N } N≥1 satisfy the conditions of Proposition 5.5. The integrals of { ζ D N : N ≥ 1} from (5.29) against f ∈ C c (D × (R ∪ {+∞})) are tight random variables. Our strategy is to use the strong Markov property after the first hitting of the "boundary vertex." For this let us recall that H x denotes the first hitting time of vertex x and let θ t denote the shift on the path space acting as ( X • θ t ) s = X t+s . We will write {( L D N • θ t ) s : s ≥ 0} for the local time process associated with the time-shifted path {( X • θ t ) s : s ≥ 0}. Our first observation is then: In particular, under the conditions of Proposition 5.5, for any f ∈ C c (D × (R ∪ {+∞})) that is non-decreasing in the second variable and any x N ∈ D N , Proof. The relation (6.1) is a direct consequence of the additivity of the local time. As to (6.2), for f as above and any m > 0 with t N > m, dropping the term L D N H ϱ while noting The strong Markov property then gives Since the random walk on D N coincides with the random walk on Z 2 until time H ϱ , the Central Limit Theorem shows that the probability tends to zero in the limits N → ∞ and m → ∞. The expectation on the right converges by Proposition 5.5.
Our next goal is to prove a complementary bound to (6.2) for the limes inferior. For this we must control the effect of the first term on the right of (6.1). Writing {( L D N • θ t ) s : s ≥ 0} for the local time of the process X • θ t parametrized by the time spent at the boundary vertex ϱ, we then have: Proof. Let us for simplicity assume (e.g., by redefining a N ) that b = 0. The strong Markov property bounds the probability under the sum by (6.6) We start by estimating the second term. Denoting p := P z (Ĥ z < H ϱ ) where Ĥ z is the first return time to z, we have L D N H ϱ (z) law = 1 4 ∑ N i=1 τ i for N := Geometric(p) and τ 1 , τ 2 , . . . i.i.d. Exponential(1) independent of N. For any q ∈ (0, 1), the Chernoff bound gives (1−p) . (6.7) for all q ∈ (0, 1). Using (6.8) in conjunction with the uniform estimate G D N (z, z) ≤ g log N + c, we dominate the part of the sum in (6.6) for m satisfying (m In the complementary regime, we have a N − (m + 2)G D N (z, z) > t N which permits us to estimate the last term on the right of (6.6) via [1, Lemma 4.1] with the choices a := a N , t := t N and b := (m + 2)G D N (z, z) to get , 1) and proceed as follows: For , the prefactor is order log N W N /N 2 but, thanks to the uniform upper bound on G D N (z, z), the sum of the exponential terms decays polynomially with N. For m with (m + 1)G D N (z, z) ≤ 1 2 (a N − t N ), the prefactor is order W N /N 2 and the sum of the exponentials is bounded.
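The Chernoff step above controls the upper tail of a sum of i.i.d. Exponential(1) variables through the moment generating function E e^{q S n} = (1 − q)^{−n}, q ∈ (0, 1), giving P(S n ≥ a) ≤ e^{−qa}(1 − q)^{−n}. A quick numerical sanity check of this bound against the exact Gamma tail; the values of n, a and the tilt q below are illustrative, not the ones appearing in the proof:

```python
import math

def gamma_tail(n, a):
    """Exact P(S_n >= a) for S_n a sum of n i.i.d. Exp(1): e^{-a} sum_{k<n} a^k/k!."""
    return math.exp(-a) * sum(a ** k / math.factorial(k) for k in range(n))

def chernoff_bound(n, a, q):
    """E[e^{q S_n}] e^{-q a} = (1-q)^{-n} e^{-q a}, valid for 0 < q < 1."""
    return math.exp(-q * a) * (1.0 - q) ** (-n)

n, a = 10, 25.0
q = 1.0 - n / a                     # the tilt minimizing the bound when a > n
exact, bound = gamma_tail(n, a), chernoff_bound(n, a, q)
assert 0.0 < exact <= bound < 1.0   # the bound dominates the true tail
```

Optimizing over q (here q = 1 − n/a) is what produces the exponential decay in m exploited in the regime decomposition above.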
Combining the above estimates, the sum in (6.5) is bounded by a quantity of order Interpreting H ϱ as the first exit time of the simple random walk on Z 2 from D N , the sum on the right is non-decreasing in D N . We may thus assume that D N is a box of side length 2 n , for n = log 2 N + O(1), centered at x. For the probability under the sum we then get, for each k = 0, . . . , n − 1 and some constant c > 0, The sum in (6.10) is thus at most of order 1 + ∑ n k=0 ((n − k)/n) 2 2k which is of order N 2 / log N. The claim follows.
We are now ready to give: Proof of Theorem 6.1, thick points. Consider a non-negative f ∈ C c (D × (R ∪ {+∞})) that is non-decreasing in the second variable and supported in D × [b, ∞) for some b ∈ R. Note that {H ϱ < ∞} is a full probability event under P x . Decomposing the support of ζ D N according to whether the point was hit before hitting the boundary vertex or not, the monotonicity of t → L D N t and the assumed monotonicity of f yield Fix a sequence b N → ∞ such that b N /t 1/4 N → 0 and let F N be the event that the inequalities in (5.6) hold. Fix any m > 0 and ε > 0. Let G N be the event that the second term on the right of (6.12) is less than ε. Then As P x (H ϱ < ∞) = 1, the strong Markov property gives (6.14) Proposition 5.1 and the fact that {T N : N ≥ 1} is tight now ensure that the probability on the right tends to zero in the limits N → ∞ and m → ∞.
Concerning the probability on the right of (6.13), an inspection of (5.4) shows that, on By the Markov inequality, the probability in (6.13) is thus bounded by ε −1 ‖f‖ ∞ /W N (t N ) times the sum in Lemma 6.3 albeit with t N replaced by t N : is bounded by an m-dependent constant uniformly in N, the probability in (6.13) is thus O(1/ log N) uniformly in x ∈ D N . Taking N → ∞ followed by m → ∞ and ε ↓ 0 shows Combining with (6.2), we then get the desired claim.
The situation for the thin, light and avoided points is similar albeit simpler. Writing ξ D N for the corresponding continuous-time point measure (parametrized by the total time), as in Lemma 6.2, the identity (6.1) gives us an easy one-way bound, where the test function f is defined on D × (R ∪ {−∞}) for the thin points, D × [0, ∞) for the light points and D for the avoided points: Lemma 6.4 Under the conditions of Propositions 5.9, 5.10 and 5.11, for any x N ∈ D N and any continuous, compactly-supported, non-negative test function f on the corresponding domain that, for the thin and light points, is non-increasing in the second variable, where we now rely on the fact that t → W N (t), resp., t → W N (t) are non-increasing for t near t N . The inequalities (6.4) then become The claim now follows by taking N → ∞ followed by m → ∞.
In replacement of Lemma 6.3, we then need: Lemma 6.5 Under the conditions of Proposition 5.9, for each b ∈ R there is c > 0 such that for all N ≥ 1 and all x ∈ D N , Under the conditions of Propositions 5.10 and 5.11 the same holds with a N + b log N replaced by b ≥ 0 (including, for the avoided points, b = 0) and W N replaced by W N .
Proof. The strong Markov property and the estimates from [1, Corollary 4.8] bound the probability in (6.20) by P x (H z < H ϱ ) times and so the quantity in (6.20) is at most order W N N −2 ∑ z∈D N P x (H z < H ϱ ). The argument then concludes as in the proof of Lemma 6.3. For the light and avoided points, we instead invoke [1, Corollary 4.6] and proceed analogously.
With this we get: Proof of Theorem 6.1, thin, light and avoided points. We proceed similarly as for the thick points. First, writing a N := a N + b log N for the thin points and a N := b for the light and (with b := 0) avoided points, given a continuous, compactly-supported f that is non-increasing in the second variable, in all three cases of interest we have Let F N be the event from (5.6) with t N replaced by t N − m. Abusing our earlier notation, given > 0, let G N be the event that the second term (without the minus sign) is at most . From (6.22), we then get Thanks to the Central Limit Theorem, the tightness of {T N : N ≥ 1} and Proposition 5.1, the two probabilities on the left-hand side of (6.23) tend to zero in the limits N → ∞ and m → ∞, uniformly in x ∈ D N . For the probability on the right we observe that, on 2t N . Lemma 6.5 and the Markov inequality then bound the probability by an m-dependent constant times 1/ log N, uniformly in x ∈ D N . Combining these observations we thus get In conjunction with Lemma 6.4 this proves the claim.

DISCRETE TIME CONCLUSIONS
We will now move to the proof of our main results except those on the local structure which are deferred to Section 8. Considering, for a moment, a random walk on a general finite, connected graph on V ∪ {ϱ}, recall that the discrete-time local time L V t is parametrized by the total number of steps in units of deg(V) = ∑ u∈V∪{ϱ} deg(u) while its continuous-time counterpart L̃ V t is parametrized by the total time. Both of these are naturally realized on the same probability space through the definition (4.1) of X̃ via the discrete-time walk X and an independent rate-1 Poisson point process N(t). A key technical tool in what follows is the following lemma: Lemma 7.1 There is a family of i.i.d. exponentials {τ j (v) : j ≥ 1, v ∈ V} with parameter 1 independent of X (but not of N) such that holds P x -a.s. for each t ≥ 0 and each x ∈ V ∪ {ϱ}.
Proof. This is a consequence of the standard representation of the wait times of X by independent exponentials. (In this representation, the process N is a function of the exponentials and X, albeit independent of X.) Note that the equality (7.1) fails at X t because the walk is "in-between" jumps there.
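The bookkeeping behind Lemma 7.1 can be illustrated on a toy chain: attach an Exponential(1) holding time to every discrete visit, so the continuous-time local time at a vertex is exactly the sum of the waits attached to its visits. The two-state chain and deterministic alternation below are stand-ins, not the walk of the paper:

```python
import random

random.seed(3)

# A toy jump chain on two vertices; every discrete visit carries an Exp(1)
# holding time, as in the coupling of Lemma 7.1.
steps = 10_000
state, elapsed = 0, 0.0
local_time, visits = [0.0, 0.0], [0, 0]
for _ in range(steps):
    w = random.expovariate(1.0)   # holding time of the current visit
    local_time[state] += w
    visits[state] += 1
    elapsed += w
    state = 1 - state             # deterministic alternation stands in for the jump chain

# continuous local times are sums of exponential waits, so they add up to the total time
assert abs(sum(local_time) - elapsed) < 1e-9
# law of large numbers: continuous local time per discrete visit concentrates at E[Exp(1)] = 1
assert abs(local_time[0] / visits[0] - 1.0) < 0.06
```

The second assertion is the reason the two parametrizations agree to leading order; the fluctuations around it are exactly what Lemma 7.2 and the estimates of Section 5 quantify.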
Moving back to the random walk on D N ∪ {ϱ}, this readily yields: Then for any x N ∈ D N , Proof. The Central Limit Theorem ensures that ( N(t) − t)/ √ t tends in law to a standard normal as t → ∞. As t N = o(deg(D N )), the inequalities are satisfied with probability tending to one as N → ∞. Once (7.4) is in force, the monotonicity of t → L D N t and (7.1) show that the event F N (x) occurs at all x ∈ D N except perhaps at the position of X̃ at times (t N ± 1) deg(D N ).
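The normal fluctuation of the Poisson clock invoked in the proof can be checked exactly from the Poisson(t) probability mass function: already at a moderate t, the two-sided tail beyond b standard deviations is far below the Chebyshev bound 1/b² and close to its Gaussian value. The parameters t and b below are illustrative:

```python
import math

t, b = 100.0, 3.0

def poisson_pmf(k, lam):
    # computed in log space via lgamma for numerical stability
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

# P(|N(t) - t| > b*sqrt(t)) for N(t) ~ Poisson(t), summed exactly from the pmf
lo, hi = int(t - b * math.sqrt(t)), int(t + b * math.sqrt(t))
tail = 1.0 - sum(poisson_pmf(k, t) for k in range(lo, hi + 1))
assert tail < 1.0 / b ** 2   # Chebyshev, since Var N(t) = t
assert tail < 0.01           # already near the Gaussian value 2*Phi(-3) ~ 0.0027
```

This is the quantitative content of "( N(t) − t)/√t tends in law to a standard normal" used to justify (7.4).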
With these observations in hand, we are now ready to finally present the proofs of our main theorems. The easiest case is that of avoided points: Proof of Theorem 2.6. Note that, whenever F N (x) occurs, (7.5) As {t N ± 1} N≥1 have the same leading-order asymptotic as {t N } N≥1 , the random variables κ D N (t N ± 1), f have the same weak limit as κ D N , f . Since W N → ∞ and also the claim follows from Lemma 7.2, Proposition 5.11 and Theorem 6.1.
Next we tackle the light points: Proof of Theorem 2.5. Denote and consider the auxiliary point measure Thanks to Lemma 7.2, on the event ∑ x∈D N 1 F N (x) c ≤ 2, the inequality (7.5) holds for any non-negative f ∈ C c (D × [0, ∞)) that is non-increasing in the second variable and with κ D N , resp., κ D N replaced by ϑ D N , resp., ϑ D N . As, by Proposition 5.10 and Theorem 6.1, ϑ D N tends in law to the measure ϑ D on the right of (5.53), we have for any non-negative f ∈ C c (D × [0, ∞)).
Next we observe that, by the fact that for any ε > 0 and any random variable Y taking values in [0, ε], the fact that the random variables {τ j (x) : j ≥ 1, x ∈ D N } are independent of the random walk and independent for different x ∈ D N implies Markov's inequality then shows f * e 2M (x, n/4) ≥ 1 2 1 [0,M] (n/4) and, therefore, (7.16) The existence of the limit (7.14) then implies tightness of {ϑ D N (D × [0, M]) : N ≥ 1} for all M > 0, and thus tightness of {ϑ D N : N ≥ 1} as well. The tightness of {ϑ D N : N ≥ 1} permits us to extract a weak subsequential limit ϑ D along a (strictly) increasing sequence {N k : k ≥ 1} of naturals. This entails the convergence of ϑ D N k , f . We claim that we even have the convergence of ϑ D N k , f * e . (This is not automatic because f * e is not compactly supported in general.) First we note that straightforward comparisons with the Lebesgue measure show, for each M > 0, Writing ε n for the ratio of the two probabilities, for f supported in D × [0, M] we have | f * e | ≤ ‖f‖ ∞ f * e M and so, by (7.15), It follows that the part of the integral ϑ D N , f * e corresponding to the second coordinate in excess of n is at most ε n ‖f‖ ∞ times ϑ D N , f * e 2M , which is tight by (7.14). We can thus approximate f * e by a function supported in D × [0, n] and pass to the limit N → ∞ followed by n → ∞. This gives (7.17) as desired.
Combining (7.14) with (7.17) we arrive at the convolution identity We have proved this (including the absolute convergence of the integral on the left-hand side) for f ∈ C c (D × [0, ∞)) but the Monotone Convergence Theorem along with the fact that the second coordinate of ϑ D has subexponentially growing density extends this to Assuming f̄ > 0, the explicit form of the right-hand side shows that ϑ D , g s / ϑ D , g 1 is well-defined and equal to a non-random quantity, namely, the ratio of two Laplace transforms of μ̂. This turns (7.23) into the pointwise identity valid, a.s., for each s > 0 and (by elementary extensions) all f̄ ∈ C(D). Thanks to the monotonicity of both sides in s and almost-sure continuity in f̄ of both sides with respect to the supremum norm, the identity actually holds a.s. for all s > 0 and all f̄ ∈ C(D) simultaneously.
With (7.24) in hand, we are more or less done. Indeed, as the left-hand side is a generating function of the sequence { ϑ D,n , f̄ } n≥0 , which determines the sequence uniquely, all ϑ D,n , f̄ must be the same deterministic multiple of the quantity in the large parentheses on the right-hand side. This shows that ϑ D must be as on the right-hand side of (2.25) for some µ of the form µ = ∑ n≥0 q n δ n/4 where {q n } n≥0 is uniquely determined by ∑ n≥0 q n (1 + s/4) −n = ∫ ∞ 0 μ̂(dh) e −sh , s > 0. (7.25) The Laplace transform of μ̂ was calculated in the proof of Proposition 4.5. All subsequential limits of {ϑ D N : N ≥ 1} are thus equal in law and so convergence holds. Moving to the thick points, we first need a version of (7.18): (1), all k ∈ N and all reals s ≥ t ≥ 0, Proof. Since ∑ k j=1 τ j has density x k−1 e −x /(k − 1)!, the change of variables y := x + t gives Using that s ≥ t, the prefactor can be written as the exponential of (7.28) Noting that the right-hand side is no less than st/(k + s + t), we get the claim. A convolution identity that inevitably shows up in the proof also requires: Lemma 7.4 Suppose ν is a Borel measure on R such that, for some β ∈ R and some σ 2 > 0 and all f ∈ C c (R), Then ν(dh) = e −β 2 σ 2 /2+βh dh. (7.30) Proof. Consider the measure ν̃(dh) := e −βh+β 2 σ 2 /2 ν(dh). Absorbing the exponential term on the right of (7.29) into the test function, a calculation shows for all f ∈ C c (R). As C c (R) generates all Borel functions on R, we get This can be interpreted by saying that ν̃(dh) : (7.33) The right-hand side is the Laplace transform of N (βσ 2 , σ 2 ) and so, since the Laplace transform of a measure, if it exists, determines the measure uniquely, ν is the law of N (βσ 2 , σ 2 ). Hence ν̃ is the Lebesgue measure, thus proving the claim.
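The mechanism behind Lemma 7.4 is that convolving an exponential density e^{βh} dh with a centered Gaussian only multiplies it by the Gaussian moment generating factor e^{β²σ²/2}, which is exactly the constant appearing in (7.30). A small numerical check of that factor; β and σ below are arbitrary stand-ins for the values −αλ and 1/√8 used later:

```python
import math

beta, sigma = -0.8, 0.5   # arbitrary stand-ins for beta and sigma in Lemma 7.4

def gauss_pdf(s):
    """Density of N(0, sigma^2)."""
    return math.exp(-s * s / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# E[e^{beta Z}] for Z ~ N(0, sigma^2) equals e^{beta^2 sigma^2 / 2};
# here it is computed by midpoint quadrature over [-8, 8] (16 standard deviations).
n, lo, hi = 200_000, -8.0, 8.0
ds = (hi - lo) / n
mgf = sum(gauss_pdf(lo + (i + 0.5) * ds) * math.exp(beta * (lo + (i + 0.5) * ds))
          for i in range(n)) * ds
assert abs(mgf - math.exp(beta ** 2 * sigma ** 2 / 2.0)) < 1e-6
```

In the proof, this is why the tilted measure ν̃ absorbs the Gaussian convolution without changing shape, forcing it to be translation invariant.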
Proof of Theorem 2.3. The proof starts by adapting the argument leading to (7.14). Indeed, working again in the coupling of the random walk X and the i.i.d. exponentials where L D N t N (x) is the quantity from (7.7). Lemmas 7.1-7.2 along with Proposition 5.5, Theorem 6.1 and (7.10) then show for every f ∈ C c (D × (R ∪ {+∞})), where ζ D is the measure on the right of (5.30). Writing {τ j : j ≥ 1} for generic i.i.d. exponentials with parameter 1 and denoting, with some abuse of earlier notation, Assuming h ≥ 2M with M > 0 large, Markov's inequality along with E((τ j − 1) 2 ) = 1 then gives For M large, the right-hand side is at most 1/2 thus showing In particular, we may extract a weak subsequential limit ζ D .
We would like to use the existence of weak subsequential limits to pass to the limit N → ∞ inside the integral on the left-hand side of (7.38). For that we need to deal with the fact that the support of f N, * e extends to −∞ in the second variable. Pick any b > 0 and, for any h < −3b, invoke Lemma 7.3 with the choices s := 4 √ 2a N (−2b − h), t := 4 √ 2a N b and k as above to conclude that The prefactor decays to zero as h → −∞ uniformly in N ≥ 1 and so, plugging this into Taking M → ∞ after N → ∞ we then readily conclude that every subsequential weak limit ζ D of {ζ D N : N ≥ 1} satisfies the distributional identity ). This includes the fact that the integral on the left-hand side converges absolutely for all such f . We are now more or less done. Indeed, note that the explicit form of ζ D gives, for f ∈ C c (R) and A ⊆ D Borel with Leb(A) > 0, The right-hand side is non-random and so (7.48) becomes the pointwise equality for allf ∈ C c (R). This shows that, for any B ⊆ R Borel, where ν is a Borel measure on R that obeys (7.29) with β := −αλ and σ 2 := 1/8. Lemma 7.4 then gives ν(dh) = e −α 2 λ 2 /16−αλh dh and, since the first measure on the right of (7.51) has the law of the spatial part of ζ D , we get The claim follows.
Finally, we deal with the changes that are required for the thin points: Proof of Theorem 2.4. Following the proof of Theorem 2.3, the argument is exactly the same up to (7.38), except that now f ∈ C c (D × (R ∪ {−∞})). For the tightness, we then need to consider  (1), all k ∈ N and all s, t ≥ 0 with s + t < k, To use this, let b > 0 and invoke the choices s := (h − 2b)4 √ 2a N , t := 4b √ 2a N and k as above while noting that, for N large and h > 2b, we have s + t < k, to get This again permits us to truncate the tails and derive (7.48) for each f ∈ C c (D × (R ∪ {−∞})) and each weak subsequential limit ζ D of {ζ D N : N ≥ 1}. The rest of the proof of Theorem 2.3 can be followed literally leading to (7.52), as before.
It remains to give: Proof of Lemma 7.5. The explicit form of the density along with the substitution y := x + t again shows Since the random walk started at ϱ visits any given x N ∈ D N in time of order N 2 log N while the walk started at x N hits ϱ in time of order N 2 with high probability, shifting t N by ±(log N) 3/2 and invoking the monotonicity of t → L D N t extends [1, Theorem 2.1] to arbitrary starting points. The inequalities (7.4) then extend it to the discrete-time object L D N t N as well.

LOCAL STRUCTURE
The last items to be addressed are the proofs of Theorems 2.7 and 2.8, dealing with the local structure of the local time field near the thick/thin and avoided points, respectively. We will start with the former setting, as it is technically the most demanding.

Thick and thin points.
We will again carry the argument primarily for the thick points and only comment on the changes for the thin points. Assuming henceforth the setting and notation of Theorem 2.3, we start by converting the continuous-time results in the boundary-vertex parametrization to those parametrized by the total time. Then, given an x N ∈ D N for each N ≥ 1, under P x N , where ζ D is the measure on the right of (5.30) and ν λ is the law of φ + αλa, for φ the pinned DGFF; i.e., a centered Gaussian process on Z 2 with covariances (2.32).
The proof will rely heavily on the arguments and notation from Sections 5-7. Throughout, we fix a sequence {b N } N≥1 such that b N → ∞ and b N /t 1/4 N → 0. First we condense the ideas underlying Lemmas 5.6, 5.7 and 7.2 into: Lemma 8.2 Given ε > 0, let t ± N,k be the quantity from (5.36) but with b N replaced by 3b N . Abbreviate Then for each b ∈ R and any choice of x N ∈ D N for each N ≥ 1, Proof. The tightness of T N and H ϱ /|D N | allows us to effectively truncate the union in (8.2) to −M ≤ k ≤ M and assume H ϱ ≤ m deg(D N ). Recall the event F N (x) from (7.2) and note that on the event we have (8.5) at all but at most two x ∈ D N . Next set E + N := E N (t N + 1) and E − N := E N (t N − m − 1), where E N (t N ) is the event E N from (5.31) but for {t N } replaced by {t N }. Recall the notation (t N ) • for the quantity from (5.4). On θ −1 we then get an analogue of (5.40) of the form once N is sufficiently large (independent of k). Consequently, the inequalities apply on the same event as well. Lemma 7.2 shows that (8.8) holds at all but two x ∈ D N with P x N -probability tending to one as N → ∞. This proves the claim.
Lemma 8.2 eliminates the need to consider other starting points than ϱ. Next comes the main issue to be dealt with in the proof of Proposition 8.1: Since we are after differences of the local time, we cannot rely on monotonicity as we did earlier; instead we have to estimate the variation of t → L D N t over time intervals of length of order ε √ 2t N . This is the content of: Proof. The proof is based on tail estimates for the local time which will depend, somewhat sensitively, on a choice of a few parameters. Given δ > 0 let ε 0 > 0 and j 0 ∈ N be such that (8.10) holds and that, for all integers j ≥ j 0 , These choices can be made because (θ + λ) 2 − θ 2 > λ 2 and λ/ √ θ + λ < 1. Assume ε ∈ (0, ε 0 ] and abbreviate t N := t N − ε √ 2t N and a N := a N + b log N. Set M to the least integer such Using the Markov property of t → L D N t (x), the probability in (8.9) is bounded by We now use [1, Lemma 4.1] to bound the individual probabilities on the right-hand side as follows. First, noting that by our choice of M, grows proportionally to log N as N → ∞, [1, Lemma 4.1] may be used for the choices a := a N − j 0 ε √ 2t N , t := t N and b := 0. Noting that W N defined using a N − j 0 ε √ 2t N and t N instead of a N and t N is comparable with W N , the uniform upper bound on G D N (x, x) then bounds the very first probability in (8.12) by a quantity of order W N /N 2 . The Markov inequality shows and so the first term in (8.12) is order W N /N 2 (with a constant that depends on j 0 ). Next we move to the terms under the sum in (8.12). Here we use [1, Lemma 4.1] for the choices a := a N , t := t N and b := −j ε √ 2t N to get, for all j = j 0 , . . . , M + 1, for a constant c 3 ∈ (0, ∞) independent of, and o(1) → 0 uniformly in, N ≥ 1 and x ∈ D N . Using the definition of M, the right-hand side of (8.17) is order (1) which is o(W N /N 2 ) by W N = N 2(1−λ 2 )+o(1) and (8.10), uniformly in x ∈ D N . The claim follows by taking N → ∞, followed by ε ↓ 0 and j 0 → ∞.
We are ready to give: Proof of Proposition 8.1. We may assume that f depends only on coordinates {φ z : z ∈ Λ r (0)} for some r > 0 and vanishes unless |h| ≤ b and max z∈Λ r (0) |φ z | ≤ b, for some b > 0. Given ε > 0, let k ∈ Z be such that |T N • θ H ϱ − kε| < ε. Pick x ∈ D N and abbreviate Introducing the oscillation of f by is bounded in absolute value by the sum over z ∈ Λ r (x) of three terms: To simplify estimates, introduce the events Summarizing these estimates, and writing ζ D,loc N (t N ) for the measure in (2.31) except with L D N replaced by L D N and t N by t N , we thus get that, on Using Lemmas 8.2, 8.3 and 6.3, the first term on the right tends to zero in P x N -probability as N → ∞ and ε ↓ 0 for each δ > 0. The tightness of ζ D N measures (under P ϱ ) along with the uniform continuity of f ensure that the second term tends to zero in P x N -probability as N → ∞ and δ ↓ 0.
To finish the proof, note that by [1, Theorem 2.6] and the argument underlying Proposition 4.3, the family $\{\zeta^{D,\mathrm{loc}}_N : N\ge1\}$ is tight under $P$, and so we may consider subsequential distributional limits $\zeta^{D,\mathrm{loc}}$ of the latter. Using Proposition 8.1 in the argument from the proof of Theorem 2.3, we conclude that every such subsequential weak limit obeys the identity stated in the theorem.

Lemma 8.4
Proof. For each $x,y\in\mathbb Z^2$, let $C(x,y) := a(x)+a(y)-a(x-y)$. Then $C$ is symmetric and positive semidefinite and so there exists a centered Gaussian process $\{\tilde\phi_x : x\in\mathbb Z^2\}$ with covariance $C$. This process then satisfies (2.35).
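As a quick sanity check of the positive-semidefiniteness claim — under the assumption that the covariance reads $C(x,y)=a(x)+a(y)-a(x-y)$, and using the classical values of the potential kernel $a$ of the planar simple random walk ($a(0)=0$, $a(\pm e_i)=1$, $a(\pm1,\pm1)=4/\pi$) — one can verify numerically that $C$ restricted to a small point set has nonnegative eigenvalues. A minimal sketch (the point set is chosen for illustration only):

```python
import numpy as np

# Classical potential-kernel values for SRW on Z^2 near the origin:
# a(0,0) = 0, a(+-1,0) = a(0,+-1) = 1, a(+-1,+-1) = 4/pi.
A = {(0, 0): 0.0}
for e in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    A[e] = 1.0
for d in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    A[d] = 4.0 / np.pi

# Points chosen so that all pairwise differences stay within the table above.
pts = [(1, 0), (0, 1), (1, 1)]

def C(x, y):
    # Candidate covariance C(x,y) = a(x) + a(y) - a(x - y).
    diff = (x[0] - y[0], x[1] - y[1])
    return A[x] + A[y] - A[diff]

M = np.array([[C(x, y) for y in pts] for x in pts])
print(np.linalg.eigvalsh(M).min() >= -1e-12)  # prints True: C is PSD here
```

Note that $C(0,y)=a(y)+a(0)-a(y)=0$, so the field is pinned at the origin; this is why the origin is omitted from the point set above.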
Moving to the thin points, here we go directly for: Proof of Theorem 2.7, thin points. The proof is considerably simpler because, as a few times earlier, certain key inequalities go in a more favorable direction. Following the argument and the notation from the proof for the thick points, we derive an analogue of (8.26) with the events $G_N(x)$ and $H_N(x)$ replaced by their thin-point counterparts, respectively, and $1_{[-2b,\infty)}$ replaced by $1_{(-\infty,2b]}$. The $P^x_N$-probability of the event $G_N(x)$ is controlled using Lemma 6.5. Unlike $H_N(x)$, which required a non-trivial decomposition in the proof of Lemma 8.3, the two events constituting $H_N(x)$ can be separated directly using the Markov property of $t\mapsto L^{D_N}_t$. The expected sum over $1_{H_N(x)}\circ\theta_{H_\varrho}$ is then shown to be of order $W_N$ by (8.14) and the fact that $E\langle\zeta^D_N(t^-_{N,k}),1_{(-\infty,2b]}\rangle$ is bounded in $N\ge1$. As a consequence, we get the desired convergence under $P^x_N$, where $\zeta^D$ is the measure on the right of (2.21) without the term $e^{-\alpha^2\lambda^2/16}$ and $\nu_\lambda$ is the law of $\phi-\alpha\lambda a$. The rest of the argument for the thick points may be followed literally.

Avoided points.
The proof is a variation on the themes encountered in the proof of convergence of the measure associated with the light points. In particular, since the local time vanishes at the avoided points, we will be able to use monotonicity arguments. The following observation will be useful: Lemma 8.6 Let $\mu$ be a probability measure on $\mathbb N^{\mathbb Z^2}$ with samples denoted by $\{n_z : z\in\mathbb Z^2\}$. Let $\{\tau_j(x) : j\ge1,\ x\in\mathbb Z^2\}$ be i.i.d. Exponential(1), independent of $\{n_z : z\in\mathbb Z^2\}$. Then for any $t\in(-1,\infty)^{\mathbb Z^2}$ with finite support, the stated identity holds, where $\tilde t(z) := \log(1+t(z))$.
Proof. This boils down to a calculation with the Laplace transform of the Exponential(1) distribution.
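Assuming the identity in Lemma 8.6 takes the natural form — equating the Laplace functional of the exponential sums with that of the counts $n_z$ — the computation runs as follows (a sketch, conditioning on $\{n_z\}$ and using independence of the $\tau_j(z)$):

```latex
% For \tau ~ Exponential(1) and t > -1:
%   E[e^{-t\tau}] = \int_0^\infty e^{-ts} e^{-s}\,ds = \frac{1}{1+t}.
% Hence, conditioning on {n_z},
\[
  E\Bigl[\exp\Bigl\{-\sum_{z} t(z)\sum_{j=1}^{n_z}\tau_j(z)\Bigr\}\Bigr]
  = E\Bigl[\prod_{z}\bigl(1+t(z)\bigr)^{-n_z}\Bigr]
  = E\Bigl[\exp\Bigl\{-\sum_{z} n_z\log\bigl(1+t(z)\bigr)\Bigr\}\Bigr]
  = E\bigl[e^{-\langle \tilde t,\,n\rangle}\bigr],
\]
% with \tilde t(z) := \log(1+t(z)); the finite support of t keeps all
% products and sums finite.
```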
Proof of Theorem 2.8. We will establish the existence and uniqueness of the law $\nu^{\mathrm{RI,dis}}_u$ as part of the proof of the convergence. Let $\bar f\in C(D)$ be non-negative, pick $t\in(0,\infty)^{\mathbb Z^2}$ with finite support and consider the test function $f_t(x,\phi) := \bar f(x)\,e^{-\langle t,\phi\rangle}$ (8.52), where, abusing notation as before, $\langle\cdot,\cdot\rangle$ denotes the canonical inner product in $\ell^2(\mathbb Z^2)$.
The function $(x,h,\phi)\mapsto e^{-hn}f_t(x,\phi)$ is non-increasing in both $h$ and the coordinates of $\phi$ and so, thanks to Lemma 8.5, the expectations converge, where $\kappa^D$ is the law on the right-hand side of (5.59).
Next we observe that, by Lemma 8.6 and the fact that $4L^{D_N}_{t_n}(x)$ is a natural number, the corresponding identity holds jointly for all $t\in(0,\infty)^{\mathbb Z^2}$ with finite support and all $\bar f\in C(D)$. Since $\nu^{\mathrm{RI}}_\theta$ is non-random, this is readily turned into the a.s. identity, where $\nu^{\mathrm{RI,dis}}_\theta$ is a measure as described in the statement. This shows that a measure $\nu^{\mathrm{RI,dis}}_u$ exists with the stated properties for all $u\in(0,1)$. Since adding independent samples from this measure for parameters $u\in(0,1)$ and $v\in(0,1)$ gives us a sample from the measure for parameter $u+v$, the existence extends to all $u>0$. The measure is unique by Lemma 8.6, and so therefore is the distributional limit $\kappa^{D,\mathrm{loc}}$. This completes the proof.
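The extension step relies on the standard fact that the law of a random measure is determined by its Laplace functional and that independence turns superposition into a product; schematically (a sketch, with $\nu_u$ abbreviating $\nu^{\mathrm{RI,dis}}_u$):

```latex
% If \nu_u and \nu_v are independent random measures, then for f \ge 0:
\[
  E\bigl[e^{-\langle \nu_u + \nu_v,\, f\rangle}\bigr]
  = E\bigl[e^{-\langle \nu_u,\, f\rangle}\bigr]\,
    E\bigl[e^{-\langle \nu_v,\, f\rangle}\bigr],
\]
% so if the right-hand side matches the Laplace functional prescribed for the
% parameter-(u+v) measure for all continuous f \ge 0, then \nu_u + \nu_v has
% the law prescribed for u+v. Iterating extends existence from u \in (0,1)
% to all u > 0.
```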