Solutions to kinetic-type evolution equations: beyond the boundary case

We study the asymptotic behavior as $t \to \infty$ of a time-dependent family $(\mu_t)_{t \geq 0}$ of probability measures on ${\mathbb R}$ solving the kinetic-type evolution equation $\partial_t \mu_t + \mu_t = Q(\mu_t)$, where $Q$ is a smoothing transformation on ${\mathbb R}$. This problem has been investigated earlier, e.g.\ by Bassetti and Ladelli [\emph{Ann. Appl. Probab.} 22(5): 1928--1961, 2012] and Bogus, Buraczewski and Marynych [to appear in \emph{Stochastic Process. Appl.}]. Combining the refined analysis of the latter paper, which provides a probabilistic description of the solution $\mu_t$ as the law of a suitable random sum related to a continuous-time branching random walk at time $t$, with recent advances in the analysis of the extremal positions in the branching random walk, we solve the remaining case that has not been addressed before.


Introduction
Given an integer $N \geq 2$ and a positive random vector $A = (A_1, \ldots, A_N)$ we consider the kinetic-type evolution equation
$$\partial_t \mu_t + \mu_t = Q(\mu_t), \qquad t \geq 0, \tag{1.1}$$
for a time-dependent family $(\mu_t)_{t \geq 0}$ of probability measures on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, where (1.1) has to be understood in the weak sense and $Q$ is the smoothing transformation associated with $A$. More precisely, with $\mathcal{M}^1(\mathbb{R})$ denoting the set of probability measures on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$,
$$Q(\mu) := \mathcal{L}\bigg(\sum_{j=1}^{N} A_j X_j\bigg), \qquad \mu \in \mathcal{M}^1(\mathbb{R}),$$
where $\mathcal{L}(Y)$ denotes the law of a random variable $Y$ and $X_1, \ldots, X_N$ are i.i.d. and independent of $A$ with $X_1 \sim \mu$. On the level of Fourier transforms, (1.1) corresponds to the Cauchy problem
$$\partial_t \phi_t(\xi) + \phi_t(\xi) = Q(\phi_t)(\xi), \qquad \phi_0 \text{ given},$$
where the boundary condition $\phi_0$ is the Fourier transform of a given $\mu_0 \in \mathcal{M}^1(\mathbb{R})$ and $Q$ is a self-map of the set of characteristic functions of probability measures on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ defined by
$$Q(\phi)(\xi) := \mathbb{E}\bigg[\prod_{j=1}^{N} \phi(A_j \xi)\bigg]$$
for $\phi$ being the Fourier transform of some probability measure $\mu \in \mathcal{M}^1(\mathbb{R})$. Equation (1.1) with $N = 2$ and $A = (\sin U, \cos U)$, $U$ being uniformly distributed on $[0, 2\pi)$, was considered by Kac [11] as a model describing the behavior of a particle in a homogeneous gas environment, where particles collide at random times.
It is known as the 1-dimensional Kac caricature. In subsequent works, the model was extended in various directions. For instance, there is the 1-dimensional dissipative Maxwell model for colliding molecules. Further, there are models reflecting economic dynamics; we refer the reader to [1,2,3,4,8] for examples and a more comprehensive account of the literature.
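As a quick numerical sanity check (ours, not part of the original analysis), one can verify that the standard Gaussian is a fixed point of the Kac smoothing transformation: since $\sin^2 U + \cos^2 U = 1$, the random variable $\sin(U)X_1 + \cos(U)X_2$ is again standard normal when $X_1, X_2$ are i.i.d. standard normal and independent of $U$. The sketch below resamples from an empirical sample; all names are ours.

```python
import math
import random

def kac_step(xs, rng):
    """One step of the Kac smoothing transformation Q applied to a sample:
    each output draw is sin(U)*X1 + cos(U)*X2 with X1, X2 resampled from xs."""
    out = []
    for _ in range(len(xs)):
        u = rng.uniform(0.0, 2.0 * math.pi)
        x1, x2 = rng.choice(xs), rng.choice(xs)
        out.append(math.sin(u) * x1 + math.cos(u) * x2)
    return out

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
mixed = kac_step(sample, rng)

mean = sum(mixed) / len(mixed)
var = sum(x * x for x in mixed) / len(mixed) - mean ** 2
# mean and var should be close to 0 and 1: N(0,1) is (approximately) preserved
print(round(mean, 2), round(var, 2))
```

This only illustrates invariance of one fixed point; the paper's concern is the long-time behavior of $\mu_t$ for general initial data $\mu_0$.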
It is known (see e.g. [9, Proposition 2.5] for a proof) that given an initial law µ 0 , Eq. (1.1) has a unique solution, which we shall denote by (µ t ) t≥0 henceforth. The corresponding family of Fourier transforms will be denoted by (φ t ) t≥0 . The purpose of this note is to demonstrate how, in a regime that has not been addressed before, the asymptotic behavior of µ t (in the weak sense) as t → ∞ (equivalently, the asymptotic behavior of φ t as t → ∞) can be derived from recent progress on kinetic-type equations [9] and on the extrema of branching random walks [10,13].
Assumption (A3) is a non-void assumption, as it is possible that $F$ attains no minimum on $\operatorname{int} D_\Phi$. Notice that $F(\theta)$ equals the tangent of the angle between the line segment joining $(0, 0)$ and $(\theta, \Phi(\theta))$ and the positive horizontal half-axis. Hence, by the strict convexity of $\Phi$, the minimizer, if it exists, is unique, and throughout the paper, we shall use the symbol $\vartheta$ to denote it.
In the case where $\mu_0$ is in the domain of normal attraction of a $\gamma$-stable law (and if $\mu_0$ is additionally centered when $\gamma > 1$), the asymptotic behavior of $\mu_t$ as $t \to \infty$ was found in [1] for $\gamma < \vartheta$ and in [9] in the boundary case $\gamma = \vartheta$. In this note, we address the remaining case $\gamma > \vartheta$ or, more generally, the case where
$$\int_{\mathbb{R}} |x|^p \, \mu_0(\mathrm{d}x) < \infty \tag{1.4}$$
for some $p > \vartheta$. If $\vartheta \geq 2$, then $\mu_0$ has a finite second moment and the asymptotic behavior of $\mu_t$ can be deduced from [1] if $\vartheta > 2$ and from [9] if $\vartheta = 2$. Therefore, we shall from now on suppose that (1.4) holds and that $\vartheta < 2$. In the remainder of the paper, we assume that $0 < \vartheta < p \leq 2$.
The following theorem is the main result of the paper.

Theorem 1.1. Under the assumptions above, there exists a probability measure $\mu_\infty$ on the Borel sets of $\mathbb{R}$, not concentrated at a single point, such that the appropriately rescaled solution $\mu_t$ converges weakly to $\mu_\infty$ as $t \to \infty$. More information about the limit law $\mu_\infty$ can be extracted from Proposition 3.1; namely, $\mu_\infty$ is the law of the random variable $Z$ appearing in the statement of that proposition.

Representation of solutions: the branching random walk connection.
Given an initial distribution $\mu_0$ and the random vector $A$, we give a representation of $\mu_t$ as the law of a functional of a continuous-time branching random walk at time $t$. The exact form of the representation was developed in [9]; see also [1] and the references therein for earlier results.
Throughout the paper, we work on a fixed probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which two independent families $(A(u), E(u))_{u \in I}$ and $(X_u)_{u \in I}$ of random vectors and random variables, respectively, are defined, where $I := \bigcup_{n \geq 0} \{1, \ldots, N\}^n$ denotes the set of Ulam--Harris labels, such that
• the $(A(u), E(u))$, $u \in I$, are independent and identically distributed (i.i.d.) copies of $(A, E)$, where $A$ is given and $E$ is an independent unit-mean exponential random variable;
• the $X_u$, $u \in I$, are i.i.d. with $X_u \sim \mu_0$.
For convenience, we denote quantities related to the ancestor without the label ∅, i.e., $(A, E) = (A(∅), E(∅))$ etc.
We now recursively define a continuous-time Markov branching process $(Y_t)_{t \geq 0}$ starting with one particle, the ancestor, denoted by ∅, at time $t = 0$. The birth time of the ancestor is $\sigma(∅) = 0$. If a particle labelled $u \in I$ is born at time $\sigma(u)$, it lives an exponential lifetime $E(u)$ until $\sigma(u) + E(u)$, at which time it dies and simultaneously gives birth to $N$ new particles labelled $u1, \ldots, uN$. We write $I_t := \{u \in I : \sigma(u) \leq t < \sigma(u) + E(u)\}$ for the set of labels pertaining to individuals alive at time $t$. We further set $Y_t := |I_t|$ to be the number of individuals alive at time $t \geq 0$. Because $(Y_t)_{t \geq 0}$ is a Markov branching process with degenerate reproduction law, we have $\mathbb{E}[Y_t] < \infty$ for all $t \geq 0$. For a particle $u = u_1 \ldots u_m \in I$, we write
$$S(u) := -\sum_{k=1}^{m} \log A_{u_k}(u_1 \ldots u_{k-1})$$
for its position on the real line. Finally, we write
$$\mathcal{Z}_t := \sum_{u \in I_t} \delta_{S(u)}$$
for the continuous-time branching random walk at time $t \geq 0$. The Laplace transform at $\theta \geq 0$ of the intensity measure of $\mathcal{Z}_t$ is given by
$$m(t, \theta) := \mathbb{E}\bigg[\sum_{u \in I_t} e^{-\theta S(u)}\bigg] = e^{t \Phi(\theta)}, \qquad \theta \in D_\Phi. \tag{1.5}$$
The validity of (1.5) is known in the theory of continuous-time branching processes, see [5, Equation (5.1)] (for the reader's convenience, a sketch of the proof is included in the appendix, see Proposition A.1).
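To illustrate the pure-birth structure of $(Y_t)_{t \geq 0}$ (our sketch, not from the paper): since all lifetimes are unit-mean exponentials and each death is replaced by $N$ children, $(Y_t)$ is a Yule-type process with $\mathbb{E}[Y_t] = e^{(N-1)t}$, which a short Gillespie-style simulation can confirm. All function names below are ours.

```python
import math
import random

def population_at(t, N, rng):
    """Simulate Y_t: each particle lives an Exp(1) lifetime, then splits into
    N particles. With k particles alive, the next split time is Exp(k)."""
    count, clock = 1, 0.0
    while True:
        clock += rng.expovariate(count)  # minimum of `count` Exp(1) clocks
        if clock > t:
            return count
        count += N - 1  # one particle dies, N children are born

rng = random.Random(42)
N, t, runs = 2, 1.0, 20_000
avg = sum(population_at(t, N, rng) for _ in range(runs)) / runs
# empirical mean of Y_1 should be close to e^{(N-1)t} = e
print(round(avg, 2), round(math.exp((N - 1) * t), 2))
```

The exponential growth rate $e^{t(N-1)} = m(t, 0)$ is the $\theta = 0$ case of (1.5), since $\Phi(0) = N - 1$.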
The connection between the continuous-time branching random walk Z t and the kinetic-type evolution equation (1.1) is established in the following proposition.
The above result provides an explicit form of the solution to Equation (1.1). Therefore, in order to prove our main result, we need to find an appropriate scaling of the random sum leading to a nontrivial limit law as $t \to \infty$. For this purpose, we first apply the Croft--Kingman lemma [12] to reduce the problem of convergence as $t \to \infty$ along the reals to convergence along arbitrary lattice sequences $(n\delta)_{n \in \mathbb{N}}$ (Section 2). We then show the existence of the limit along lattice sequences (Section 3).
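For the reader's convenience, we recall (in our paraphrase) the version of the Croft--Kingman lemma that drives this reduction:

```latex
\textbf{Lemma (Croft--Kingman).}
Let $f : (0,\infty) \to \mathbb{R}$ be continuous and suppose that, for every
$\delta > 0$, the limit $\lim_{n \to \infty} f(n\delta)$ exists. Then
$\lim_{t \to \infty} f(t)$ exists and coincides with the common lattice limit.
```

In our setting, $f$ is built from characteristic functions of the rescaled random sums, so its continuity must be verified; this is the purpose of Section 2.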

Reduction to the lattice case
The goal of this section is to prove the following lemma.
Lemma 2.1. Suppose that the assumptions of Theorem 1.1 hold and that, for any fixed $\delta > 0$,
$$n^{\frac{3}{2\vartheta}}\, m(\delta, \vartheta)^{-n/\vartheta} \sum_{u \in I_{n\delta}} e^{-S(u)} X_u \ \xrightarrow{d}\ Y_\delta \quad \text{as } n \to \infty \tag{2.1}$$
for some non-degenerate random variable $Y_\delta$. Then the analogous convergence, with the corresponding continuous-time scaling, holds as $t \to \infty$ along the reals.

The lemma is proved in two steps.
Lemma 2.2. Fix $0 < r < p \wedge 1$ and set $U_t := \sum_{u \in I_t} e^{-S(u)} X_u$ for $t \geq 0$. Then the family $(U_t)_{t \geq 0}$ is continuous in $L^r$. If, moreover, $a(\cdot)$ is a deterministic nonnegative continuous function, then also $(a(t) U_t)_{t \geq 0}$ is continuous in $L^r$.
Proof. Define $g(t, s) := \mathbb{E}[|U_t - U_s|^r]$ for $s, t \geq 0$. We first show continuity at $0$, that is, $g(t, 0) \to 0$ as $t \to 0$. Notice that $U_0 = X_∅ = X \sim \mu_0$. Then, with $S_t := \{I_t \neq \{∅\}\}$ denoting the event that there was a split in the interval $[0, t]$, we have $\mathbb{P}(S_t) = 1 - e^{-t}$. Consequently, by sub-additivity,
$$g(t, 0) = \mathbb{E}\big[|U_t - X|^r \mathbf{1}_{S_t}\big] \leq \mathbb{E}\big[|U_t|^r \mathbf{1}_{S_t}\big] + \mathbb{E}\big[|X|^r \mathbf{1}_{S_t}\big].$$
The second summand on the right-hand side vanishes as $t \to 0$, so it remains to consider the first one. By the Hölder inequality, for $c > 1$ such that $cr < p \wedge 1$,
$$\mathbb{E}\big[|U_t|^r \mathbf{1}_{S_t}\big] \leq \big(\mathbb{E}[|U_t|^{cr}]\big)^{1/c}\, \mathbb{P}(S_t)^{1 - 1/c}.$$
The last factor tends to $0$ as $t \to 0$, so it suffices to justify that $\mathbb{E}[|U_t|^{cr}]$ remains bounded as $t \to 0$. This is true since, by sub-additivity and (1.5),
$$\mathbb{E}[|U_t|^{cr}] \leq \mathbb{E}[|X|^{cr}]\, \mathbb{E}\bigg[\sum_{u \in I_t} e^{-cr S(u)}\bigg] = \mathbb{E}[|X|^{cr}]\, e^{t \Phi(cr)},$$
which is clearly bounded as a function of $t$. Now let $s, t \geq 0$. By conditioning with respect to $\mathcal{F}_{t \wedge s}$, the $\sigma$-algebra containing all information up to and including time $t \wedge s$, and using the Markov property, we infer that $g(t, s) \to 0$ as $s \to t$, proving the asserted $L^r$-continuity.

We now turn to the proof of the continuity of $h$ and pick some $0 < r < p \wedge 1$. For any $x, y \in \mathbb{R}$, we have $|e^{\mathrm{i}x} - e^{\mathrm{i}y}| \leq 2^{1-r} |x - y|^r$, whence $|h(t) - h(s)| \leq 2^{1-r}\, \mathbb{E}\big[|a(t)U_t - a(s)U_s|^r\big]$. The latter expression tends to $0$ as $s \to t$ by Lemma 2.2.

Convergence along lattices
Throughout the whole Section 3, we fix some δ > 0 and prove that (2.1) holds for a non-degenerate random variable Y δ .
3.1. Properties of the skeleton branching random walk. The sequence of point processes $(\mathcal{Z}_{n\delta})_{n \in \mathbb{N}_0}$ forms a discrete-time (or skeleton) branching random walk, in which each individual produces offspring with displacements relative to its position given by the points of an independent copy of the point process $\mathcal{Z}_\delta$. In this section, we shall discuss the properties of this branching random walk that are relevant to us. As $\delta$ is kept fixed throughout Section 3, we abbreviate $m(\delta, \theta)$, defined in (1.5), by $m(\theta)$. Thus,
$$m(\theta) = e^{\delta \Phi(\theta)}, \qquad \theta \in D_\Phi.$$
Further, recall that $\vartheta$ is the unique minimizer of $\Phi(\theta)/\theta$ on $\operatorname{int} D_\Phi$, and hence $\Phi'(\vartheta)\vartheta = \Phi(\vartheta)$. This implies $\vartheta m'(\vartheta)/m(\vartheta) = \log m(\vartheta)$. For $n \in \mathbb{N}_0$ and $u \in I_{n\delta}$, we define
$$V(u) := \vartheta S(u) + n \log m(\vartheta) = \vartheta S(u) + n\delta \Phi(\vartheta).$$
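Spelling out the last implication (a one-line sketch; it only uses $m(\theta) = m(\delta, \theta) = e^{\delta \Phi(\theta)}$ from (1.5)):

```latex
\frac{m'(\vartheta)}{m(\vartheta)} = \delta \Phi'(\vartheta)
\quad\Longrightarrow\quad
\vartheta\,\frac{m'(\vartheta)}{m(\vartheta)}
  = \delta\,\vartheta\,\Phi'(\vartheta)
  = \delta\,\Phi(\vartheta)
  = \log m(\vartheta),
```

where the middle equality is precisely the first-order condition $\Phi'(\vartheta)\vartheta = \Phi(\vartheta)$ for the minimizer of $\Phi(\theta)/\theta$.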

By definition,
$$\mathbb{E}\bigg[\sum_{u \in I_\delta} e^{-V(u)}\bigg] = 1 \quad \text{and} \quad \mathbb{E}\bigg[\sum_{u \in I_\delta} V(u)\, e^{-V(u)}\bigg] = 0, \tag{3.2}$$
i.e., the branching random walk $(\sum_{u \in I_{n\delta}} \delta_{V(u)})_{n \in \mathbb{N}_0}$ is in the boundary case.¹ Moreover, it is non-lattice by (A1) and satisfies
$$\sigma^2 := \mathbb{E}\bigg[\sum_{u \in I_\delta} V(u)^2\, e^{-V(u)}\bigg] < \infty.$$
The latter follows from Lemma 3.6 in [9], but can also be directly inferred as a simple consequence of the fact that $\vartheta \in \operatorname{int} D_\Phi$. Moreover, from Lemma 3.9 in [9], we infer the validity of the moment condition (3.5), where $x_\pm := \max(\pm x, 0)$ for $x \in \mathbb{R}$. Notice that (3.5) comfortably implies (3.6). We have now checked that the assumptions of [13, Theorem 1.1] hold² and infer, with $V_n(u) := V(u) - \frac{3}{2} \log n$,
$$\sum_{u \in I_{n\delta}} \delta_{V_n(u)} \ \xrightarrow{d}\ \mathcal{Z}_\infty \quad \text{as } n \to \infty.$$
Here, the convergence in distribution is in the space of locally finite point measures equipped with the topology of vague convergence. For $k \in \mathbb{N}$, define $P_k := \inf\{t \in \mathbb{R} : \mathcal{Z}_\infty((-\infty, t]) \geq k\}$, the position of the $k$-th leftmost point of $\mathcal{Z}_\infty$. It is known that
$$\sum_{k \geq 1} e^{-\beta P_k} < \infty \quad \text{a.s.} \tag{3.7}$$
for every $\beta > 1$. Suppose that $(X_k)_{k \in \mathbb{N}}$ is a sequence of independent copies of $X \sim \mu_0$ independent of $\mathcal{Z}_\infty$. We consider the random sums
$$Z_n := \sum_{k=1}^{n} e^{-P_k/\vartheta} X_k, \qquad n \in \mathbb{N}.$$
Our main result, Theorem 1.1, follows directly from Lemma 2.1 and the following proposition.
The bulk of the proof of this proposition can be adapted from the proof of Theorem 2.5 in [10]; however, at some points, changes are needed. In what follows, we repeat the major steps of the proof of the cited theorem, adjusted to the situation at hand, and point out the changes that are required.
Proof. Recall that if $\vartheta \geq 1$, then we assume $\mathbb{E}[X] = 0$ and $\vartheta < p \leq 2$. In this case, the lemma follows from (the proof of) Lemma 5.3 in [10]. If $\vartheta < 1$, we do not assume $\mathbb{E}[X] = 0$; indeed, $\mathbb{E}[X]$ need not even be defined. However, we may assume without loss of generality that $\vartheta < p \leq 1$ in this case. Then, in order to prove the assertion, one can still use the proof of [10, Lemma 5.3] except at one point where the von Bahr--Esseen inequality is used (Lemma A.1 in the cited reference). The use of the latter inequality has to be replaced by an application of the sub-additivity of the function $x \mapsto x^p$, $x \geq 0$. The rest of the proof carries over without changes.
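For the reader's convenience (a standard summary, ours, not part of the cited proofs), the two moment bounds being interchanged here are:

```latex
\mathbb{E}\Big|\sum_{k=1}^{n} Y_k\Big|^p \le 2 \sum_{k=1}^{n} \mathbb{E}|Y_k|^p
\quad (1 \le p \le 2,\ Y_k \text{ independent and centered}),
\qquad
\mathbb{E}\Big|\sum_{k=1}^{n} Y_k\Big|^p \le \sum_{k=1}^{n} \mathbb{E}|Y_k|^p
\quad (0 < p \le 1).
```

The first is the von Bahr--Esseen inequality; the second follows from the sub-additivity of $x \mapsto x^p$ on $[0, \infty)$ and requires neither centering nor integrability of the $Y_k$, which is why it is the right tool when $\vartheta < 1$.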
We are now ready to prove Proposition 3.1.
Proof of Proposition 3.1. Recall that $0 < \vartheta < p \leq 2$. Additionally, we assume without loss of generality that $p \leq 1$ if $\vartheta < 1$. Let $\beta_0 := \frac{p}{\vartheta} > 1$. Given $\mathcal{Z}^*_\infty$, for each $n \in \mathbb{N}$, the random variable $Z_n$ is a sum of independent random variables (centered in the case $\vartheta \geq 1$). If $\vartheta \geq 1$, we may argue as in the proof of Theorem 2.5 in [10] and apply the von Bahr--Esseen inequality to infer
$$\mathbb{E}\big[|Z_n|^p \,\big|\, \mathcal{Z}^*_\infty\big] \leq 2\, \mathbb{E}[|X_1|^p] \sum_{k=1}^{n} e^{-\beta_0 P_k} \leq 2\, \mathbb{E}[|X_1|^p] \sum_{k \geq 1} e^{-\beta_0 P_k},$$
and the latter sum is almost surely finite by (3.7). This shows that $(Z_n)_{n \in \mathbb{N}_0}$, conditionally given $\mathcal{Z}^*_\infty$, is an $L^p$-bounded martingale. We conclude that $Z_n$ converges a.s. conditionally given $\mathcal{Z}^*_\infty$, hence also unconditionally, thereby proving the first part of Proposition 3.1 in the case $\vartheta \geq 1$. Now let $\vartheta < 1$ and fix some $\varepsilon > 0$. By (3.7), there exists an $M > 0$ such that
$$\mathbb{P}\bigg(\sum_{k \geq 1} e^{-\beta_0 P_k} > M\bigg) < \varepsilon.$$
Then, for any $n, m \in \mathbb{N}$ with $m \leq n$, from the sub-additivity of $x \mapsto |x|^p$ we get
$$\mathbb{E}\big[|Z_n - Z_m|^p \,\big|\, \mathcal{Z}^*_\infty\big] \leq \mathbb{E}[|X_1|^p] \sum_{k=m+1}^{n} e^{-\beta_0 P_k},$$
and, on the event $\{\sum_{k \geq 1} e^{-\beta_0 P_k} \leq M\}$, the right-hand side converges to zero as $m, n \to \infty$ by the dominated convergence theorem. Hence $(Z_n)_{n \in \mathbb{N}_0}$ forms a Cauchy sequence in probability and thus converges in probability. In both cases, $\vartheta < 1$ and $\vartheta \geq 1$, we denote the limit of the sequence $(Z_n)_{n \in \mathbb{N}_0}$ by $Z$.
The proof of the second part is based on the decomposition of the rescaled sum
$$(m(\vartheta))^{-n/\vartheta}\, n^{\frac{3}{2\vartheta}} \sum_{u \in I_{n\delta}} e^{-S(u)} X_u = \sum_{u \in I_{n\delta}} e^{-V_n(u)/\vartheta} X_u$$
into a part singled out by the cut-off function $f_K$ and a remainder. The remainder of the proof is based on an application of Theorem 4.2 in [7]. In view of Lemma 3.3, the cited theorem gives the assertion once we have shown two assertions: the convergence in distribution of the truncated sums, and the asymptotic negligibility of the truncation error. The first assertion is a consequence of Lemma 3.2. Indeed, the function on $\mathbb{R}^2$ that maps $(x, y)$ to $e^{-\frac{1}{\vartheta} x} f_K(x)\, y$ is continuous and vanishes for all sufficiently large $x$.
Therefore, Lemma 3.2 yields the first assertion. The second assertion can be proved similarly as in the proof of Theorem 2.5 in [10]. More precisely, it follows from the dominated convergence theorem once we have proved that $\mathbb{P}(|Z - Z^*_K| > \varepsilon \mid \mathcal{Z}_\infty) \to 0$ a.s. as $K \to \infty$ for every $\varepsilon > 0$. Now fix $\varepsilon > 0$ and observe that
$$\mathbb{P}\big(|Z - Z^*_K| > \varepsilon \,\big|\, \mathcal{Z}_\infty\big) \leq \liminf_{n \to \infty} \mathbb{P}\big(|Z_n - Z^*_K| > \varepsilon \,\big|\, \mathcal{Z}_\infty\big) \leq \liminf_{n \to \infty} \varepsilon^{-p}\, \mathbb{E}\big[|Z_n - Z^*_K|^p \,\big|\, \mathcal{Z}_\infty\big],$$
where Fatou's lemma gives the first inequality and Markov's inequality the second. Now, given a realization $P_1 \leq P_2 \leq \ldots$ of the point process $\mathcal{Z}_\infty$, we can choose $n \in \mathbb{N}$ such that $P_n > K + 1$. Then
$$\mathbb{E}\big[|Z_n - Z^*_K|^p \,\big|\, \mathcal{Z}_\infty\big] \leq 4 \sum_{k=1}^{n} e^{-\frac{p}{\vartheta} P_k} \big(1 - f_K(P_k)\big)^p\, \mathbb{E}[|X_k|^p],$$
where we have used the complex von Bahr--Esseen inequality, see [10, Lemma A.1], in the case $1 \leq \vartheta < p \leq 2$ and the sub-additivity of the function $x \mapsto |x|^p$ in the case $0 < \vartheta < p \leq 1$. (The factor $4$ is not required in the second case.) In any case,
$$\sum_{k=1}^{n} e^{-\frac{p}{\vartheta} P_k} \big(1 - f_K(P_k)\big)^p\, \mathbb{E}[|X_k|^p] \leq \mathbb{E}[|X_1|^p] \sum_{k \geq 1 \,:\, P_k > K} e^{-\frac{p}{\vartheta} P_k} \to 0 \quad \text{a.s.}$$
as $K \to \infty$ by (3.7) since $\frac{p}{\vartheta} > 1$.