## Abstract

Motivated by ecological and genetic phenomena, Karlin and McGregor [3] introduced the following model to describe the continued formation and growth of mutant biological populations. It is assumed that a new mutant population arises at each event time of a stochastic process (referred to as the input process) $\{v(t), t > 0\}$ whose state space is the non-negative integers. Each new mutant population begins its evolution with a fixed number of elements and evolves according to the laws of a continuous-time Markov chain $\mathscr{P}$ with stationary transition probability function $P_{i,j}(t)$, $i, j = 0, 1, 2, \cdots$; $t \geqq 0$. We assume that all populations evolve according to the same Markov chain and independently of one another. In terms of this structure, the basic question which we consider in this work can be formulated in the following manner: (A) Given an input process $\{v(t), t > 0\}$ and the individual growing process $\mathscr{P}$, determine the asymptotic behavior as $t \rightarrow \infty$ of the mean and variance of $S(t) = \{$number of different sizes of mutant populations at time $t\}$, and determine the limit distribution as $t \rightarrow \infty$ of $S(t)$ appropriately normalized. $S(t)$ is a special functional of the vector process $N(t) = \{N_0(t), N_1(t), N_2(t), \cdots\}$, $t > 0$, where $N_k(t) = \text{number of mutant populations with exactly } k \text{ members at time } t$, and may be interpreted as a measure of the fluctuations in population size. We have restricted our considerations to this special case because it serves as a model problem for more general situations and possesses all the subtle difficulties of the general case. The random variable $S(t)$ can also be identified as the number of distinct occupied states at time $t$ among all Markov chains which have begun their evolution up to that time.
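The structure above is easy to simulate. The sketch below is a minimal illustration, not the paper's construction: it assumes a Poisson input process, starts each chain in state 0 (the convention adopted in the next paragraph), and uses a birth-and-death chain with constant placeholder rates `birth` and `death`; the paper's class of growing processes is more specific. `S(t)` is then the number of distinct states occupied at time `t` by all chains born up to that time.

```python
import random

def simulate_S(t_max, input_rate, birth, death, seed=0):
    """One realization of S(t_max): the number of distinct states occupied
    at time t_max by all Markov chains born before t_max.

    Chains arrive at the event times of a Poisson process of rate
    `input_rate`; each starts in state 0 and evolves as an independent
    birth-and-death chain on {0, 1, 2, ...} with constant rates
    (illustrative choice, reflecting at 0)."""
    rng = random.Random(seed)
    # Poisson input: arrival times in (0, t_max] via exponential gaps.
    arrivals, t = [], rng.expovariate(input_rate)
    while t < t_max:
        arrivals.append(t)
        t += rng.expovariate(input_rate)
    occupied = set()
    for t0 in arrivals:
        x, t = 0, t0  # each chain starts in state 0 at its birth time
        while True:
            down = death if x > 0 else 0.0  # no death transition from 0
            rate = birth + down
            t += rng.expovariate(rate)  # exponential holding time
            if t >= t_max:
                break  # chain is frozen at its state at time t_max
            x += 1 if rng.random() < birth / rate else -1
        occupied.add(x)
    return len(occupied)
```

With `birth == death` the individual chains are null recurrent, the regime treated in this paper; averaging `simulate_S` over many seeds gives a Monte Carlo estimate of $ES(t)$.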
In the subsequent discussion we will refer to $\{S(t), t > 0\}$ as the "occupied states" process generated by the input process $\{v(t), t > 0\}$ and the Markov chain $\mathscr{P}$. Without loss of generality we identify the state 0 as the initial state of all evolving Markov chains and $-1$ as an absorbing state if absorption is possible. In this paper we introduce "occupied states" processes generated by a class of null recurrent, transient, and absorbing-barrier birth-and-death processes and a Poisson input process. The special feature of this class is that, with the normalization $Y(t;u) = t^{-1} X(t^\alpha u)$, $\alpha > 0$ (where $\{X(t), t > 0\}$ is the growing process $\mathscr{P}$), the process $Y(t;u)$ converges weakly in the Markov sense as $t \rightarrow \infty$ to a Bessel diffusion (see C. Stone [8]). The main idea (also applicable to more general growing processes $\mathscr{P}$) is that one requires local limit theorems, and under some circumstances a specification of the rate of convergence of the transition density of $Y(t;u)$ to the density of the limiting diffusion, in order to prescribe exact asymptotic formulas for $ES(t)$ and $\operatorname{Var} S(t)$ and to prove a central limit theorem for $S(t)$. The results of this paper are in sharp contrast with the asymptotic formulas for $ES(t)$ and $\operatorname{Var} S(t)$ which appear in the companion paper [6], where the growing process $\mathscr{P}$ is a general positive recurrent Markov chain and the input process remains Poisson. Section 2 contains basic definitions, some intuitive discussion, and precise statements of the main results on the asymptotic behavior of $ES(t)$ and $\operatorname{Var} S(t)$. In Section 3 we present detailed proofs of the theorems of Section 2, and we conclude with a central limit theorem for $S(t)$ in Section 4. The appendix contains some technical lemmas which are essential for asymptotic formulas that incorporate speed-of-convergence theorems.
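The diffusive normalization can be seen numerically in a toy case. The sketch below is illustrative only: it uses a simple random walk reflected at 0 (a discrete analogue of a null recurrent birth-and-death chain) and the Brownian exponent $\alpha = 2$, so that $E\,X(n)/\sqrt{n}$ should stabilize near $\sqrt{2/\pi} \approx 0.80$, the mean of the limiting reflected normal law; the exponent appropriate to the paper's processes depends on the process and is not asserted here.

```python
import random

def reflected_walk_mean(n_steps, n_paths, seed=0):
    """Monte Carlo estimate of E X(n) for a simple random walk on
    {0, 1, 2, ...} reflected at 0, started at 0 (toy stand-in for a
    null recurrent birth-and-death chain)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x = 0
        for _ in range(n_steps):
            if x == 0:
                x = 1  # reflection at the origin
            else:
                x += 1 if rng.random() < 0.5 else -1
        total += x
    return total / n_paths

# Under the scaling X(n)/sqrt(n) (alpha = 2 in the notation above),
# the estimates at different time horizons agree, consistent with weak
# convergence to a reflected Brownian (Bessel-type) diffusion.
r1 = reflected_walk_mean(400, 1000, seed=1) / 400 ** 0.5
r2 = reflected_walk_mean(1600, 1000, seed=2) / 1600 ** 0.5
```

Both `r1` and `r2` land near $\sqrt{2/\pi}$, while the unscaled means $E\,X(n)$ themselves grow like $\sqrt{n}$.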

## Citation

Burton Singer. "Some Asymptotic Results in a Model of Population Growth." Ann. Math. Statist. 41(1): 115–132, February 1970. https://doi.org/10.1214/aoms/1177697193