June 2012 A self-normalized central limit theorem for Markov random walks
Cheng-Der Fuh, Tian-Xiao Pang
Adv. in Appl. Probab. 44(2): 452-478 (June 2012). DOI: 10.1239/aap/1339878720


Motivated by the study of the asymptotic normality of the least-squares estimator in the (autoregressive) AR(1) model under possibly infinite variance, in this paper we investigate a self-normalized central limit theorem for Markov random walks. That is, let {X_n, n ≥ 0} be a Markov chain on a general state space X with transition probability P and invariant measure π. Suppose that an additive component S_n takes values on the real line R and is adjoined to the chain such that {S_n, n ≥ 1} is a Markov random walk. Assume that S_n = ∑_{k=1}^{n} ξ_k, and that {ξ_n, n ≥ 1} is a nondegenerate and stationary sequence under π that belongs to the domain of attraction of the normal law with zero mean and possibly infinite variance. By making use of an asymptotic variance formula for S_n/√n, we prove a self-normalized central limit theorem for S_n under some regularity conditions. An essential idea in our proof is to bound the covariance of the Markov random walk via a sequence of weight functions, which plays a crucial role in determining the moment condition and dependence structure of the Markov random walk. As illustrations, we apply our results to the finite-state Markov chain, the AR(1) model, and the linear state space model.
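The flavor of the result can be illustrated numerically. The sketch below (not from the paper; the choice of chain, parameter ρ, and the particular self-normalizer V_n² = ∑ ξ_k² are illustrative assumptions) simulates a stationary AR(1) chain, takes the chain values themselves as the increments ξ_k of the Markov random walk, and computes the self-normalized statistic S_n/V_n. Because the increments are dependent, the limit variance of S_n/V_n is not 1 but reflects the chain's long-run (asymptotic) variance, which for AR(1) with coefficient ρ works out to (1+ρ)/(1−ρ):

```python
import numpy as np

rng = np.random.default_rng(0)

def self_normalized_stat(xi):
    # S_n / V_n with the (illustrative) self-normalizer V_n^2 = sum of xi_k^2
    return xi.sum() / np.sqrt((xi ** 2).sum())

def ar1_increments(n, rho=0.5):
    # Stationary AR(1) chain X_k = rho * X_{k-1} + eps_k, standard normal innovations;
    # we take xi_k = X_k as the increments of the Markov random walk.
    x = rng.standard_normal() / np.sqrt(1.0 - rho ** 2)  # draw X_0 from the stationary law
    xi = np.empty(n)
    for k in range(n):
        x = rho * x + rng.standard_normal()
        xi[k] = x
    return xi

# Monte Carlo: the self-normalized statistic stays stochastically bounded and looks
# Gaussian, but with variance near (1+rho)/(1-rho) = 3 for rho = 0.5, not 1 --
# the dependence of the increments enters through the asymptotic variance formula.
stats = np.array([self_normalized_stat(ar1_increments(2000)) for _ in range(500)])
print(round(stats.mean(), 2), round(stats.std(), 2))
```

The point of the simulation is only qualitative: self-normalization by V_n tames heavy tails of the increments, while the dependence structure of the chain still shows up in the limiting variance.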


Citation

Cheng-Der Fuh, Tian-Xiao Pang. "A self-normalized central limit theorem for Markov random walks." Adv. in Appl. Probab. 44 (2): 452-478, June 2012.


Published: June 2012
First available in Project Euclid: 16 June 2012

zbMATH: 1251.60020
MathSciNet: MR2977404
Digital Object Identifier: 10.1239/aap/1339878720

Primary: 60F05
Secondary: 60J25

Rights: Copyright © 2012 Applied Probability Trust


