Open Access
Pseudo-maximization and self-normalized processes
Victor H. de la Peña, Michael J. Klass, Tze Leung Lai
Probab. Surveys 4: 172-192 (2007). DOI: 10.1214/07-PS119

Abstract

Self-normalized processes are basic to many probabilistic and statistical studies. They arise naturally in the study of stochastic integrals, martingale inequalities and limit theorems, likelihood-based methods in hypothesis testing and parameter estimation, and Studentized pivots and bootstrap-t methods for confidence intervals. In contrast to standard normalization, large values of the observations play a lesser role because they appear in both the numerator and the self-normalized denominator, making the process scale invariant and contributing to its robustness. Herein we survey a number of results for self-normalized processes in the case of dependent variables and describe a key method called “pseudo-maximization” that has been used to derive these results. In the multivariate case, self-normalization consists of multiplying by the inverse of a positive definite matrix (instead of dividing by a positive random variable as in the scalar case) and is ubiquitous in statistical applications, examples of which are given.
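To make the scale-invariance property described in the abstract concrete, the following is a minimal sketch (not taken from the paper) of the two forms of self-normalization it mentions: the scalar case, where a partial sum is divided by a positive random normalizer, and the multivariate case, where the summed vector is multiplied by an inverse (square-root) of a positive definite matrix built from the data. The specific normalizers used here (root sum of squares, and the Gram matrix of the observations) are standard illustrative choices, not necessarily the particular processes studied in the article.

```python
# Sketch of self-normalization and its scale invariance (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def self_normalized_sum(x):
    """Scalar case: divide the partial sum by a positive normalizer,
    here V_n = sqrt(sum of squares), so large observations enter both
    numerator and denominator."""
    return x.sum() / np.sqrt((x ** 2).sum())

def self_normalized_vector(X):
    """Multivariate case: apply the inverse of a square root of a
    positive definite matrix built from the data, here C_n = X^T X."""
    s = X.sum(axis=0)
    C = X.T @ X                      # positive definite for generic data
    L = np.linalg.cholesky(C)        # C = L L^T
    return np.linalg.solve(L, s)     # L^{-1} s, the matrix-normalized statistic

x = rng.standard_normal(1000)
X = rng.standard_normal((1000, 3))

# Scale invariance: rescaling the data by any c > 0 leaves the statistic unchanged,
# since the factor c cancels between numerator and normalizer.
print(np.isclose(self_normalized_sum(x), self_normalized_sum(5.0 * x)))         # True
print(np.allclose(self_normalized_vector(X), self_normalized_vector(5.0 * X)))  # True
```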

Citation


Victor H. de la Peña, Michael J. Klass, Tze Leung Lai. "Pseudo-maximization and self-normalized processes." Probab. Surveys 4: 172-192, 2007. https://doi.org/10.1214/07-PS119

Information

Published: 2007
First available in Project Euclid: 11 October 2007

zbMATH: 1189.60057
MathSciNet: MR2368950
Digital Object Identifier: 10.1214/07-PS119

Subjects:
Primary: 60K35

Keywords: LIL, method of mixtures, moment and exponential inequalities, self-normalization

Rights: Copyright © 2007 The Institute of Mathematical Statistics and the Bernoulli Society
