The Annals of Mathematical Statistics

Saddlepoint Approximations in Statistics

H. E. Daniels

Abstract

It is often required to approximate to the distribution of some statistic whose exact distribution cannot be conveniently obtained. When the first few moments are known, a common procedure is to fit a law of the Pearson or Edgeworth type having the same moments as far as they are given. Both these methods are often satisfactory in practice, but have the drawback that errors in the "tail" regions of the distribution are sometimes comparable with the frequencies themselves. The Edgeworth approximation in particular can notoriously assume negative values in such regions. The characteristic function of the statistic may be known, and the difficulty is then the analytical one of inverting a Fourier transform explicitly. In this paper we show that for a statistic such as the mean of a sample of size $n$, or the ratio of two such means, a satisfactory approximation to its probability density, when it exists, can nearly always be obtained by the method of steepest descents. This gives an asymptotic expansion in powers of $n^{-1}$ whose dominant term, called the saddlepoint approximation, has a number of desirable features. The error incurred by its use is $O(n^{-1})$ as against the more usual $O(n^{-1/2})$ associated with the normal approximation. Moreover it is shown that in an important class of cases the relative error of the approximation is uniformly $O(n^{-1})$ over the whole admissible range of the variable.

The method of steepest descents was first used systematically by Debye for Bessel functions of large order (Watson [17]) and was introduced by Darwin and Fowler (Fowler [9]) into statistical mechanics, where it has remained an indispensable tool. Apart from the work of Jeffreys [12] and occasional isolated applications by other writers (e.g. Cox [2]), the technique has been largely ignored by writers on statistical theory.
In the present paper, distributions having probability densities are discussed first, the saddlepoint approximation and its associated asymptotic expansion being obtained for the probability density of the mean $\bar{x}$ of a sample of $n$. It is shown how the steepest descents technique is related to an alternative method used by Khinchin [14] and, in a slightly different context, by Cramér [5]. General conditions are established under which the relative error of the saddlepoint approximation is $O(n^{-1})$ uniformly for all admissible $\bar{x}$, with a corresponding result for the asymptotic expansion. The case of discrete variables is briefly discussed, and finally the method is used for approximating to the distribution of ratios.
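The dominant term the abstract refers to is, in standard form, $f_n(\bar{x}) \approx \sqrt{n/(2\pi K''(T))}\, \exp\{n[K(T) - T\bar{x}]\}$, where $K$ is the cumulant generating function of a single observation and the saddlepoint $T$ solves $K'(T) = \bar{x}$. The following sketch (our own illustration, not code from the paper; function names are ours) works this out for the mean of $n$ unit-exponential variables, where $K(t) = -\log(1-t)$ gives the saddlepoint in closed form and the exact density is a gamma law, so the relative error can be inspected directly:

```python
import math

def saddlepoint_density_mean_exp(x, n):
    """Saddlepoint approximation to the density of the mean of n unit
    exponentials.  K(t) = -log(1 - t), so K'(t) = 1/(1 - t) = x gives
    t_hat = 1 - 1/x, and K''(t_hat) = 1/(1 - t_hat)**2 = x**2."""
    t_hat = 1.0 - 1.0 / x                  # root of the saddlepoint equation
    K = -math.log(1.0 - t_hat)             # cgf evaluated at the saddlepoint
    K2 = x * x                             # K''(t_hat)
    return math.sqrt(n / (2.0 * math.pi * K2)) * math.exp(n * (K - t_hat * x))

def exact_density_mean_exp(x, n):
    """Exact density: the mean of n unit exponentials is Gamma(n, scale 1/n)."""
    return n ** n * x ** (n - 1) * math.exp(-n * x) / math.gamma(n)

# The ratio of approximate to exact density is constant in x for this
# family (it equals Stirling's approximation error for Gamma(n)), so the
# relative error is uniform over the whole range, roughly 1/(12n).
for x in (0.5, 1.0, 2.5):
    ratio = saddlepoint_density_mean_exp(x, 10) / exact_density_mean_exp(x, 10)
    print(x, ratio)
```

For $n = 10$ the ratio is about $1 + 1/120$ at every $x$, illustrating both the $O(n^{-1})$ error and its uniformity in $\bar{x}$; renormalizing the approximation to integrate to one makes it exact in this particular case.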

Article information

Source
Ann. Math. Statist. Volume 25, Number 4 (1954), 631-650.

Dates
First available in Project Euclid: 28 April 2007

Permanent link to this document
http://projecteuclid.org/euclid.aoms/1177728652

Digital Object Identifier
doi:10.1214/aoms/1177728652

Mathematical Reviews number (MathSciNet)
MR66602

Zentralblatt MATH identifier
0058.35404

Citation

Daniels, H. E. Saddlepoint Approximations in Statistics. Ann. Math. Statist. 25 (1954), no. 4, 631--650. doi:10.1214/aoms/1177728652. http://projecteuclid.org/euclid.aoms/1177728652.
