## Abstract

Let $(\mathscr{X}, \mathscr{A})$ be a measurable space and let $\Theta$ be an open subset of the $k$-dimensional Euclidean space $\mathscr{E}_k$. For each $\theta\in\Theta$, let $P_\theta$ be a probability measure on $\mathscr{A}$. Let $\{X_n, n \geqq 0\}$ be a discrete parameter Markov process defined on $(\mathscr{X}, \mathscr{A}, P_\theta)$, with $X_n$ taking values in the Borel real line $(R, \mathscr{B})$. Finally, let $\mathscr{A}_n$ be the $\sigma$-field induced by the first $n + 1$ random variables $X_0, X_1, \cdots, X_n$ from the process, and let $P_{n,\theta}$ be the restriction of $P_\theta$ to the $\sigma$-field $\mathscr{A}_n$. Under suitable conditions on the process, the following results are derived. Let $\theta_0$ be an arbitrary but fixed point in $\Theta$ and let $\Delta_n(\theta_0)$ be a $k$-dimensional vector defined in terms of the random variables $X_0, X_1, \cdots, X_n$; $\Delta_n^\ast(\theta_0)$ stands for a certain truncated version of $\Delta_n(\theta_0)$. By means of $\Delta_n^\ast(\theta_0)$ and $h\in\mathscr{E}_k$, one defines a probability measure $R_{n,h}$, $n \geqq 0$. The first main result is that the sequences $\{P_{n,\theta}\}$ and $\{R_{n,h}\}$ of probability measures, with $h = n^{\frac{1}{2}}(\theta - \theta_0)$, $\theta\in\Theta$, are differentially equivalent at the point $\theta_0$. (See Definition 5.1.) This is shown in Corollary 5.1. It is also proved in Corollary 5.2 that the sequence $\{\Delta_n^\ast(\theta_0)\}$ is differentially sufficient at $\theta_0$ (see Definition 5.2) for the family $\{P_{n,\theta}; \theta\in\Theta\}$ of probability measures. Next, let $\{h_n\}$ be a bounded sequence of $h$'s in $\mathscr{E}_k$ and set $\theta_n = \theta_0 + h_n n^{-\frac{1}{2}}$. Then, for hypothesis testing problems, Theorem 6.1 allows one to restrict oneself to the class of tests depending on $\Delta_n(\theta_0)$ alone, at least as far as the asymptotic power of the test under alternatives of the form $P_{n,\theta_n}$ is concerned.
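As a purely illustrative sketch (not part of the paper), the local-alternative reparametrization used above can be pictured numerically: a bounded $h$ yields parameter points $\theta_n = \theta_0 + h\,n^{-1/2}$ that shrink toward $\theta_0$ at rate $n^{-1/2}$, while $h = n^{1/2}(\theta_n - \theta_0)$ is recovered exactly. The function name `local_alternative` and the sample values are hypothetical.

```python
import math

def local_alternative(theta0, h, n):
    # theta_n = theta_0 + h * n^(-1/2): the contiguous alternative at sample size n
    return theta0 + h * n ** -0.5

theta0 = 1.0
h = 2.0  # a fixed (bounded) direction in E_1; hypothetical value

for n in (100, 10_000, 1_000_000):
    theta_n = local_alternative(theta0, h, n)
    # the rescaling n^{1/2}(theta_n - theta_0) recovers h (up to rounding)
    h_back = math.sqrt(n) * (theta_n - theta0)
    print(n, theta_n, h_back)
```

The point of the scaling is that alternatives of this form stay "close enough" to $\theta_0$ for the asymptotic power of tests based on $\Delta_n(\theta_0)$ to be nondegenerate.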
In Section 7, these results are applied to the case of testing hypotheses about a real-valued parameter. More specifically, asymptotically most powerful tests for testing the hypothesis $\theta = \theta_0$ against one-sided alternatives are constructed. This is covered in Theorem 7.1.1. Also, an asymptotically most powerful unbiased test of the same hypothesis against two-sided alternatives is constructed in Theorem 7.1.2. The first of these problems was also dealt with in Johnson and Roussas [2], but the approach is different here. The second problem is solved in Wald [8] for the independent identically distributed case; however, both the assumptions and the approach are different here, in addition to the Markovian character of the random variables involved. Section 6 treats the general situation where $\Theta$ is an open subset of $\mathscr{E}_k$. Theorem 6.1 together with Theorem 6.3 provides a way to study the corresponding hypothesis testing problem in the $k$-dimensional parameter case. Finally, at the end of the last section, an outline is presented of forthcoming results for that case. These results extend, under substantially weaker conditions, the work of Wald [8], [9] to Markov processes. The method of proof relies heavily on the development in LeCam [3]. Unless otherwise stated, limits will be taken as the sequence $\{n\}$, or subsequences thereof, tends to $\infty$. Integrals without limits will extend over the entire appropriate space. For $h \in \mathscr{E}_k$, $h'$ stands for its transpose. All bounding constants will be finite numbers.

## Citation

Richard A. Johnson and George G. Roussas. "Asymptotically Optimal Tests in Markov Processes." Ann. Math. Statist. 41 (3): 918–938, June 1970. https://doi.org/10.1214/aoms/1177696969