A statistical procedure is called robust if its performance is insensitive to small deviations of the actual situation from the idealized theoretical model. In particular, a robust procedure should be insensitive to the presence of a few "bad" observations; that is, a small minority of the observations should never be able to override the evidence of the majority. (But at the same time the discordant minority might be a prime source of information for improving the theoretical model!) The classical probability ratio test is not robust in this sense: a single factor $p_1(x_j)/p_0(x_j)$ equal (or almost equal) to 0 or $\infty$ may upset the test statistic $T(x) = \prod^n_1 p_1(x_j)/p_0(x_j)$. This leads to the conjecture that appropriate robust substitutes for both fixed sample size and sequential probability ratio tests might be obtained by censoring the single factors at some fixed numbers $c' < c''$. Thus, one would replace the test statistic by $T'(x) = \prod^n_1 \pi(x_j)$, where $\pi(x_j) = \max (c', \min (c'', p_1(x_j)/p_0(x_j)))$. The problem of robustly testing a simple hypothesis $P_0$ against a simple alternative $P_1$ may be formalized by assuming that the true underlying distribution lies in some neighborhood of either of the idealized model distributions $P_0$ or $P_1$. The present paper exhibits two different types of such neighborhoods for which the above-mentioned test, to be called the censored probability ratio test, is most robust in a well-defined minimax sense. The problem solved here originated in the earlier paper Huber (1964), over the question of how to test hypotheses about the mean of contaminated normal distributions.
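The censoring rule is simple enough to sketch in a few lines of Python. This is only an illustration, not the paper's construction: the hypothesis pair $N(0,1)$ versus $N(1,1)$ and the censoring bounds $c' = 0.2$, $c'' = 5$ are arbitrary choices made here for the example.

```python
import math

def normal_pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def classical_lr_statistic(xs):
    """Classical statistic T(x) = prod p1(x_j)/p0(x_j),
    here with p0 = N(0,1) and p1 = N(1,1) as an illustrative pair."""
    t = 1.0
    for x in xs:
        t *= normal_pdf(x, 1.0) / normal_pdf(x, 0.0)
    return t

def censored_lr_statistic(xs, c1=0.2, c2=5.0):
    """Censored statistic T'(x) = prod pi(x_j), where each
    likelihood-ratio factor is clipped to the interval [c1, c2]
    (the paper's c' and c''; the values here are illustrative)."""
    t = 1.0
    for x in xs:
        ratio = normal_pdf(x, 1.0) / normal_pdf(x, 0.0)
        t *= max(c1, min(c2, ratio))
    return t

# A single gross outlier dominates the classical statistic,
# but contributes at most a factor c2 to the censored one.
sample = [0.9, 1.1, 1.3, 10.0]   # last observation is an outlier
print(classical_lr_statistic(sample))
print(censored_lr_statistic(sample))
```

Note how the outlier at 10.0 inflates the classical product by a factor of $e^{9.5}$ but is capped at 5 in the censored version, which is exactly the robustness the abstract describes.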
"A Robust Version of the Probability Ratio Test." Ann. Math. Statist. 36 (6) 1753 - 1758, December, 1965. https://doi.org/10.1214/aoms/1177699803