Bayesian Analysis

Testing Un-Separated Hypotheses by Estimating a Distance

Jean-Bernard Salomond

Abstract

In this paper we propose a Bayesian answer to testing problems in which the hypotheses are not well separated. The idea of the method is to study the posterior distribution of a discrepancy measure between the parameter and the model we want to test for. This is shown to be equivalent to a modification of the testing loss. An advantage of this approach is that it can easily be adapted to complex hypothesis-testing problems, which are in general difficult to handle. Asymptotic properties of the test can be derived from the asymptotic behaviour of the posterior distribution of the discrepancy measure, which also gives insight into possible calibrations. In addition, one can derive separation rates for testing, which ensure the asymptotic frequentist optimality of our procedures.
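The abstract's idea can be sketched on a toy problem. The following is a minimal illustration, not the paper's exact procedure: to test the un-separated point null H0: theta = 0 in a N(theta, 1) model, we examine the posterior distribution of the discrepancy d(theta) = |theta| (the distance from theta to the null) and reject when its posterior expectation exceeds a threshold tied to the posterior contraction rate. The prior, the function name, and the threshold 2/sqrt(n) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_discrepancy_test(x, prior_var=10.0, threshold=None, n_draws=10_000):
    """Illustrative sketch: reject H0 when the posterior mean of the
    discrepancy d(theta) = |theta| exceeds a threshold calibrated to the
    1/sqrt(n) posterior contraction rate (an assumed, toy calibration)."""
    n = len(x)
    # Conjugate N(0, prior_var) prior for a N(theta, 1) likelihood.
    post_var = 1.0 / (n + 1.0 / prior_var)
    post_mean = post_var * n * np.mean(x)
    # Monte Carlo draws from the posterior of theta, mapped through d.
    draws = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)
    d = np.abs(draws)
    if threshold is None:
        threshold = 2.0 / np.sqrt(n)  # illustrative calibration, not the paper's
    return bool(d.mean() > threshold)  # True = reject H0

# Under H0 the posterior discrepancy concentrates near 0, so the test
# typically accepts; under a fixed alternative it concentrates near
# |theta| > 0, so the test rejects.
x_null = rng.normal(0.0, 1.0, size=500)
x_alt = rng.normal(0.5, 1.0, size=500)
print(posterior_discrepancy_test(x_null), posterior_discrepancy_test(x_alt))
```

Because the null is a single point inside the parameter space, no prior mass is placed on it directly; the decision is driven entirely by where the posterior of the distance concentrates, which is what lets the approach handle non-separated hypotheses.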

Article information

Source
Bayesian Anal. (2017), 24 pages.

Dates
First available in Project Euclid: 23 June 2017

https://projecteuclid.org/euclid.ba/1498204951

Digital Object Identifier
doi:10.1214/17-BA1059

Citation

Salomond, Jean-Bernard. Testing Un-Separated Hypotheses by Estimating a Distance. Bayesian Anal., advance publication, 23 June 2017. doi:10.1214/17-BA1059. https://projecteuclid.org/euclid.ba/1498204951

References

• Akakpo, N., Balabdaoui, F., and Durot, C. (2014). “Testing monotonicity via local least concave majorants.” Bernoulli, 20(2): 514–544.
• Baraud, Y., Huet, S., and Laurent, B. (2003). “Adaptive tests of qualitative hypotheses.” ESAIM Probabilités et Statistique, 7: 147–159.
• Baraud, Y., Huet, S., and Laurent, B. (2005). “Testing convex hypotheses on the mean of a Gaussian vector. Application to testing qualitative hypotheses on a regression function.” The Annals of Statistics, 33(1): 214–257.
• Berger, J. O., Boukai, B., and Wang, Y. (1997). “Unified frequentist and Bayesian testing of a precise hypothesis.” Statistical Science, 12(3): 133–160. With comments by Dennis V. Lindley, Thomas A. Louis and David Hinkley and a rejoinder by the authors.
• Berger, J. O. and Delampady, M. (1987). “Testing precise hypotheses.” Statistical Science, 2(3): 317–352. With comments and a rejoinder by the authors.
• Berger, J. O. and Sellke, T. (1987). “Testing a point null hypothesis: irreconcilability of $P$-values and evidence.” Journal of the American Statistical Association, 82(397): 112–139. With comments and a rejoinder by the authors.
• Bernardo, J. (1980). “A Bayesian analysis of classical hypothesis testing.” Trabajos de Estadística y de Investigación Operativa, 31(1): 605–647. http://dx.doi.org/10.1007/BF02888370
• Bogdan, M., Chakrabarti, A., Frommlet, F., and Ghosh, J. K. (2011). “Asymptotic Bayes-optimality under sparsity of some multiple testing procedures.” Annals of Statistics, 39(3): 1551–1579.
• Bowman, A., Jones, M., and Gijbels, I. (1998). “Testing monotonicity of regression.” Journal of Computational and Graphical Statistics, 7(4): 489–500.
• Carvalho, C. M., Polson, N. G., and Scott, J. G. (2010). “The horseshoe estimator for sparse signals.” Biometrika, 97(2): 465–480.
• Castillo, I. and Rousseau, J. (2015). “A General Bernstein–von Mises Theorem in semiparametric models.” The Annals of Statistics. To appear.
• Dass, S. C. and Lee, J. (2004). “A note on the consistency of Bayes factors for testing point null versus non-parametric alternatives.” Journal of Statistical Planning and Inference, 119(1): 143–152.
• Datta, J. and Ghosh, J. K. (2013). “Asymptotic properties of Bayes risk for the horseshoe prior.” Bayesian Analysis, 8(1): 111–131.
• Dunson, D. B. and Peddada, S. D. (2008). “Bayesian nonparametric inference on stochastic ordering.” Biometrika, 95(4): 859–874. http://biomet.oxfordjournals.org/content/95/4/859.abstract
• van Erven, T., Grünwald, P., and de Rooij, S. (2012). “Catching up faster by switching sooner: a predictive approach to adaptive estimation with an application to the AIC-BIC dilemma.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3): 361–417.
• Gelman, A. (2008). “Objections to Bayesian statistics.” Bayesian Analysis, 3(3): 445–449.
• Ghosal, S., Ghosh, J. K., and Van Der Vaart, A. W. (2000a). “Convergence rates of posterior distributions.” The Annals of Statistics, 28(2): 500–531.
• Ghosal, S., Sen, A., and van der Vaart, A. W. (2000b). “Testing monotonicity of regression.” The Annals of Statistics, 28(4): 1054–1082.
• Ghosal, S. and van der Vaart, A. (2007). “Convergence rates of posterior distributions for non-i.i.d. observations.” The Annals of Statistics, 35(1): 192–223.
• Holmes, C. and Heard, N. (2003). “Generalized monotonic regression using random change points.” Statistics in Medicine, 22(4): 623–638.
• Ingster, Y. I. (1987). “Asymptotically minimax testing of nonparametric hypotheses.” In Probability theory and mathematical statistics, Vol. I (Vilnius, 1985), 553–574. VNU Sci. Press, Utrecht.
• Ingster, Y. I. and Suslina, I. A. (2003). Nonparametric goodness-of-fit testing under Gaussian models, volume 169 of Lecture Notes in Statistics. Springer-Verlag, New York.
• Jeffreys, H. (1939). Theory of Probability. Oxford University Press, Oxford.
• Johnson, V. E. (2013). “Uniformly most powerful Bayesian tests.” The Annals of Statistics, 41(4): 1716–1741.
• Johnson, V. E. and Rossell, D. (2010). “On the use of non-local prior densities in Bayesian hypothesis tests.” Journal of the Royal Statistical Society. Series B, Statistical Methodology, 72(2): 143–170.
• Juditsky, A. and Nemirovski, A. (2002). “On Nonparametric Tests of Positivity/Monotonicity/Convexity.” The Annals of Statistics, 30(2): 498–527. http://www.jstor.org/stable/2699966
• Lepski, O. and Tsybakov, A. B. (2000). “Asymptotically exact nonparametric hypothesis testing in sup-norm and at a fixed point.” Probability Theory and Related Fields, 117(1): 17–48.
• Lepski, O. V. and Pouet, C. F. (2008). “Hypothesis testing under composite functions alternative.” In Topics in stochastic analysis and nonparametric estimation, volume 145 of IMA Vol. Math. Appl., 123–150. Springer, New York.
• Lepski, O. V. and Spokoiny, V. G. (1999). “Minimax Nonparametric Hypothesis Testing: The Case of an Inhomogeneous Alternative.” Bernoulli, 5(2): 333–358. http://www.jstor.org/stable/3318439
• Robert, C. P. (2007). The Bayesian choice. Springer Texts in Statistics. Springer, New York, second edition. From decision-theoretic foundations to computational implementation.
• Rossell, D. and Telesca, D. (2017). “Non-Local Priors for High-Dimensional Estimation.” Journal of the American Statistical Association, 112(517): 1–33.
• Rousseau, J. (2007). “Approximating interval hypothesis: $p$-values and Bayes factors.” In Bayesian statistics 8, Oxford Sci. Publ., 417–452. Oxford: Oxford Univ. Press.
• Rousseau, J. (2010). “Rates of convergence for the posterior distributions of mixtures of betas and adaptive nonparametric estimation of the density.” The Annals of Statistics, 38(1): 146–180.
• Rousseau, J. and Robert, C. (2010). “On moment priors for Bayesian model choice: a discussion.” Bayesian Statistics, 9: 1–2.
• Salomond, J.-B. (2017). “Supplement for “Testing un-separated hypotheses by estimating a distance”.” Bayesian Analysis.
• Scott, J. G., Shively, T. S., and Walker, S. G. (2015). “Nonparametric Bayesian testing for monotonicity.” Biometrika, 102(3): 617–630.
• Tokdar, S. T., Chakrabarti, A., and Ghosh, J. K. (2010). “Bayesian nonparametric goodness of fit tests.” In Frontiers of Statistical Decision Making and Bayesian Analysis, M.-H. Chen, D. K. Dey, P. Müller, D. Sun, and K. Ye (Eds.).
• Verdinelli, I. and Wasserman, L. (1998). “Bayesian goodness-of-fit testing using infinite-dimensional exponential families.” The Annals of Statistics, 26(4): 1215–1241.
• Wang, L. and Dunson, D. B. (2011). “Bayesian isotonic density regression.” Biometrika, 98(3): 537–551.