Testing Un-Separated Hypotheses by Estimating a Distance

In this paper we propose a Bayesian answer to testing problems in which the hypotheses are not well separated. The idea of the method is to study the posterior distribution of a discrepancy measure between the parameter and the model we want to test for. This is shown to be equivalent to a modification of the testing loss. An advantage of this approach is that it easily adapts to complex hypotheses, which are in general difficult to test for. Asymptotic properties of the test can be derived from the asymptotic behaviour of the posterior distribution of the discrepancy measure, which gives insight on possible calibrations. In addition one can derive separation rates for testing, which ensure the asymptotic frequentist optimality of our procedures.


Introduction
Bayesian hypothesis testing, although widely studied in the literature, is still subject to controversy (see Jeffreys, 1939; Bernardo, 1980; Berger and Sellke, 1987; Gelman, 2008, to name a few). In particular, a lot of effort has been put into reconciling Bayesian and frequentist testing procedures, as in Berger and Sellke (1987), Berger et al. (1997) or Berger and Delampady (1987). In this paper, we focus on the specific case of two-hypothesis testing, although we believe that the ideas developed here are more general; more precisely, we consider testing problems of the form

H 0 : θ ∈ M 0 versus H 1 : θ ∈ M 1 , (1)

where M 0 and M 1 are not well separated, i.e. M̄ 0 ∩ M̄ 1 ≠ ∅, where F̄ stands for the closure of F. When considering prediction, it is now well known that standard Bayesian methods such as the Bayesian Information Criterion (BIC) have a tendency to favour the simpler model, even when the more complex one gives better predictions, as shown in Erven et al. (2012). This phenomenon also occurs in a testing or model selection setting when hypotheses are nested, and induces a loss of power for the Bayesian test near the null. In our view, one reason for this lack of efficiency of standard Bayesian testing approaches, such as the Bayes Factor or the comparison of posterior probabilities, is that parameters close to the boundary between the hypotheses can be approximated from both sides. Thus, depending on the prior distribution on both the null and the alternative, some inconsistency may occur. This phenomenon is illustrated on some examples in Section 2. This loss of power of Bayesian testing procedures, induced by the prior, is troublesome as it is difficult to control for and strongly depends on the prior distribution. Finding good prior distributions for testing has been a subject of high interest in recent years.
In particular, Johnson and Rossell (2010) (or the actualised version of their ideas developed in Rossell and Telesca, 2017) consider a similar case of un-separated hypotheses. Their idea is to enforce separation through the prior distribution using non-local priors. As exposed in Rousseau and Robert (2010), this approach can be viewed as a modification of the loss used for testing. It appears on simple examples studied in Section 2 that imposing such a penalty can make it more difficult to detect parameters near the boundary between hypotheses (see Section 2.1 for instance).
The problem of finding a good prior distribution for testing has also been tackled by Johnson (2013), where the author introduced uniformly most powerful Bayesian tests.
The author proposes to calibrate the method by maximizing the probability that the Bayes Factor exceeds a certain threshold under the alternative. However, the proposed method seems difficult to extend beyond exponential models. In this paper we propose a novel approach to the problem of testing un-separated hypotheses, based on the evaluation of a discrepancy between the parameter θ and the hypothesis at hand. A great advantage of the approach is that it is in general easy to use in practice and it generalizes directly to nonparametric hypothesis testing. Let D(θ, M 0 ) be a discrepancy measure between θ and M 0 . Following the frequentist approach to testing, our idea is to associate θ with M 0 if D(θ, M 0 ) is below a certain threshold τ . This idea of choosing the model closest to the parameter for a certain metric is quite general and we believe that it could be applied in a wide variety of settings. In this paper, we only focus on the simpler problem of two-hypothesis testing.
Although not aiming at the same problem, this approach is similar to the idea of approximating precise hypotheses by point null hypotheses as studied in Berger and Delampady (1987), which can be re-interpreted as a use of non-local priors as argued in Johnson and Rossell (2010). This approximation of hypotheses was later studied in Verdinelli and Wasserman (1998) and Rousseau (2007). More specifically, in the latter the author proposes a generalization of the 0 − 1 loss function from which a Bayesian test is derived, and which induces a separation of the hypotheses. Following Rousseau (2007), we consider the following loss function

L τ (θ, δ) = γ 0 δ 1{D(θ, M 0 ) ≤ τ } + γ 1 (1 − δ) 1{D(θ, M 0 ) > τ }, (2)

where the parameters γ 0 and γ 1 have the same interpretation as for the standard weighted 0 − 1 loss (see Robert, 2007) in terms of price of misclassification error. A default choice is to take γ 0 = γ 1 . This modification of the loss function can also be viewed as a relaxation of the hypotheses

H τ 0 : D(θ, M 0 ) ≤ τ versus H τ 1 : D(θ, M 0 ) > τ. (3)

For a fixed threshold τ , the same idea was applied in Dunson and Peddada (2008) and Wang and Dunson (2011) for testing equality in distribution against stochastic ordering. From a decision theoretic point of view, this loss is relevant since it indicates that we do not pay for misclassified parameters that lie in a region in which we cannot differentiate the null from the alternative. In addition, as argued in Berger and Delampady (1987), one is in general not so interested in knowing whether θ belongs to M 0 but rather whether θ ∈ M 0 is a reasonable approximation. From a more practical point of view, this approach gives a method for constructing Bayesian tests that separate the hypotheses well in a wide variety of contexts, including complex alternatives such as nonparametric models. Deriving the Bayesian answer to (3) can also lead to simpler procedures.
The Bayesian estimate associated with a prior Π on the parameter set M = M 0 ∪ M 1 , the loss (2) and data Y n is given by

δ π n (τ ) = 1{Π(D(θ, M 0 ) > τ |Y n ) > γ 0 /(γ 0 + γ 1 )}, (4)

where Π(·|Y n ) denotes the posterior distribution of the parameter θ given the observations Y n . From this last equation, we see that the behaviour of a test based on our modified 0 − 1 loss is driven by the behaviour of D(θ, M 0 ). This will prove particularly useful when testing complex or nonparametric versus complex or nonparametric hypotheses, which is known to be a difficult case to handle and has not received much attention in the Bayesian literature. In addition, even for simpler models, the behaviour of D(θ, M 0 ) may be easy to study for a wide variety of priors, as shown in Section 3.1 for instance. From this formulation, we see that prior distributions that induce a good behaviour for D(θ, M 0 ) in terms of concentration properties will also be good candidates for testing with this approach. Note that such priors may differ from those that lead to good properties for estimating θ, as shown for example in Section 3.2. Note also that to compute the Bayesian test with formulation (4), we only have to sample from the posterior. This thus bypasses two of the main difficulties faced when studying the Bayes Factor: choosing an appropriate prior and computing the marginal distribution.
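Since a test of this form only requires sampling from the posterior, it can be approximated directly from posterior draws. The following is a minimal Monte Carlo sketch (the function names and the toy Gaussian posterior are ours, not the paper's):

```python
import numpy as np

def discrepancy_test(posterior_draws, D, tau, gamma0=0.5, gamma1=0.5):
    """Monte Carlo version of the decision rule: answer 1 (reject H0) when
    the estimated posterior probability that D(theta, M0) exceeds tau is
    larger than gamma0 / (gamma0 + gamma1)."""
    post_prob = np.mean([D(theta) > tau for theta in posterior_draws])
    return int(post_prob > gamma0 / (gamma0 + gamma1))

# Toy illustration: a Gaussian posterior centred at 0.5 for a scalar theta,
# discrepancy D(theta, {0}) = |theta| and threshold tau = 0.1.
rng = np.random.default_rng(0)
draws = rng.normal(0.5, 0.1, size=10_000)
print(discrepancy_test(draws, abs, 0.1))  # -> 1: theta detected away from 0
```

Any sampler producing posterior draws (MCMC, exact conjugate sampling, etc.) can be plugged in, which is what makes the procedure usable for complex alternatives.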
Once the discrepancy measure is chosen, the remaining problem is calibrating the threshold τ . In an informative context where one has prior knowledge on the acceptable discrepancy from M 0 , τ can be calibrated subjectively. However, such prior knowledge may not be available. We thus propose a calibration of τ based on asymptotic arguments. Heuristically, one would like to find a threshold τ that minimizes the testing error. Johnson (2013) proposed a similar idea for constructing uniformly most powerful Bayesian tests, where he proposes to choose a prior for testing that maximizes the probability that the Bayes Factor exceeds a certain threshold for all θ ∈ M 1 . In our case, minimizing the testing error might not be possible in general, even for some simple models. We thus propose a calibration method based on the asymptotic control of the type I and type II errors. More precisely, we choose τ = τ n to be the smallest sequence such that

sup θ∈M 0 E n θ [δ π n (τ n )] = o(1), (5)

where E n θ denotes the expectation with respect to Y n ∼ P θ . Given the formulation of the test (4), finding such a calibration only requires a control of the asymptotic behaviour of D(θ, M 0 ) under the posterior. We then study for which sequences ρ n we have

sup θ∈M 1 ,D(θ,M 0 )>ρ n E n θ [1 − δ π n (τ n )] = o(1). (6)

The sequence ρ n is thus an upper bound on the separation rate of the test (see Lepski and Tsybakov, 2000). Separation rates indicate how close a parameter of M 1 can be to M 0 and still be detected by the test. Although separation rates have been widely studied in the frequentist literature, to the author's best knowledge, the only related result in the Bayesian literature has been proposed in Rossell and Telesca (2017). Note that if the test separates both hypotheses at the best possible rate in the minimax sense, the decision rule δ π n (τ ), although being a Bayesian answer to the relaxed testing problem (3), is also an asymptotically optimal frequentist answer to the original testing problem (1).
This indicates that such a test can catch up with frequentist methods for detecting parameters close to the boundary between the hypotheses. This is, to the best of our knowledge, a new result for Bayesian tests. The counterpart is of course a loss in parsimony enforcement. In the remainder of the paper, we study on two examples the problems that can occur at or close to the boundary between hypotheses. We then propose a general calibration of τ n for some usual testing problems and show that our method achieves the minimax separation rates in these cases. In the last sections, we compare our approach to existing ones for a nonparametric test.

Boundary problems
In this section we illustrate on simple examples the problems faced by the non-local prior approach to testing proposed by Johnson and Rossell (2010) and further developed in Rossell and Telesca (2017), as well as by standard priors, when the parameter is at, or near, the boundary between the null and the alternative.

Point null hypotheses
Consider the following data generating process X n ∼ N (θ, 1/ √ n), and the test H 0 : θ = 0 versus H 1 : θ ≠ 0. To compute the standard Bayes Factor for this problem, define M 0 = {0} and M 1 = R\{0}, let the prior distribution π on θ be π : θ ∼ N (0, σ 2 ), and choose equal prior weights on both hypotheses. We can easily derive the usual Bayes Factor for this problem,

B 0,1 = (1 + nσ 2 ) 1/2 exp{−n 2 σ 2 X n 2 /(2(1 + nσ 2 ))},

and compare it to 1. Comparing the Bayes Factor with the fixed threshold c = 1 is equivalent to comparing the posterior mass of M 0 with 1/2. For the non-local prior we use the moment prior proposed in Rossell and Telesca (2017) with parameters fixed as proposed in their paper. The form of the proposed prior is displayed in Figure 1. We can similarly derive the Bayes Factor associated with this prior, which we again compare to 1. For the method proposed in this paper, we choose as a discrepancy measure D(θ, M 0 ) = |θ|. We now have to calibrate τ n such that the test satisfies (5)-(6). We shall see in Theorem 1 below that in this case, choosing τ n = u n n −1/2 for any u n → +∞ ensures consistency. To calibrate u n , note that we have

Π(|θ| > τ n |Y n ) = 1 − Φ((τ n − m x )/σ x ) + Φ((−τ n − m x )/σ x ),

where σ 2 x = (n + σ −2 ) −1 and m x = nX n σ 2 x are the posterior variance and the posterior mean respectively, and Φ is the cumulative distribution function of a standard Gaussian. To get a low type I error while not deteriorating the separation rate, we choose u n = max(Φ −1 (0.975), log(log(n))). We run all three methods on simulated data generated for three different parameters θ 0 , namely 2(log(n)/n) 1/2 , (log(n)/n) 1/2 and 0. The first two parameters get closer and closer to the boundary between hypotheses as the number of observations grows, while the third is in M 0 . The results are presented in Figure 2. We observe that even when the parameter is at a reasonable distance from M 0 , the non-local prior seems to penalize too much, and thus will contract on the simpler model, while the other approaches do detect the parameter as non-zero.
When the parameter is at distance (log(n)/n) 1/2 , the usual Bayes Factor does not clearly detect the parameter as non-null, while the proposed method asymptotically does. The price to pay is a slower decay of the type I error, of the order of a power of log(n), to be compared with an exponential decay for the Bayes Factor. Figure 2: Proportion of tests that classify the parameter as non-null, for N = 5 × 10 4 replications of the test. The Bayes Factors obtained with the Gaussian and non-local priors are compared to 1. For the discrepancy method τ n = max[1.96, log(log(n))]/ √ n.
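In this conjugate setting the posterior is Gaussian, so Π(|θ| > τ n |Y n ) is available in closed form and no sampling is needed. A small sketch of the discrepancy test (our own code; the calibration max(1.96, log log n)/√n follows the caption of Figure 2):

```python
import math

def phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reject_point_null(xbar, n, sigma2=1.0):
    """Discrepancy test of H0: theta = 0 with D(theta, M0) = |theta|.
    Under the N(0, sigma2) prior, the posterior is N(m_x, s2_x) with
    s2_x = (n + 1/sigma2)^(-1) and m_x = n * xbar * s2_x; H0 is rejected
    when Pi(|theta| > tau_n | Y^n) > 1/2."""
    s2_x = 1.0 / (n + 1.0 / sigma2)
    m_x = n * xbar * s2_x
    s_x = math.sqrt(s2_x)
    tau_n = max(1.96, math.log(math.log(n))) / math.sqrt(n)
    post_prob = 1.0 - phi((tau_n - m_x) / s_x) + phi((-tau_n - m_x) / s_x)
    return int(post_prob > 0.5)

print(reject_point_null(xbar=1.0, n=100))  # -> 1 (detected as non-null)
print(reject_point_null(xbar=0.0, n=100))  # -> 0 (classified in M0)
```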

Un-separated hypotheses
We now consider a case where both the null and the alternative have similar sizes. Using the same setting as before, we now test H 0 : θ ≤ 0 versus H 1 : θ > 0, and thus M 0 = (−∞, 0] and M 1 = (0, +∞), using the same prior π as before. To compute the usual Bayes Factor, we take as priors on M 0 and M 1 respectively the restrictions π M 0 (θ) = 2π(θ)1 θ≤0 and π M 1 (θ) = 2π(θ)1 θ>0 . The Bayes Factor is then

B 0,1 = Π(θ ≤ 0|Y n )/Π(θ > 0|Y n ).

From this formulation, we see that the Bayes Factor B 0,1 will not detect the parameter θ = 0 (which is at the boundary between M 0 and M 1 ) as belonging to M 0 , leading to poor frequentist performance of such a test in this case. To compare this approach to the non-local prior method, we construct a prior π M 1 on the alternative using the moment prior described in Rossell and Telesca (2017), which enforces a separation of the hypotheses. We consider the following modification of the prior: π M 1 (θ) = (θ 2 /τ )π 1 (θ/τ ). A plot of this prior is given in Figure 3. We can then compute the corresponding Bayes Factor B M 0,1 ; the marginals are easily computed using simple Monte Carlo integration. Here again we compare the Bayes Factor B M 0,1 with the fixed threshold 1. In order to compare these approaches with the one proposed in this paper, we first need to choose a discrepancy measure D and calibrate the threshold τ . We choose D(θ, M 0 ) = max(θ, 0). Using a simple standard Gaussian prior, we can easily calibrate the threshold τ n using the same approach as before. We have that for any sequence u n going to infinity as slowly as needed, τ n = Cu n n −1/2 leads to a separation rate ρ n ≤ 2τ n . We now calibrate the constant C and the sequence u n based on heuristics. Again denoting by σ 2 x = (n + 1/σ 2 ) −1 and m x = nX n σ 2 x the posterior variance and posterior mean respectively, we have

Π(D(θ, M 0 ) > τ n |Y n ) = 1 − Φ((τ n − m x )/σ x ).

We then choose again u n = max(log(log(n)), Φ −1 (0.975)), which ensures consistency while not deteriorating the separation rate too much.
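The boundary behaviour can be checked numerically: with a prior symmetric around 0, the Bayes Factor equals the posterior odds of θ ≤ 0, so at the boundary θ 0 = 0 it is exactly indifferent, whereas the discrepancy rule classifies θ in M 0 . A sketch (our own code, reusing the closed-form Gaussian posterior above):

```python
import math

def phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def one_sided_decisions(xbar, n, sigma2=1.0):
    """For H0: theta <= 0 vs H1: theta > 0 under a N(0, sigma2) prior,
    return (Bayes Factor accepts H0, discrepancy test accepts H0).
    The Bayes Factor rule B_{0,1} = p0/(1-p0) > 1 amounts to p0 > 1/2
    with p0 = Pi(theta <= 0 | Y^n); the discrepancy rule uses
    D(theta, M0) = max(theta, 0) and tau_n = u_n / sqrt(n)."""
    s2_x = 1.0 / (n + 1.0 / sigma2)
    m_x = n * xbar * s2_x
    s_x = math.sqrt(s2_x)
    p0 = phi(-m_x / s_x)                      # posterior mass of M0
    bf_accepts_h0 = p0 > 0.5                  # avoids dividing by 1 - p0
    tau_n = max(1.96, math.log(math.log(n))) / math.sqrt(n)
    disc_accepts_h0 = 1.0 - phi((tau_n - m_x) / s_x) <= 0.5
    return bf_accepts_h0, disc_accepts_h0

# At the boundary (xbar = 0) the Bayes Factor does not favour M0,
# while the discrepancy test does.
print(one_sided_decisions(xbar=0.0, n=100))  # -> (False, True)
```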
Similarly to what we did in the previous section, we compare the results obtained with the three different methods on simulated data generated with parameters θ 0 = 2(log(n)/n) 1/2 , (log(n)/n) 1/2 and 0. The results are given in Figure 4. We observe that the Bayes Factor constructed using the non-local prior of Rossell and Telesca (2017) has difficulty detecting as positive parameters that are in M 1 but close to M 0 , due to the penalization induced by the prior. On the other hand, the usual Bayes Factor based on the simple conjugate Gaussian prior does not detect θ = 0 as belonging to M 0 , while the other two methods have good asymptotic behaviour. We thus see that both the usual Bayes Factor and the approach based on non-local priors have difficulties detecting parameters at or near the boundary. More worryingly, the behaviour of these methods near the boundary strongly depends on the sets M 0 and M 1 and on the prior contraction on these sets. On the other hand, the proposed method, although a little less efficient for finite sample sizes, does detect parameters at or near the boundary. An easy fix to get better results for the Bayes Factor and non-local priors in this particular setting would be to elicit M 0 = {0}. Nevertheless, the same behaviours exposed in the previous section would remain. Furthermore, for more complex hypotheses, it could be difficult to single out the boundary as a separate hypothesis. We shall see in the next section that for these examples, the proposed method attains asymptotically the minimax separation rate.

Testing parametric hypotheses
Consider the following parametric model for some fixed p > 0, Y n ∼ P n θ , for θ ∈ Θ ⊂ R p . For a fixed subset Θ 0 ⊂ Θ we want to test H 0 : θ ∈ M 0 = Θ 0 , versus H 1 : θ ∈ M 1 = Θ ∩ Θ c 0 . This problem has been widely studied in the Bayesian literature (see Robert, 2007, for instance).
In this simple case, the following theorem gives a calibration of the threshold τ n in (3) such that the testing procedure satisfies conditions (5) and (6), and gives an upper bound on the separation rate ρ n .

Theorem 1. Let Π be a prior distribution on Θ and d be a metric on the parameter space Θ. Assume that for some positive sequence ε n we have

sup θ 0 ∈Θ E n θ 0 [Π(d(θ, θ 0 ) > ε n |Y n )] = o(1). (8)

Then, taking D(θ, Θ 0 ) = inf θ 0 ∈Θ 0 d(θ, θ 0 ) and τ n = ε n , the decision rule δ π n (τ n ) defined in (4) satisfies (5) and (6) with ρ n = 2ε n .

Condition (8) is the standard concentration property of the posterior, which is known to hold for regular models with ε n = n −1/2 u n , where u n is any positive sequence increasing to infinity (see for instance Ghosal et al., 2000a; Ghosal and van der Vaart, 2007). In this case the separation rate ρ n of the proposed test is the minimax separation rate n −1/2 up to the factor u n . The proof of this theorem is postponed to Section 6.1. From the proof of Theorem 1, we can also derive upper bounds for sup θ∈Θ 0 E n θ [δ π n (τ n )] and sup θ∈Θ,d(θ,Θ 0 )>2ε n E n θ [1 − δ π n (τ n )]. Under some regularity assumptions on the models, we get that the type I and type II errors can be uniformly bounded by e −Cu 2 n for some constant C > 0. Choosing u n of the order of log(n) thus gives a polynomial decay, uniformly, for both errors. As argued in Johnson and Rossell (2010), the Bayes Factor usually contracts at an exponentially fast rate for a true alternative. However, this is to be balanced with the fact that here the proposed control is uniform over all θ ∈ Θ such that d(θ, Θ 0 ) > 2ε n .

Detection of signal in white noise
We now apply our approach to the problem of detecting signal in the standard white noise model. This problem is closely related to the well studied goodness-of-fit testing problem, where one is interested in testing a parametric hypothesis versus a nonparametric one. Here again this problem has been extensively studied in the literature. Goodness-of-fit testing has been considered both from a frequentist and a Bayesian point of view; see for instance Ingster and Suslina (2003), Dass and Lee (2004), or Tokdar et al. (2010) for a review. The specific problem of detection of signal in white noise has also been treated in Ingster (1987); Lepski and Spokoiny (1999); Lepski and Pouet (2008).
Here we consider the equivalent infinite Gaussian sequence model

Y i = f i + n −1/2 ε i , ε i iid ∼ N (0, 1), i = 1, 2, . . . , (9)

where f = (f i ) i≥1 ∈ l 2 = {g, Σ i g 2 i < ∞}. Similarly to Lepski and Spokoiny (1999), we test f = 0 against a Sobolev ellipsoid of fixed smoothness s,

W s 2 (L) = {f ∈ l 2 , Σ i i 2s f 2 i ≤ L}.

We consider a conjugate Gaussian prior as in Section 3 of Castillo and Rousseau (2015). For k n = n 2/(4s+1) and any increasing sequence s = (s 1 , s 2 , . . .) such that s k n ≤ n 4s/(4s+1) and Σ k n i=1 1/(n + s i ) ≤ ρ n /4, we define Π by

f i ∼ N (0, 1/s i ) independently for i ≤ k n , and f i = 0 for i > k n . (10)

We choose the discrepancy measure D(f, M 0 ) to be the l 2 norm of f , ||f || 2 = (Σ i f 2 i ) 1/2 . The following theorem gives a calibration of the threshold τ n in (3) and an upper bound on the separation rate of our testing procedure.
Theorem 2. Let Y n be a sample from (9) and consider a prior on f as defined in (10). Let v n be any sequence increasing to infinity, let ρ n = v n n −2s/(4s+1) and let τ n be such that τ 2 n = Cρ n /2 + k n /n + Σ k n i=1 1/(n + s i ) for some positive constant C. Setting d to be the l 2 norm, the decision rule δ π n as defined in (4) satisfies (5) and (6).

Here again the separation rate ρ n of the test is the minimax separation rate, as shown in Ingster (1987). An interesting aspect of this test is that it does not rely on the precise estimation of the true underlying function but rather on the semiparametric estimation of D(f, M 0 ), which allows us to obtain a separation rate polynomially faster than the estimation rate for Sobolev alternatives. It is to be noted that the prior (10) is not optimal for the estimation problem but leads to the best possible separation rate for the testing problem. The proof of this theorem is postponed to Section 6.2.
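Under a coordinatewise Gaussian prior of this type the posterior also factorizes, with f i |Y n ∼ N (nY i /(n + s i ), 1/(n + s i )) for i ≤ k n , so the test reduces to sampling ||f || 2 2 and comparing it with τ 2 n . A hedged sketch (our own code; the threshold constant is illustrative, not the calibrated one):

```python
import numpy as np

def signal_detection_test(Y, n, s, tau2, n_draws=5000, seed=None):
    """Posterior-sampling test of f = 0 in the sequence model: under the
    conjugate prior f_i ~ N(0, 1/s_i), i <= k_n, the posterior is
    f_i | Y ~ N(n Y_i / (n + s_i), 1 / (n + s_i)). Reject the null when
    Pi(||f||_2^2 > tau2 | Y^n) > 1/2."""
    rng = np.random.default_rng(seed)
    Y, s = np.asarray(Y, float), np.asarray(s, float)
    post_mean = n * Y / (n + s)
    post_sd = 1.0 / np.sqrt(n + s)
    draws = rng.normal(post_mean, post_sd, size=(n_draws, len(s)))
    return int(np.mean((draws ** 2).sum(axis=1) > tau2) > 0.5)

# Toy run with n = 400, k_n = 20 coefficients and s_i = i^2.
rng = np.random.default_rng(1)
n, kn = 400, 20
s = np.arange(1, kn + 1) ** 2.0
tau2 = 2 * (kn / n + np.sum(1.0 / (n + s)))   # illustrative threshold
Y_null = rng.normal(0.0, 1.0 / np.sqrt(n), kn)     # data from f = 0
Y_signal = Y_null.copy()
Y_signal[0] += 1.0                                 # data with f_1 = 1
print(signal_detection_test(Y_null, n, s, tau2, seed=2))    # -> 0
print(signal_detection_test(Y_signal, n, s, tau2, seed=3))  # -> 1
```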

Shape constraints testing

Statistical setting
We consider the nonparametric fixed design regression problem with Gaussian residuals: for n > 0,

Y i = f (i/n) + σε i , i = 1, . . . , n, (12)

where σ > 0 and (ε 1 , . . . , ε n ) is a sequence of independent standard Gaussian random variables. The approach presented in this paper is also valid for non-uniform and random designs under additional conditions, but considering these cases would only make the computations more complex and they will thus not be treated here. For this problem, we consider a piecewise constant prior distribution on the regression function f and a prior with density π σ with respect to the Lebesgue measure on σ. More precisely, for ω = (ω 1 , . . . , ω k ) ∈ R k we consider piecewise constant functions f ω,k (x) = Σ k i=1 ω i 1 x∈[(i−1)/k,i/k) , and we choose the following form for the prior on f :

dΠ(f ) = π k (k)π ω (ω 1 , . . . , ω k |k)dλ k (ω 1 , . . . , ω k )dν(k), (13)

where λ k is the Lebesgue measure on R k and ν the counting measure on N. Note that a similar prior has been studied in Holmes and Heard (2003) for modelling monotone functions. Here again, although this prior is not well suited for the estimation problem, it gives good theoretical and practical results for testing the shape constraints studied in this paper, as shown below. For simplicity we consider a product form for π ω , π ω (ω 1 , . . . , ω k |k) = Π k i=1 g(ω i ), where g is a density on R. In addition we assume that the following conditions hold:

C1 the density π σ is bounded and continuous, and π σ (σ) > 0 for all σ ∈ (0, σ̄),

C2 the density g is continuous, positive on R and bounded from above,
C3 π k is such that there exist positive constants C d and C u such that

C d e −C u kL(k) ≤ π k (k) ≤ C u e −C d kL(k) ,

where L(k) is either log(k) or 1.
Conditions C1 and C2 are mild and are satisfied by a large variety of distributions. In Section 5.1 we take g to be a Gaussian density and π σ to be an inverse gamma density. Simple algebra shows that for this choice of prior, both conditions are satisfied. Condition C3 is a usual condition when considering mixture models with a random number of components (see e.g. Rousseau, 2010) and is satisfied by the Poisson or Geometric distributions for instance.
Define the sets

F + = {f : [0, 1] → R, f (x) ≥ 0 for all x ∈ [0, 1]},
F ↓ (K) = {f : [0, 1] → R, f nonincreasing, ||f || ∞ ≤ K}.

These problems have been considered in the literature, in Juditsky and Nemirovski (2002) and Baraud et al. (2005) for instance. Note that with a prior chosen as in (13) we have π(F + ) > 0 and π(F ↓ (K)) > 0. Furthermore, if the true regression function f 0 is in F + or F ↓ (K), then the piecewise constant function with k pieces of the form (13) which minimizes the Kullback-Leibler divergence with P f 0 will also be in F + , respectively F ↓ (K), for all k.
We then study the posterior separation rate of the test with respect to the metric

d ∞ (f, F ) = inf g∈F ||f − g|| ∞ .

For each test we compute the separation rate of our procedure and compare it with the minimax separation rate, which is n −α/(2α+1) in both cases.
Our approach could also apply to other types of shape constraints such as convexity or unimodality using similar methods.

Testing for positivity
We first consider positivity constraints. There exist a few methods to test for positivity in a nonparametric setting; see for instance Baraud et al. (2005). We propose the following discrepancy measure for D in (3):

D(f, F + ) = − inf x∈[0,1] f (x).

We immediately have that D(f, F + ) ≤ 0 if and only if f ∈ F + . Here the discrepancy measure can be related to the supremum distance to the set of positive functions. For piecewise constant functions f ω,k , D(f ω,k , F + ) has the simple expression D(f ω,k , F + ) = − min 1≤i≤k (ω i ). This turns out to be particularly useful for the calibration of the threshold τ n . Let G k be the set of piecewise constant functions with k pieces. The idea of the calibration of τ n is the following. In the model G k , the a posteriori uncertainty for estimating ω = (ω 1 , . . . , ω k ) is of order (k/n) 1/2 . Hence any function f ω,k such that ω i ≥ −O{(k/n) 1/2 } for all i might be detected as possibly positive. We thus choose a threshold τ k n of similar order for each model G k . The results are presented in the following theorem.

Theorem 3. Under the assumptions C1 to C3, for a fixed constant M 0 > 0, setting τ = τ k n = M 0 {k log(n)n −1 } 1/2 and δ π n the testing procedure defined in (4), there exists some M > 0 such that, uniformly for α ∈ [α 0 , 1], ∀α 0 > 0, δ π n satisfies (5) and (6) with ρ n (α) = M v n {log(n)/n} α/(2α+1) for any sequence v n increasing to infinity.
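For posterior draws that are piecewise constant, the positivity discrepancy and the per-model threshold τ k n are both one-liners, so the test can be sketched as follows (our own code; the constant M 0 = 1 is arbitrary and must be calibrated in practice, see Section 5.1):

```python
import numpy as np

def positivity_discrepancy(omega):
    """D(f_{omega,k}, F+) = -min_i omega_i: nonpositive exactly when the
    piecewise constant function with heights omega is nonnegative."""
    return -np.min(omega)

def reject_positivity(posterior_omegas, n, M0=1.0):
    """Reject H0: f in F+ when the posterior probability that the
    discrepancy exceeds tau_n^k = M0 * sqrt(k log(n) / n) is above 1/2;
    each posterior draw may have its own number of pieces k."""
    hits = []
    for omega in posterior_omegas:
        k = len(omega)
        tau_k = M0 * np.sqrt(k * np.log(n) / n)
        hits.append(positivity_discrepancy(omega) > tau_k)
    return int(np.mean(hits) > 0.5)

# Draws whose second piece is clearly negative: rejected as positive.
draws = [np.array([0.5, -1.0, 0.3])] * 100
print(reject_positivity(draws, n=500))  # -> 1
```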

Testing for monotonicity
We now consider monotonicity constraints. Tests for monotonicity have been well studied in the frequentist literature; see for instance Baraud et al. (2003, 2005); Ghosal et al. (2000b); Bowman et al. (1998). In a Bayesian setting, only Scott et al. (2015) have proposed a test for monotonicity, using non-local priors. Define the discrepancy measure between f and F ↓ (K) as

D(f, F ↓ ) = sup 0≤x≤y≤1 {f (y) − f (x)}. (17)

Here again, when considering piecewise constant functions f ω,k , (17) takes the simple form D(f ω,k , F ↓ ) = max 1≤i≤j≤k (ω j − ω i ), which allows for a simple calibration of τ n in a similar way as in Section 4.2.
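The quantity max 1≤i≤j≤k (ω j − ω i ) can be computed in a single pass over the heights with a running minimum, which keeps the cost of evaluating the discrepancy on each posterior draw linear in k. A small sketch (our own code):

```python
def monotonicity_discrepancy(omega):
    """D(f_{omega,k}, F_dec) = max_{i <= j} (omega_j - omega_i): zero when
    the heights are nonincreasing, positive as soon as some later piece
    exceeds an earlier one. One pass with a running minimum."""
    best, running_min = 0.0, float("inf")
    for w in omega:
        running_min = min(running_min, w)
        best = max(best, w - running_min)
    return best

print(monotonicity_discrepancy([3.0, 2.0, 2.0, 1.0]))  # -> 0.0 (nonincreasing)
print(monotonicity_discrepancy([1.0, 0.0, 2.0]))       # -> 2.0 (rise from 0 to 2)
```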
Theorem 4. Under the assumptions C1 to C3, for a fixed constant M 0 > 0, setting τ = τ k n = M 0 {k log(n)n −1 } 1/2 and δ π n the testing procedure defined in (4), for all K > 0 there exists some M > 0 such that, uniformly for α ∈ [α 0 , 1], ∀α 0 > 0, δ π n satisfies (5) and (6) with ρ n (α) = M v n {log(n)/n} α/(2α+1) for any sequence v n increasing to infinity.

Neither the prior nor the threshold depends on the regularity α of the regression function under the alternative. Moreover, for all α ∈ (0, 1], the separation rate ρ n (α) is the minimax separation rate up to a log(n) term. Thus our test is almost minimax adaptive. The log(n) term seems to follow from our definition of consistency, where we do not fix a level for the type I or type II error, unlike frequentist procedures. The conditions on the prior are quite loose, and are satisfied in a wide variety of cases. The constant M 0 does not influence the asymptotic behaviour of our test but has a great influence in practice for finite n. A practical way of choosing M 0 is given in Section 5.1.

Prior specification and sampling strategy
Conditions on the prior in Theorem 4 are satisfied for a wide variety of distributions.
However, when no further information is available, some specific choices can ease the computations and lead to good results in practice. We present in this section such a specific choice for the prior and a way to calibrate the hyperparameters. We also fix γ 0 = γ 1 = 1/2 in the definition of δ π n . A practical default choice is the usual conjugate prior given k, i.e. a Gaussian prior on ω with variance proportional to σ 2 and an Inverse Gamma prior on σ 2 . This considerably accelerates the computations, as sampling from the posterior is then straightforward. Condition (14) on π k is satisfied by the two classical distributions on the number of components in a mixture model, namely the Poisson distribution and the Geometric distribution. Choosing a Geometric distribution seems more appropriate as it is less spiked. We thus choose, for λ, a, b > 0, m ∈ R and μ > 0,

k ∼ Geom(λ), ω i |k, σ ∼ N (m, μσ 2 ) independently, σ 2 ∼ IG(a, b).

Standard algebra leads to a closed form for the posterior distribution up to a normalizing constant. Let n i = Card{j, j/n ∈ [(i − 1)/k, i/k)} denote the number of design points in the i-th bin. We can thus compute the posterior distribution of k up to a constant, and will thus be able to sample from π k (k|Y n ) using a truncated approximation of the posterior.
In the examples we choose to truncate at some k 0 ≤ n. We then compute the posterior distribution of ω and σ given k. Given k, sampling from the posterior is thus straightforward.
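Given k, the Gaussian/Inverse-Gamma conjugacy gives the posterior in closed form: σ 2 |Y n is Inverse Gamma (with ω integrated out) and ω|σ 2 , Y n is Gaussian with binwise means. The sketch below is our own code, not the authors' R script; the hyperparameter names m, μ, a, b mirror the text, and the bin bookkeeping for the uniform design is an assumption:

```python
import numpy as np

def sample_posterior_given_k(x, y, k, m=0.0, mu=1.0, a=2.0, b=1.0,
                             n_draws=1000, seed=None):
    """Exact posterior sampling of (omega, sigma^2) given k bins, under
    omega_i ~ N(m, mu * sigma^2) i.i.d. and sigma^2 ~ IG(a, b), for the
    regression model y_j = f(x_j) + sigma * eps_j with x_j in [0, 1)."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    bins = np.minimum((x * k).astype(int), k - 1)
    counts = np.bincount(bins, minlength=k).astype(float)
    sums = np.bincount(bins, weights=y, minlength=k)
    ssq = float((y ** 2).sum())
    prec = counts + 1.0 / mu                 # conditional precisions / sigma^2
    c = (sums + m / mu) / prec               # conditional means of omega_i
    # sigma^2 | Y ~ IG(a + n/2, b + residual/2), with omega integrated out.
    b_post = b + 0.5 * (ssq + k * m ** 2 / mu - np.sum(prec * c ** 2))
    sigma2 = 1.0 / rng.gamma(a + n / 2.0, 1.0 / b_post, size=n_draws)
    omega = c + rng.normal(size=(n_draws, k)) * np.sqrt(sigma2[:, None] / prec)
    return omega, sigma2

# Data from a constant function f = 1 with small noise: the posterior
# heights concentrate near 1 (slightly shrunk towards the prior mean m = 0).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
y = 1.0 + 0.1 * rng.standard_normal(200)
omega, sigma2 = sample_posterior_given_k(x, y, k=2, seed=1)
print(round(float(omega.mean()), 1))  # -> 1.0
```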
A crucial hyperparameter that needs to be calibrated is the constant M 0 in τ . A close inspection of the proofs (in particular the proof of Lemma 2), using the fact that we have a Gaussian posterior, shows that taking τ n = log(k/n)kσ 2 n + kμσ 2 would induce the desired results.

Simulated examples
In this section we run our testing procedure on simulated data to study the behaviour of our test for finite sample sizes. We first examine the behaviour of the proposed test for positivity on an example which illustrates that the separation rate of the test is indeed upper bounded by (log(n)/n) α/(2α+1) up to some constant. We then compare our test for monotonicity to other methods proposed in the literature, and get comparable results for finite sample sizes.

Testing for positivity
Consider the test for positivity proposed in Section 4.2. Similarly to the examples of Section 2, we consider a sequence of functions that are in M 1 , i.e. not positive, but get closer and closer to the boundary. More precisely we take f n (x) = 10ρ n (|x − 0.5| − 0.1)1 |x−0.5|<0.1 , so that ρ n = d ∞ (f n , F + ). Plots of f n for different values of n are given in Figure 5. Since for all n this function is piecewise linear, we have f n ∈ H(α, L) with α = 1. Given Theorem 3, for some constant M large enough, the test should be consistent for f n if ρ n > M (log(n)/n) 1/3 .
We run our test on simulated data generated from the model (12) with f = f n for different values of M , and with f = 0, which lies at the boundary between the hypotheses. The results are given in Figure 6. We observe that the test detects the parameter at the boundary as positive, even for moderate values of n. In addition, for M > 0.4, the functions f n are detected as non-positive, and the asymptotic regime is attained around n = 2000, while for M < 0.4 the functions f n are not detected as non-positive. This indicates that the test separates the hypotheses at a rate of at least 0.4(log(n)/n) 1/3 , and we thus recover the results of Theorem 3.

Testing for monotonicity
We now compare our approach to testing for monotonicity with the ones proposed in the literature. We consider the following nine functions, adapted from Scott et al. (2015) and Baraud et al. (2003) and plotted in Figure 7. The functions f 1 to f 6 are clearly not in F ↓ (K) with K = 2. The function f 7 has a small bump around x = 0.5, which can be seen as a local departure from monotonicity. This function is thus expected to be difficult to detect for small datasets given our parametrization. The function f 9 is a completely flat function and belongs to F ↓ (K).
For several values of n, we generate N = 500 replicates of the data Y n = {y i , i = 1, . . . , n} from model (12). For each dataset, we approximate π{D(f ω,k , F ↓ ) > τ k n |Y n } based on K = 5 × 10 4 samples from the posterior, and reject the null if this probability exceeds 1/2. The results are given in Table 1.
For all the considered functions, the computational time is reasonable even for large values of n. For instance, for f 1 , we require less than 2 seconds to perform the test for n = 2500 using a simple R script available on demand. We compare our results with the ones obtained in Scott et al. (2015) for the Gaussian prior and with the methods proposed by Baraud et al. (2003) and Akakpo et al. (2014). The results are given in Table 1. The proposed method is on average a little less efficient than the one based on non-local priors, but it seems to perform better for some functions (e.g. f 3 ). When n grows, the percentage of correctly classified functions goes to 1 as predicted by the theory.
We deduce (6) directly from condition (8), which ends the proof.

Proof for the detection of signal in white noise
We first prove that with the proposed calibration of τ n the decision rule (4) satisfies (5). In the sequel c denotes a generic absolute constant that may change from one line to another. We want to bound Π(||f || 2 > τ n |Y n ) when f 0 = 0, for τ 2 n = ρ n /2 + k n /n + Σ k n i=1 1/(n + s i ). For all t ≤ 2n, using the Chernoff bound, we have with P n 0 -probability going to 1, for c large enough, the required bound, which gives the result. We now state an auxiliary result that will be needed for the remainder of the proof. Define H(s, ρ) = {f ∈ W s 2 (L), ||f || 2 > ρ}.

Lemma 1. Let k n = n 2/(4s+1) and ρ n = v n n −2s/(4s+1) , where v n → ∞ slowly with n; then for some constant the stated bound on the posterior mass holds. The proof of this lemma can be found in the supplementary materials (Salomond, 2017).
We now end the proof by showing that δ π n satisfies (6). We want to bound Π(||f || 2 ≤ τ n |Y n ) when f 0 ∈ H(s, ρ n ). For all n/2 > t > 0 and all increasing sequences (s i ) such that s k n ≤ n 4s/(4s+1) , we have, for Y n in an event whose P n 0 -probability goes to 1, using the Chernoff bound and the fact that tρ n /4 + (2k n s k n + t)/n 2 + k n t 2 /n 2 ≤ c for some c, for v n large enough, by taking t ≍ ρ −1 n and s k n ≤ t 2 , where the second line comes from the fact that 1/{(n + s i )(n + s i + 2t)} ≥ (1/n 2 ){1 − 2(s i + t)/n}.

Auxiliary result
For all functions f 0 in L ∞ ([0, 1]), denote by P 0 the probability distribution of Y n generated with f = f 0 , and by f ω 0 ,k the function of G k , the set of piecewise constant functions with k pieces, that minimizes the Kullback-Leibler divergence between P f ω,k and P 0 ; ω 0 can be obtained by a standard computation. The following lemma gives concentration results for f ω,k that will be useful for the study of D(f, F + ) and D(f, F ↓ (K)), for the positivity and monotonicity constraints respectively.
Lemma 2. Let M be a positive constant and let Π be as defined in (13), satisfying conditions C1, C2 and C3. Denote by ω 0 the minimizer of the Kullback-Leibler divergence KL(P f ω,k , P 0 ). Then, if there exists a constant C such that Π(σ 0 /σ < C|Y n ) = o P n 0 (1), for a constant A > 0 large enough we have a concentration bound at scale ξ k n = [{k log(n)}/n] 1/2 , for all fixed positive γ 0 and γ 1 .
The proof of this lemma is given in the supplementary materials. We also state the following lemma that gives a control on the posterior distribution of k.
Lemma 3. Let k n = nε 2 n / log(n) if L(k) = log(k) and k n = nε 2 n if L(k) = 1, where ε n is either ε n (F) if f 0 ∈ F or ε n (α) if f 0 ∈ H(α, L). For C 1 a positive constant that may depend on K or L, let K n = {k ≤ C 1 k n }. If Π is defined as in (13) and satisfies C1 or C1', C2 and C3, we have Π(K c n |Y n ) = o P n 0 (1).
The proof is given in the supplementary materials.
Note that if σ 0 ≤ σ̄, we get directly, for C large enough, that Π(σ 0 /σ < C|Y n ) = o P n 0 (1). Applying Lemma 2 then immediately gives (5) for M 0 large enough. We now show that δ π n (τ n ) satisfies (6) with ρ = ρ n = M {n/ log(n)} −α/(2α+1) v n , for v n as in Theorem 3. First note that for f 0 ∈ H(α, L) such that d ∞ (f 0 , F + ) > ρ n , we have for all k a lower bound on − min 1≤i≤k (ω 0,i ). We thus deduce an upper bound for Π{D(f, F + ) ≤ τ n |Y n }, and we end the proof by applying Lemma 2 together with Lemma 3.
In this paper we proposed a Bayesian approach to testing based on the posterior distribution of a discrepancy between the parameter and the hypotheses at hand. The tests obtained using this approach have been shown to be consistent and to achieve the minimax separation rates when testing parametric hypotheses and in some nonparametric settings.