Bayesian Bandwidth Test and Selection for High-dimensional Banded Precision Matrices

Assuming a banded structure is a common practice in the estimation of high-dimensional precision matrices. In this case, estimating the bandwidth of the precision matrix is a crucial initial step for subsequent analysis. Although there exist consistent frequentist tests for the bandwidth parameter, bandwidth selection consistency for precision matrices has not been established in a Bayesian framework. In this paper, we propose a prior distribution tailored to the bandwidth estimation of high-dimensional precision matrices. The banded structure is imposed via the Cholesky factor from the modified Cholesky decomposition. We establish the strong model selection consistency for the bandwidth as well as the consistency of the Bayes factor. The convergence rates for Bayes factors under both the null and alternative hypotheses are derived, and they turn out to be of similar order. As a by-product, we also propose an estimation procedure for the Cholesky factor yielding an almost optimal order of convergence rates. A two-sample bandwidth test is also considered, and it turns out that our method can consistently detect the equality of bandwidths between two precision matrices. A simulation study confirms that our method in general outperforms the existing frequentist and Bayesian methods.


Introduction
Estimating a large covariance or precision matrix is a challenging task in both frequentist and Bayesian frameworks. When the number of variables p is larger than the sample size n, the traditional sample covariance matrix does not provide a consistent estimate of the true covariance matrix (Johnstone and Lu, 2009), and the inverse Wishart prior leads to posterior inconsistency. To overcome this issue, various restricted classes of matrices have been investigated, such as bandable matrices (Bickel and Levina, 2008; Hu and Negahban, 2014), sparse matrices (Xiang et al.; Cao et al., 2017) and matrices with low-dimensional structure (Fan et al.; Pati et al., 2014; Gao and Zhou). In this paper, we focus on banded precision matrices, where the banded structure is encoded via the Cholesky factor of the precision matrix. We are in particular interested in the estimation of the bandwidth parameter and the construction of Bayesian bandwidth tests for one or two banded precision matrices. Inference on the bandwidth is of great importance for detecting the dependence structure of ordered data. Moreover, it is a crucial initial step for subsequent analysis such as linear or quadratic discriminant analysis.

Bandwidth selection for high-dimensional precision matrices has received increasing attention in recent years. An et al. (2014) proposed a test for bandwidth selection, which is asymptotically normal under the null hypothesis and has power tending to one. Based on the proposed test statistic, they constructed a backward procedure to detect the true bandwidth by controlling the familywise error rate. Cheng et al. (2017) suggested a bandwidth test without assuming any specific parametric distribution for the data and obtained a result similar to that of An et al. (2014).
In the Bayesian literature, Banerjee and Ghosal (2014) studied the estimation of bandable precision matrices which include the banded precision matrix as a special case. They derived the posterior convergence rate of the precision matrix under the G-Wishart prior (Roverato;2000). Lee and Lee (2017) considered a similar class to that of Banerjee and Ghosal (2014), but assumed bandable Cholesky factors instead of bandable precision matrices. They showed the posterior convergence rates of the precision matrix as well as the minimax lower bounds.
In both works, posterior convergence rates were obtained for a given (fixed) bandwidth, and the posterior mode was suggested as a bandwidth estimator in practice. However, no theoretical guarantee is provided for such estimators. Further, no Bayesian bandwidth test exists for one- or two-sample problems.
This gap in the literature motivates us to investigate theoretical properties related to the general problem of bandwidth test and selection, and propose estimators or tests with theoretical guarantees. In this paper, we use the modified Cholesky decomposition of the precision matrix and assume banded Cholesky factors. The induced precision matrix also has banded structure.
The key difference from Lee and Lee (2017) lies in the choice of prior distributions, which will be introduced in Section 2.3. In addition, we focus on bandwidth selection and tests, while Lee and Lee (2017) mainly studied the convergence rates of the precision matrix for a given or fixed bandwidth.
There are two main contributions of this paper. First, we suggest a Bayesian procedure for banded precision matrices and prove the bandwidth selection consistency (Theorem 3.1) and the consistency of the Bayes factor (Theorem 3.2). To the best of our knowledge, our work is the first to establish the bandwidth selection consistency for precision matrices in a Bayesian framework, meaning that the marginal posterior probability of the true bandwidth tends to one as n → ∞. Cao et al. (2017) proved strong model selection consistency for sparse directed acyclic graph models, but their method is not applicable to the bandwidth selection problem since it is not adaptive to the unknown sparsity. Second, we prove the consistency of the Bayes factor for the two-sample bandwidth testing problem (Theorem 3.3) and derive the convergence rates of the Bayes factor under both the null and alternative hypotheses. Our method can consistently detect the equality of bandwidths between two different precision matrices. To the best of our knowledge, this is also the first consistent two-sample bandwidth test result in either the frequentist or Bayesian literature; the existing frequentist literature focused only on one-sample bandwidth testing (An et al., 2014; Cheng et al., 2017).
The rest of the paper is organized as follows. Section 2 introduces the notation, model, priors and assumptions used. Section 3 describes the main results of this paper: bandwidth selection consistency and convergence rates of one- and two-sample bandwidth tests. A simulation study and real data analysis are presented in Section 4 to show the practical performance of the proposed method. In Section 5, concluding remarks and topics for future work are given. The appendix includes a result on the nearly optimal estimation of the Cholesky factor, and proofs of the main results.

Notations
For any real numbers a and b, we denote by a ∧ b and a ∨ b the minimum and maximum of a and b, respectively. For any sequences a_n and b_n, we write a_n = o(b_n) if a_n/b_n → 0 as n → ∞. We write a_n ≲ b_n, or a_n = O(b_n), if there exists a universal constant C > 0 such that a_n ≤ C b_n for all n. We define the vector ℓ_2- and ℓ_∞-norms as ‖a‖_2 = (Σ_{j=1}^p a_j^2)^{1/2} and ‖a‖_∞ = max_{1≤j≤p} |a_j| for any a = (a_1, . . . , a_p)^T ∈ R^p. For a matrix A, the matrix ℓ_∞-norm is defined as ‖A‖_∞ = sup_{‖x‖_∞ = 1} ‖Ax‖_∞. We denote by λ_min(A) and λ_max(A) the minimum and maximum eigenvalues of A, respectively.

Gaussian DAG Models
When the random variables have a natural ordering, one common approach for the estimation of high-dimensional covariance (or precision) matrices is to adopt banded structures. One popular model to incorporate banded structure is a Gaussian directed acyclic graph (DAG) model in which the bandwidth can be encoded by the Cholesky factor via the modified Cholesky decomposition (MCD) below.
A directed graph D = (V, E) consists of vertices V = {1, . . . , p} and directed edges E. For any i, j ∈ V, we write (i, j) ∈ E for a directed edge i → j and call i a parent of j. A DAG is a directed graph with no directed cycle. In this paper, we assume a known parent ordering, in which i < j holds for any parent i of j in a DAG D; this assumption is commonly used in the literature. For any i = 1, . . . , p, we denote by pa_i(D) the set of all parents of i.
We consider a Gaussian DAG model over some graph D,

X_1, . . . , X_n | Ω_n ~ N_p(0, Ω_n^{-1}),    (1)

where Ω_n = Σ_n^{-1} is a p × p precision matrix and X_i = (X_{i1}, . . . , X_{ip})^T ∈ R^p for all i = 1, . . . , n. For any positive definite matrix Ω_n, by the MCD, there exist a unique lower triangular matrix A_n = (a_{jl}) with a_{jj} = 0 and a unique diagonal matrix D_n = diag(d_j) with d_j > 0 for all j = 1, . . . , p such that

Ω_n = (I_p − A_n)^T D_n^{-1} (I_p − A_n).    (2)

We call A_n the Cholesky factor. It can easily be shown that a_{jl} ≠ 0 if and only if l ∈ pa_j(D), so the Cholesky factor A_n uniquely determines a DAG D. We define k as the bandwidth of a matrix if the off-diagonal elements of the matrix farther than k from the diagonal are all zero. If the bandwidth of the Cholesky factor is k, model (1) can be represented as the sequence of autoregressive models

X_{ij} = Σ_{l=(j−k)_1}^{j−1} a_{jl} X_{il} + ε_{ij},  ε_{ij} ~ N(0, d_j),    (3)

for all i = 1, . . . , n and j = 1, . . . , p, where a_j^{(k)} = (a_{jl})_{(j−k)_1 ≤ l ≤ j−1} ∈ R^{k_j}, (j − k)_1 = 1 ∨ (j − k) and k_j = k ∧ (j − 1). The above representation enables us to adopt priors and techniques from the linear regression literature.
We are interested in the consistent estimation and hypothesis testing of the bandwidth k of the precision matrix. From decomposition (2), the bandwidth of A_n is k if and only if the bandwidth of Ω_n is k. Thus, we can infer the bandwidth of the precision matrix by inferring that of the Cholesky factor.
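The MCD and the bandwidth correspondence above can be computed directly: regressing each variable on its predecessors yields the rows of A_n and the residual variances d_j. The following is a minimal illustrative sketch (function names are ours, not from the paper):

```python
import numpy as np

def modified_cholesky(Omega):
    """Return (A, d) with Omega = (I - A)^T diag(d)^{-1} (I - A),
    where A is strictly lower triangular (the Cholesky factor) and d > 0."""
    Sigma = np.linalg.inv(Omega)
    p = Sigma.shape[0]
    A = np.zeros((p, p))
    d = np.empty(p)
    d[0] = Sigma[0, 0]
    for j in range(1, p):
        # population regression of X_j on its predecessors X_1, ..., X_{j-1}
        coef = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])
        A[j, :j] = coef
        d[j] = Sigma[j, j] - Sigma[j, :j] @ coef  # residual variance d_j
    return A, d

def bandwidth(A, tol=1e-10):
    """Smallest k such that all entries farther than k below the diagonal vanish."""
    p = A.shape[0]
    k = 0
    for j in range(p):
        for l in range(j):
            if abs(A[j, l]) > tol:
                k = max(k, j - l)
    return k
```

By the uniqueness of the MCD, applying `modified_cholesky` to a precision matrix built from a banded Cholesky factor recovers that factor exactly, so the bandwidth of Ω_n can be read off from A_n as the text describes.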

Prior Distribution
Let X̃_j ∈ R^n and X_{j(k)} ∈ R^{n×k_j} be the sub-matrices consisting of the jth and the (j − k)_1, . . . , (j − 1)th columns of X_n = (X_1^T, . . . , X_n^T)^T ∈ R^{n×p}, respectively. We suggest the following prior distribution:

a_j^{(k)} | d_j, k ~ N_{k_j}( â_j^{(k)}, (d_j/γ) (X_{j(k)}^T X_{j(k)})^{-1} ),    (4)
π(d_j) ∝ d_j^{τn/2−1},    (5)
k ~ π(k),  k = 0, 1, . . . , R_n,    (6)

for some positive constants γ, τ and positive integer sequence R_n, where â_j^{(k)} = (X_{j(k)}^T X_{j(k)})^{-1} X_{j(k)}^T X̃_j. The conditional prior distribution for a_j^{(k)} is a version of Zellner's g-prior (Zellner, 1986; Martin et al., 2017) in the linear regression literature. Note that model (3) is equivalent to the linear regression model X̃_j | a_j^{(k)}, d_j ~ N_n(X_{j(k)} a_j^{(k)}, d_j I_n). Due to conjugacy, this enables us to calculate the posterior distribution in a closed form up to a normalizing constant. The prior for d_j is carefully chosen to reduce the posterior mass towards large bandwidths k. We emphasize here that one can use the usual non-informative prior π(d_j) ∝ d_j^{-1}, but the necessary conditions for the main results in Section 3 would have to be changed. This issue will be discussed in more detail in the next paragraph.
We assume that the prior π(k) has support on {0, 1, . . . , R_n}. We will introduce condition (A4) for π(k) and the hyperparameters in Section 2.4, and show that π(k) ∝ 1 is enough to establish the main results in Section 3. Provided that τ < 1, priors (4)–(6) lead to the marginal posterior distribution

π(k | X_n) ∝ π(k) ∏_{j=2}^p (1 + 1/γ)^{−k_j/2} ( d̂_j^{(k)} )^{−(1−τ)n/2},    (7)

where d̂_j^{(k)} = X̃_j^T (I_n − P̂_{jk}) X̃_j / n and P̂_{jk} = X_{j(k)} (X_{j(k)}^T X_{j(k)})^{-1} X_{j(k)}^T. The marginal posterior π(k | X_n) consists of two parts: the penalty on the model size, π(k) ∏_{j=2}^p (1 + 1/γ)^{−k_j/2}, and the estimated residual variances, ∏_{j=2}^p ( d̂_j^{(k)} )^{−(1−τ)n/2}. Thus, priors (4) and (5) naturally impose the penalty term ∏_{j=2}^p (1 + 1/γ)^{−k_j/2} on the marginal posterior π(k | X_n). The effect of the prior π(d_j) ∝ d_j^{τn/2−1} appears in the marginal posterior for k: compared with the non-informative prior π(d_j) ∝ d_j^{-1}, which would yield the term ( d̂_j^{(k)} )^{−n/2}, it reduces the posterior mass towards large bandwidths k, since d̂_j^{(k)} decreases as k grows. We conjecture that, at least for our prior choice of π(a_j^{(k)} | d_j, k) with a constant γ > 0, this power adjustment of d̂_j^{(k)} is essential for proving the selection consistency for k. Suppose we use the prior π(d_j) ∝ d_j^{-1}. Similarly to the proof of Theorem 3.1, to obtain the selection consistency we would use the inequality

π(k | X_n) ≤ π(k | X_n)/π(k_0 | X_n) = [π(k)/π(k_0)] ∏_{j=2}^p (1 + 1/γ)^{−(k_j − k_{0j})/2} ( d̂_j^{(k)}/d̂_j^{(k_0)} )^{−n/2}    (8)

and show that the expectation of the right-hand side converges to zero for any k ≠ k_0 as n → ∞, where k_0 is the true bandwidth. Note that unless π(k_0 | X_n) shrinks to zero, the inequality causes only a constant multiplication. The most important task is dealing with the last term in (8). Existing results (e.g., Yang et al. (2016) and Lemma 4 in Shin et al. (2015)) suggest an upper bound p^{α(k_j − k_{0j})} with high probability for any 2 ≤ j ≤ p, k > k_0 and some constant α > 0. In this case, the hyperparameter γ should be of order p^{−α′} for some constant α′ > 2α to make the right-hand side of (8) converge to zero.
Then, with the choice γ ≍ p^{−α′}, condition (A2), which will be introduced in Section 2.4, would have to be modified by replacing 1/n with (log p)/n to achieve the selection consistency. In summary, the main results in this paper still hold for the prior π(d_j) ∝ d_j^{-1}, but stronger conditions are required for technical reasons. We state the results using prior (5) to emphasize that the bandwidth selection problem essentially requires weaker conditions than the usual model selection problem.
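The marginal posterior (7) can be evaluated row by row from ordinary least squares residuals, and the bandwidth can then be selected by maximizing it. The following minimal sketch assumes the flat prior π(k) ∝ 1; function names and defaults are ours, not the authors' implementation (the default τ mimics the choice τ = (1 + 2ε₁)γ/(1 + γ) with ε₁ = 0.15 used in the simulations):

```python
import numpy as np

def log_marginal_posterior_k(X, k, gamma=0.2, tau=None):
    """Log of the marginal posterior pi(k | X_n) in (7), up to an additive
    constant, under the flat prior pi(k) proportional to 1."""
    n, p = X.shape
    if tau is None:
        eps1 = 0.15
        tau = gamma * (1 + 2 * eps1) / (1 + gamma)
    logpost = 0.0
    for j in range(1, p):          # rows j = 2, ..., p in the paper's 1-indexing
        kj = min(k, j)             # k_j = k ∧ (j − 1)
        Xj = X[:, j]
        if kj > 0:
            Z = X[:, j - kj:j]     # columns (j − k)_1, ..., j − 1
            resid = Xj - Z @ np.linalg.lstsq(Z, Xj, rcond=None)[0]
        else:
            resid = Xj
        d_hat = resid @ resid / n  # residual variance \hat d_j^{(k)}
        logpost += (-0.5 * kj * np.log(1 + 1 / gamma)
                    - 0.5 * (1 - tau) * n * np.log(d_hat))
    return logpost

def select_bandwidth(X, R_n, gamma=0.2):
    """Posterior-mode bandwidth over k = 0, 1, ..., R_n."""
    return max(range(R_n + 1), key=lambda k: log_marginal_posterior_k(X, k, gamma))
```

The two summands inside the loop are exactly the model-size penalty and the residual-variance term described after (7).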
Remark If we adopt the fractional likelihood approach (Martin et al., 2017), we can achieve the selection consistency (Theorem 3.1) with the prior π(d_j) ∝ d_j^{-1} instead of (5), under conditions similar to those of Theorem 3.1. However, with the fractional likelihood we cannot calculate the Bayes factor, which is essential for the Bayesian test results in Sections 3.2 and 3.3.
Condition (A2) is called the beta-min condition. If we assume that ζ_{0n} = O(1) and the other quantities in condition (A1) are of constant order, in our model it only requires the lower bound of the nonzero elements to be of order 1/√n. In the sparse regression literature, the lower bound of the nonzero coefficients is usually assumed to be of order √(log p / n) up to some constant (Castillo et al., 2015; Yang et al., 2016; Martin et al., 2017). Here, the √log p term can be interpreted as a price paid for the absence of information on the zero-pattern. Condition (A2) reveals that, under the banded assumption, this price no longer needs to be paid.
Condition (A3) ensures that the true bandwidth k_0 lies in the support of π(k). Here we give some examples of π(k) satisfying conditions (9) and (10). If the hyperparameter ξ in (11) satisfies C_{γ,τ}^{-1} < ξ < C_{γ,M_bm}^{-1}, then π(k) satisfies the conditions. Furthermore, if we choose ξ = 1, which leads to the flat prior (12), the conditions are met provided τ > (1 + ε₁)γ/(1 + γ).

Remark In the sparse linear regression literature, a common choice for the prior on the unknown sparsity k is π(k) ∝ p^{−ck} for some constant c > 0; see Castillo et al. (2015), Yang et al. (2016) and Martin et al. (2017). If we adopt this type of prior for the bandwidth selection problem, a naive approach is to use π(k) ∝ p^{−ck} for each row of the Cholesky factor. In this case, to obtain the strong model selection consistency, M_bm in condition (A2) has to be of the form M_bm = M′_bm log p for some constant M′_bm > 0. Thus, it unnecessarily requires a stronger beta-min condition, which can be avoided by using a π(k) like (11) or (12).

Bandwidth Selection Consistency
When there is a natural ordering in the data set, estimating the bandwidth of the precision matrix is important for detecting the dependence structure. It is a crucial first step for the subsequent analysis. In this subsection, we show the bandwidth selection consistency of the proposed prior. Theorem 3.1 states that the posterior distribution puts a mass tending to one at the true bandwidth k 0 . Thus, we can detect the true bandwidth using the marginal posterior distribution for the bandwidth k. We call this property the bandwidth selection consistency.
Theorem 3.1 Consider model (1) and priors (4)–(6). If conditions (A1)–(A4) are satisfied, then π(k = k_0 | X_n) → 1 in probability under the true model as n → ∞.

Informed readers might be aware of the recent work of Cao et al. (2017) on sparse DAG models. It should be noted that their method is not applicable to the bandwidth selection problem. The key issue is that their method is not adaptive to the unknown sparsity, which corresponds to the true bandwidth k_0 in this paper: to obtain the selection consistency, the choice of hyperparameter should depend on k_0, which is unknown and of interest. Furthermore, they required stronger conditions on the dimensionality p, the true sparsity k_0, the eigenvalues of the true precision matrix and the beta-min bound for the strong model selection consistency.
Remark The bandwidth selection result does not necessarily imply the consistency of the Bayes factor. Note that prior (13) combined with (4) and π(k) ∝ 1 leads to the same marginal posterior for k as priors (4), (5) and π(k) ∝ 1. Thus, those priors also achieve the bandwidth selection consistency in Theorem 3.1. However, (13) might be inappropriate when the Bayes factor is of interest, because the ratio of the normalizing terms induced by prior (13) (C_0 and C_1 in (14)) has a non-ignorable effect on the Bayes factor.

Consistency of One-Sample Bandwidth Test
In this subsection, we focus on constructing a Bayesian bandwidth test for the testing problem H_0: k ≤ k* versus H_1: k > k* for some given k*. A Bayesian hypothesis test is based on the Bayes factor

B_10(X_n) = p(X_n | H_1) / p(X_n | H_0),

defined as the ratio of the marginal likelihoods under the two hypotheses. We are interested in the consistency of the Bayes factor, which is one of its most important asymptotic properties (Dass and Lee, 2004). A Bayes factor is said to be consistent if B_10(X_n) converges to zero in probability under the true null hypothesis H_0 and B_10(X_n)^{-1} converges to zero in probability under the true alternative hypothesis H_1.
Although the Bayes factor plays a crucial role in Bayesian variable selection, its asymptotic behavior in high-dimensional settings is not well understood (Moreno et al., 2010). The few works that studied the consistency of the Bayes factor in high-dimensional settings (Moreno et al., 2010; Wang and Sun, 2014) focused only on the pairwise consistency of the Bayes factor: they considered the testing problem H_0: k = k^{(0)} versus H_1: k = k^{(1)} for any k^{(0)} < k^{(1)}, where k is the number of nonzero elements of the linear regression coefficient. Note that a Bayes factor is said to be pairwise consistent if the Bayes factor B_10(X_n) is consistent for any pair of simple hypotheses H_0 and H_1.
We focus on the composite hypotheses H_0: k ≤ k* and H_1: k > k* rather than simple hypotheses. To conduct a Bayesian hypothesis test, prior distributions under both hypotheses should be specified. Denote the prior under hypothesis H_i as π_i(A_n, D_n, k) for i = 0, 1.
Since the difference between the two hypotheses comes only from the bandwidth, we use the same conditional priors for A_n and D_n given k, i.e. π_i(A_n, D_n, k) = π_i(k) π(A_n, D_n | k) for i = 0, 1, where π(A_n, D_n | k) is given by priors (4) and (5). We suggest using the priors

π_0(k) = π(k) I(0 ≤ k ≤ k*) / C_0  and  π_1(k) = π(k) I(k* < k ≤ R_n) / C_1,    (14)

where C_0 = Σ_{k=0}^{k*} π(k) and C_1 = Σ_{k=k*+1}^{R_n} π(k). Then, the Bayes factor has the analytic form

B_10(X_n) = (C_0 / C_1) · π(k > k* | X_n) / π(k ≤ k* | X_n),

where the marginal posterior π(k | X_n) is given in (7) up to a normalizing constant. Note that the Bayes factor is well-defined because both hypotheses share the same improper prior on D_n. We will show that the Bayes factor is consistent for the composite hypotheses H_0: k ≤ k* and H_1: k > k* for any k*, which is generally stronger than the pairwise consistency of the Bayes factor.
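The one-sample Bayes factor for H_0: k ≤ k* versus H_1: k > k* can be evaluated numerically from the marginal posterior of k under a flat π(k), in which case C_0/C_1 = (k* + 1)/(R_n − k*). A minimal sketch follows; `log_post_k` re-implements the row-wise quantities of (7) and all names are ours, not the authors' code:

```python
import numpy as np

def log_post_k(X, k, gamma=0.2, tau=0.22):
    """log pi(k | X_n) from (7) up to a constant, flat prior pi(k)."""
    n, p = X.shape
    out = 0.0
    for j in range(1, p):
        kj = min(k, j)
        Xj = X[:, j]
        if kj > 0:
            Z = X[:, j - kj:j]
            resid = Xj - Z @ np.linalg.lstsq(Z, Xj, rcond=None)[0]
        else:
            resid = Xj
        d_hat = resid @ resid / n
        out += -0.5 * kj * np.log(1 + 1 / gamma) - 0.5 * (1 - tau) * n * np.log(d_hat)
    return out

def _logsumexp(a):
    m = max(a)
    return m + np.log(sum(np.exp(x - m) for x in a))

def log_bayes_factor_onesample(X, k_star, R_n, gamma=0.2):
    """log B_10 for H_0: k <= k* versus H_1: k > k*, via
    B_10 = (C_0/C_1) * pi(k > k* | X_n) / pi(k <= k* | X_n)."""
    logs = [log_post_k(X, k, gamma) for k in range(R_n + 1)]
    log_null = _logsumexp(logs[:k_star + 1])   # posterior mass on H_0 (log scale)
    log_alt = _logsumexp(logs[k_star + 1:])    # posterior mass on H_1 (log scale)
    log_C0_over_C1 = np.log((k_star + 1) / (R_n - k_star))
    return log_C0_over_C1 + log_alt - log_null
```

Working on the log scale via `_logsumexp` avoids the numerical underflow that the raw posterior ratio would suffer when n is large.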
Indeed, if π_1(k)/π_0(k′) = O(1) for any k and k′, then the consistency of the Bayes factor for the hypotheses H_0: k ≤ k* and H_1: k > k* for any k* implies the pairwise consistency of the Bayes factor for any pair of simple hypotheses H_0: k = k^{(0)} and H_1: k = k^{(1)}. For given positive constants M_bm, γ and τ ∈ (0, 0.4] and integers R_n, k_0 and k*, define the rate quantities appearing in Theorem 3.2, where C_{γ,τ} and C_{γ,M_bm} are defined in condition (A4). Theorem 3.2 shows the consistency and convergence rates of the Bayes factor under each hypothesis; it turns out that π(k) = 1/(R_n + 1) is sufficient for the consistency of the Bayes factor.
Theorem 3.2 Consider model (1) and the hypothesis testing problem H_0: k ≤ k* versus H_1: k > k*. Assume priors (4) and (5) for π(A_n, D_n | k) and the bandwidth priors in (14) with π(k) = 1/(R_n + 1). If conditions (A1)–(A4) are satisfied, then the Bayes factor is consistent: B_10(X_n) converges to zero in probability under H_0: k ≤ k*, and B_10(X_n)^{-1} converges to zero in probability under H_1: k > k*.

Remark Note that if we use prior (11) with ξ ≠ 1, the effect of the prior, C_0/C_1, can dominate the posterior ratio π(k > k* | X_n)/π(k ≤ k* | X_n) in the Bayes factor. Because prior knowledge on the bandwidth is usually insufficient, this is clearly undesirable; moreover, the direction of the effect can be the opposite of the prior knowledge.

Remark The existing frequentist tests allow k_0 ≤ n − 4, but assume that the partial correlation coefficient between X_{ij} and X_{i,j−k_0} given X_{i,j−k_0+1}, . . . , X_{i,j−1} is of order o(n^{-1}). This implies that max_j |a_{0,j,j−k_0}| converges to zero at some rate. Thus, the nonzero elements a_{0,j,j−k_0}, j = k_0 + 1, . . . , p, should converge to zero, which is somewhat unnatural.

Johnson and Rossell (2010, 2012) and Rossell and Rubio (2017) pointed out that the use of a local alternative prior leads to imbalanced convergence rates for the Bayes factor, and showed that this issue can be avoided by using non-local alternative priors. Interestingly, however, the convergence rates for the Bayes factor in Theorem 3.2 have similar orders under both hypotheses without using a non-local prior. Roughly speaking, the imbalance issue can be ameliorated by introducing the beta-min condition (condition (A2)). To simplify the situation, consider the linear regression model Y_i = Σ_{l=1}^{k} β_l x_{il} + ε_i with ε_i ~ N(0, σ²). Suppose priors (4) and (5) are imposed on β^{(k)} = (β_1, . . . , β_k)^T and σ² given k. Consider the hypotheses H_0: k = k_1 and H_1: k = k_2, where k_1 < k_2, and assume for simplicity that the eigenvalues of the design matrix restricted to the first k_2 columns are bounded and k_2 − k_1 → ∞ as n → ∞. Note that the prior for β^{(k_2)} is a local alternative prior because it assigns positive density to the null region β_{k_1+1} = · · · = β_{k_2} = 0. Johnson and Rossell (2010, 2012) and Rossell and Rubio (2017) assumed that β²_min > c_1 for some constant c_1 > 0, where β_min is the smallest nonzero coefficient in absolute value.
In that case, B_10(Y)^{-1} decreases exponentially in n(k_2 − k_1)c_1, which causes the imbalanced convergence rates. However, if we instead assume β²_min ≥ c_2 n^{-1}, similarly to condition (A2), B_10(Y)^{-1} decreases at rate O_p(e^{−c′_2 (k_2 − k_1)}) for some constant c′_2 > 0. Thus, the convergence rates of the Bayes factor have similar orders under both hypotheses.
The above argument does not mean that non-local priors are not useful for our problem. We note that the balanced convergence rates obtained through the beta-min condition are different from those obtained through non-local priors: the former slows the rate of B_10(Y)^{-1} under H_1, while the latter speeds up the rate of B_10(Y) under H_0. Thus, the use of non-local priors might improve the rate of convergence of B_10(Y) under H_0 in Theorem 3.2. However, it would increase the computational burden, and it is unclear which rate can be achieved using a non-local prior under condition (A2), so we leave this as future work.

Consistency of Two-Sample Bandwidth Test
Suppose we have two data sets from the models

X_1, . . . , X_{n_1} | Ω_{1n_1} ~ N_p(0, Ω_{1n_1}^{-1})  and  Y_1, . . . , Y_{n_2} | Ω_{2n_2} ~ N_p(0, Ω_{2n_2}^{-1}),

where Ω_{1n_1} = (I_p − A_{1n_1})^T D_{1n_1}^{-1} (I_p − A_{1n_1}) and Ω_{2n_2} = (I_p − A_{2n_2})^T D_{2n_2}^{-1} (I_p − A_{2n_2}) are the MCDs. Denote the bandwidth of Ω_{in_i} as k_i for i = 1, 2. In this subsection, our interest is in testing the equality of the two bandwidths k_1 and k_2, the two-sample bandwidth test. We consider the hypothesis testing problem H_0: k_1 = k_2 versus H_1: k_1 ≠ k_2 and investigate the asymptotic behavior of the Bayes factor B_10(X_{n_1}, Y_{n_2}), where X_{n_1} = (X_1^T, . . . , X_{n_1}^T)^T ∈ R^{n_1×p} and Y_{n_2} = (Y_1^T, . . . , Y_{n_2}^T)^T ∈ R^{n_2×p}. Denote the priors under H_0 and H_1 as π_0 and π_1, respectively. For any given k_1 and k_2, we suggest conditional priors π(A_{1n_1}, D_{1n_1} | k_1) and π(A_{2n_2}, D_{2n_2} | k_2) of the form (4)–(5), where a_{i,j}^{(k_i)} ∈ R^{k_{ij}} denotes the nonzero elements in the jth row of A_{in_i} and D_{in_i} = diag(d_{i,j}) for i = 1, 2. Similarly to the previous notation, Y_{j(k_2)} ∈ R^{n_2×k_{2j}} denotes the sub-matrix consisting of the (j − k_2)_1, . . . , (j − 1)th columns of Y_{n_2}. The priors on the bandwidths are chosen as π_0(k) = 1/(R_n + 1) for the common bandwidth k = k_1 = k_2 = 0, 1, . . . , R_n under H_0, and analogously uniform over the pairs (k_1, k_2) with k_1 ≠ k_2 under H_1. This choice of priors leads to an analytic form of the Bayes factor,

B_10(X_{n_1}, Y_{n_2}) = [ Σ_{k_1 ≠ k_2} π(k_1 | X_{n_1}) π(k_2 | Y_{n_2}) ] / [ Σ_{k_1 = k_2} π(k_1 | X_{n_1}) π(k_2 | Y_{n_2}) ],

where the marginal posterior distributions π(k_1 | X_{n_1}) and π(k_2 | Y_{n_2}) are known up to normalizing constants, similarly to (7). We denote by Ω_{0,in_i} the true precision matrix with bandwidth k_{0i} for i = 1, 2 and assume that p tends to infinity as n = n_1 ∧ n_2 → ∞. Theorem 3.3 gives a sufficient condition for the consistency of the Bayes factor B_10(X_{n_1}, Y_{n_2}) by calculating the convergence rates.

Theorem 3.3 Suppose that conditions (A1)–(A4) for Ω_{0,1n_1}, Ω_{0,2n_2} and the priors are satisfied, τ > (1 + ε₁)γ/(1 + γ) and exp(M_bm) > 2{(1 + γ)/γ}^{1/2}. Then the Bayes factor B_10(X_{n_1}, Y_{n_2}) is consistent under P_0: it converges to zero in probability under H_0: k_1 = k_2, and B_10(X_{n_1}, Y_{n_2})^{-1} converges to zero in probability under H_1: k_1 ≠ k_2, at rates depending on k_min = k_{01} ∧ k_{02}.
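Since the two-sample Bayes factor is the ratio of the disagreement mass to the agreement mass of the two marginal posteriors, it can be sketched directly from the one-sample posterior computation. All names below are ours, not the authors' code; `log_post_k` re-implements the row-wise quantities of (7):

```python
import numpy as np

def log_post_k(X, k, gamma=0.2, tau=0.22):
    """log pi(k | X_n) from (7) up to a constant, flat prior pi(k)."""
    n, p = X.shape
    out = 0.0
    for j in range(1, p):
        kj = min(k, j)
        Xj = X[:, j]
        if kj > 0:
            Z = X[:, j - kj:j]
            resid = Xj - Z @ np.linalg.lstsq(Z, Xj, rcond=None)[0]
        else:
            resid = Xj
        d_hat = resid @ resid / n
        out += -0.5 * kj * np.log(1 + 1 / gamma) - 0.5 * (1 - tau) * n * np.log(d_hat)
    return out

def two_sample_log_bf(X, Y, R_n, gamma=0.2):
    """log B_10 for H_0: k_1 = k_2 versus H_1: k_1 != k_2, computed as
    log of sum_{k1 != k2} pi(k1|X) pi(k2|Y) over sum_{k1 = k2} pi(k1|X) pi(k2|Y)."""
    def posterior(Z):
        logs = np.array([log_post_k(Z, k, gamma) for k in range(R_n + 1)])
        w = np.exp(logs - logs.max())
        return w / w.sum()
    px, py = posterior(X), posterior(Y)
    agree = float((px * py).sum())            # posterior mass on {k_1 = k_2}
    return np.log1p(-agree) - np.log(agree)   # log[(1 - agree)/agree]
```

Because the two sums in the Bayes factor add to one, only the agreement mass needs to be accumulated explicitly.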
As mentioned earlier, to the best of our knowledge, this is the first consistent two-sample bandwidth test result in high-dimensional settings. The frequentist testing procedures in An et al. (2014) and Cheng et al. (2017) focused only on the one-sample bandwidth test, and it is unclear whether those methods can be extended to the two-sample testing problem.
Note that the hypothesis testing problem H_0: k_1 = k_2 versus H_1: k_1 ≠ k_2 is different from the hypothesis testing problem H_0: Ω_{1n_1} = Ω_{2n_2} versus H_1: Ω_{1n_1} ≠ Ω_{2n_2} in Cai et al. (2013). The latter is called the two-sample precision (or covariance) test. The two-sample bandwidth test is weaker than the two-sample precision test, in the sense that equality of the precision matrices implies equality of the bandwidths. Thus, if the two-sample bandwidth test supports the null hypothesis, one can further conduct the two-sample precision test.

Numerical Results
We have proved the bandwidth selection consistency and convergence rates of Bayes factors based on priors (4)–(6). In this section, we conduct simulation studies to illustrate the practical performance of the proposed method. Throughout the section, we use the prior π(k) = 1/(R_n + 1).

Choice of Hyperparameter
In this subsection, we conduct a simulation study and discuss the appropriate choice of hyperparameters. Priors (4)–(6) have three hyperparameters: γ, τ and R_n. To achieve the selection consistency, a sufficient condition for the hyperparameters is (1 + ε₁)γ(1 + γ)^{-1} < τ ≤ 0.4 and k_0 ≤ R_n ≤ nτε₁(1 + ε₁)^{-1} for all large n. Throughout the simulation studies in this section, ε₁ is set at 0.15. For a given γ > 0, we fix τ = (1 + 2ε₁)γ(1 + γ)^{-1} and R_n = k_0 + 10, which satisfies the sufficient condition. This allows us to focus on the sensitivity of the choice of γ. We say the hyperparameter γ is oracle if k̂_γ = k_0, where k̂_γ is the bandwidth selected by applying the threshold 10 to the marginal posterior π_γ(k | X_n) computed with hyperparameter γ. The value 10 corresponds to the 'very strong evidence' criterion suggested by Kass and Raftery (1995). We are interested in the oracle value of γ, which recovers the true bandwidth in a given finite sample scenario, so we construct the oracle ranges of γ under various settings.
Another goal is to verify what factors influence the performance of the proposed bandwidth selection procedure.
For each j = k_0 + 1, . . . , p, the nonzero elements in the jth row of the true Cholesky factor A_{0n} were sampled from Unif(A_{0,min}, A_{0,max}) and ordered to satisfy a_{0,jl} ≤ a_{0,jl′} for any l < l′.
The diagonal elements of D_{0n} were generated from Unif(D_{0,min}, D_{0,max}). We considered the following settings (i)–(iv) for each value of k_0. For each setting, we generated 100 data sets and calculated the averaged oracle ranges of γ.
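The generating mechanism above can be sketched as follows (function names, seeds and the handling of rows j ≤ k_0, which are filled with k_j = j − 1 entries, are our assumptions):

```python
import numpy as np

def generate_truth(p, k0, a_range=(0.1, 0.2), d_range=(2.0, 5.0), seed=None):
    """True banded Cholesky factor A0 and diagonal d0: nonzero entries of each
    row drawn from Unif(a_range) and sorted so that entries closer to the
    diagonal are larger; d_j drawn from Unif(d_range)."""
    rng = np.random.default_rng(seed)
    A0 = np.zeros((p, p))
    for j in range(1, p):
        kj = min(k0, j)
        A0[j, j - kj:j] = np.sort(rng.uniform(*a_range, size=kj))
    d0 = rng.uniform(*d_range, size=p)
    return A0, d0

def sample_data(n, A0, d0, seed=None):
    """Draw n observations from N_p(0, Omega^{-1}) via the autoregressive
    representation (3), column by column."""
    rng = np.random.default_rng(seed)
    p = len(d0)
    X = np.zeros((n, p))
    for j in range(p):
        X[:, j] = X[:, :j] @ A0[j, :j] + rng.normal(scale=np.sqrt(d0[j]), size=n)
    return X
```

Sampling through the autoregressive form avoids inverting Ω_n and makes the role of d_{0j} as a residual variance explicit.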
Tables 1 and 2 present the simulation results for settings (i)–(iv) when k_0 = 5 and k_0 = 10, respectively. The oracle range seems to be affected by the magnitudes of the nonzero elements of A_{0n} and D_{0n} and by the true bandwidth k_0. Among these factors, the magnitude of the nonzero a_{0,jl} is the most crucial based on the simulation results. The smaller the signals are, the larger the γ needed for accurate estimation of k, which is reasonable because the condition exp(M_bm) > 2{(1 + γ)/γ}^{1/2} is required for the selection consistency. The magnitude of d_{0j} slightly affects the oracle range of γ: the oracle range becomes narrower as the d_{0j} become larger. Note that d_{0j} is the residual variance when we interpret the model as a sequence of autoregressive models (3), so intuitively larger values of d_{0j} make the estimation problem harder. Finally, we found that the larger the true bandwidth k_0 is, the larger the γ required for accurate estimation of k. This phenomenon can be explained by the form of the marginal posterior π_γ(k | X_n) in (7): the penalty term ∏_{j=2}^p (1 + 1/γ)^{−k_j/2} in π_γ(k | X_n) gets stronger (smaller) as γ gets smaller. When k_0 is large, γ should not be too small, so that the posterior can put sufficient mass on large values of k in the finite sample situation.
The effect of the above factors decreases as the sample size n grows, which supports the theoretical results in Section 3. In our simulation studies, γ = 0.2 consistently detected the true bandwidth k_0 in all settings. Thus, we used γ = 0.2 for the subsequent simulation study in the next subsection.

Comparison with other Bandwidth Tests
In this subsection, we compare the performance of our method with those of other bandwidth selection procedures. We used the hyperparameters γ = 0.2, τ = (1 + 2ε₁)γ/(1 + γ) and R_n = k_0 + 10. We chose the bandwidth tests of An et al. (2014) as frequentist competitors and the bandwidth selection procedures of Banerjee and Ghosal (2014) and Lee and Lee (2017) as Bayesian competitors. An et al. (2014) proposed two bandwidth selection procedures, Algorithms 1 and 2; BA1 and BA2 in Table 3 represent Algorithms 1 and 2, respectively. Note that the above Bayesian procedures do not guarantee the bandwidth selection consistency. For Banerjee and Ghosal (2014) and Lee and Lee (2017), we used the prior π(k) ∝ exp(−k⁴) as they suggested.
True precision matrices were generated by the procedure described in Section 4.1, with [A_{0,min}, A_{0,max}] = [0.1, 0.2] and [D_{0,min}, D_{0,max}] = [2, 5]. For a comprehensive comparison, we generated 100 data sets under various settings; the results are summarized in Table 3.

Table 3: The summary statistics, p_0 and k_0, for each setting are presented. BBS: the proposed method in this paper. LL: the bandwidth selection procedure of Lee and Lee (2017). BG: the bandwidth selection procedure of Banerjee and Ghosal (2014). BA1 and BA2: Algorithms 1 and 2 in An et al. (2014), respectively, with significance level α set at 0.005.
We denote the proposed method in this paper by BBS, the Bayesian Bandwidth Selector. As expected, our method outperforms the other Bayesian procedures of Lee and Lee (2017) and Banerjee and Ghosal (2014). Although Lee and Lee (2017) performs slightly better than Banerjee and Ghosal (2014), both consistently underestimate the true bandwidth k_0. The bandwidth selection procedures of An et al. (2014) are comparable to our method in all settings. Note, however, that our method has the advantage that it can be used directly for estimation problems beyond bandwidth selection.

Telephone Call Center Data
We illustrate the performance of the proposed method using the telephone call center data previously analyzed by Huang et al. (2006), Bickel and Levina (2008) and An et al. (2014).
The phone calls were recorded from 7:00 am until midnight at a call center of a major U.S. financial organization. The data were collected for 239 days in 2002, excluding holidays, weekends and days when the recording system did not work properly. The number of calls was counted for every 10 minutes, yielding a total of 102 intervals on each day. We denote the number of calls in the jth time interval of the ith day as N_{ij} for each i = 1, . . . , 239 and j = 1, . . . , 102. As in Huang et al. (2006), Bickel and Levina (2008) and An et al. (2014), the transformation X_{ij} = (N_{ij} + 1/4)^{1/2} was applied to make the data close to a random sample from a normal distribution. The transformed data were centered. For more details about the data set, see Huang et al. (2006).
We are interested in predicting the number of phone calls during the 52nd to 102nd time intervals using the earlier counts on each day. The best linear predictor of X_{ij} from X_i^{(j)} = (X_{i1}, . . . , X_{i,j−1})^T,

X̂_{ij} = μ_j + Σ_{(j, 1:(j−1))} Σ_{(1:(j−1), 1:(j−1))}^{-1} (X_i^{(j)} − μ^{(j)}),    (18)

was used to predict X_{ij} for each j = 52, . . . , 102, where μ_j = E(X_{1j}), μ^{(j)} = (μ_1, . . . , μ_{j−1})^T and Σ_{(S_1, S_2)} is the sub-matrix of Σ consisting of the S_1th rows and the S_2th columns for given index sets S_1 and S_2. We used the first 205 days (i = 1, . . . , 205) as a training set and the last 34 days (i = 206, . . . , 239) as a test set. To calculate the best linear predictor (18), the unknown parameters need to be estimated. Because it is reasonable to assume a natural (time) ordering, we plugged the estimator Σ̂_k based on Â_{nk} and D̂_{nk} from the training set into (18). We applied the proposed method, An et al. (2014) and Bickel and Levina (2008) to estimate the bandwidth k using the training set, and compared the prediction errors Σ_{i=206}^{239} |X̂_{ij} − X_{ij}| / 34 for each j = 52, . . . , 102. For a fair comparison, we used the same estimator Σ̂_k and only chose different bandwidths depending on the selection procedure.
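The plug-in predictor (18) is a sequence of conditional-mean computations; a minimal sketch follows (function and argument names are ours, not from the paper; indices are 0-based, so the 52nd interval corresponds to column index 51):

```python
import numpy as np

def best_linear_predictors(Sigma_hat, mu_hat, X_test, j_start):
    """Plug-in version of (18): predict X_ij from (X_i1, ..., X_{i,j-1})
    for all columns j >= j_start, using an estimated mean and covariance."""
    n_test, p = X_test.shape
    preds = np.full((n_test, p), np.nan)
    for j in range(j_start, p):
        # regression coefficients Sigma_{(1:(j-1),1:(j-1))}^{-1} Sigma_{(1:(j-1),j)}
        coef = np.linalg.solve(Sigma_hat[:j, :j], Sigma_hat[:j, j])
        preds[:, j] = mu_hat[j] + (X_test[:, :j] - mu_hat[:j]) @ coef
    return preds

def mean_abs_error(preds, X_test, j_start):
    """Average absolute prediction error per time interval, as in Section 4.3."""
    return np.mean(np.abs(preds[:, j_start:] - X_test[:, j_start:]), axis=0)
```

In the call-center analysis, `Sigma_hat` would be the banded plug-in estimator Σ̂_k from the training days and `mu_hat` the training-sample means.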
The bandwidth selection procedure described in Section 4.1 with γ = 0.2 gives the estimated bandwidth k̂ = 6. On the other hand, Bickel and Levina (2008) selected the bandwidth 19 based on the resampling scheme proposed in their paper. Algorithms 1 and 2 with α = 0.01 were also applied. Figure 1 represents the averages of prediction errors for various bandwidth values k. The minimum error is attained at k = 4. None of the above methods achieves the error-minimizing bandwidth k = 4, but the bandwidth obtained from our method is closest to it.
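The comparison of average prediction errors across bandwidths can be mimicked as follows. This is a toy sketch: it uses a simple k-banded sample covariance rather than the paper's estimator Σ̂_k, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a k-banded sample covariance on a training set, predict each later
# column on a test set, and record the mean absolute prediction error as a
# function of the bandwidth k.
n_train, n_test, p = 120, 30, 40
A = rng.standard_normal((n_train + n_test, p))
X = A + 0.5 * np.roll(A, 1, axis=1)    # short-range column dependence
X = X - X[:n_train].mean(axis=0)
train, test = X[:n_train], X[n_train:]

def banded(S, k):
    """Zero out entries more than k off the diagonal (simple banding)."""
    i, j = np.indices(S.shape)
    return np.where(np.abs(i - j) <= k, S, 0.0)

S = np.cov(train, rowvar=False)
errors = {}
for k in range(1, 11):
    Sk = banded(S, k) + 1e-6 * np.eye(p)   # small ridge keeps the solves stable
    err = []
    for j in range(p // 2, p):
        coef = np.linalg.solve(Sk[:j, :j], Sk[j, :j])
        err.append(np.mean(np.abs(test[:, :j] @ coef - test[:, j])))
    errors[k] = np.mean(err)

k_best = min(errors, key=errors.get)       # error-minimizing bandwidth
print(k_best, errors[k_best])
```

In the paper's application the analogous scan over k produces the curve in Figure 1, whose minimum is at k = 4.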

Discussion
In this paper, we introduced a prior distribution for high-dimensional banded precision matrices, with primary interest in Bayesian bandwidth selection and tests for one- and two-sample problems. The induced posterior distribution attains strong model selection consistency under mild conditions. We also proved the consistency of the Bayes factors for one- and two-sample bandwidth tests. The proposed bandwidth selection procedure outperforms the Bayesian procedures of Banerjee and Ghosal (2014) and Lee and Lee (2017), and is comparable to the frequentist test of An et al. (2014).
Throughout the paper, we assumed for simplicity that each row of the Cholesky factor has the same bandwidth. This can be extended to a more general setting that allows a different bandwidth for each row. If we denote the bandwidth of the jth row by k_0^{(j)} and set k_{0,max} = max_{1≤j≤p} k_0^{(j)}, then one can conduct the bandwidth test for k_{0,max}. The theoretical results in this paper also hold for the maximum bandwidth k_{0,max} selection problem, possibly under some additional conditions. For example, if k_0^{(j)} = k_{0,max} for all but finitely many j, then the proposed priors still achieve the theoretical properties in Section 3.
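The row-specific bandwidths k_0^{(j)} and their maximum can be read off a lower-triangular Cholesky factor directly. A minimal sketch, where `row_bandwidths` is a hypothetical helper:

```python
import numpy as np

def row_bandwidths(A, tol=1e-12):
    """Per-row bandwidths k^{(j)} of a lower-triangular Cholesky factor A:
    the distance from the diagonal to the farthest nonzero entry in row j."""
    p = A.shape[0]
    ks = []
    for j in range(p):
        nz = np.nonzero(np.abs(A[j, :j]) > tol)[0]
        ks.append(0 if nz.size == 0 else j - nz[0])
    return np.array(ks)

A = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 1.0, 0.0, 0.0],
    [0.0, 0.3, 1.0, 0.0],
    [0.0, 0.2, 0.4, 1.0],
])
ks = row_bandwidths(A)
print(ks, ks.max())   # → [0 1 1 2] 2, so k_max = 2 for this factor
```

A test for the common bandwidth k then targets k_max when the rows are allowed to differ.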
The bandwidth selection problem for bandable matrices is an interesting topic for future research. Note that it has very different characteristics from the problem for banded matrices. In the bandable case, bandwidth selection amounts to finding the optimal bandwidth minimizing the estimation error with respect to some loss function, and it is well known that the optimal bandwidth depends on the loss function. Thus, if the bandwidth selection of a bandable matrix is of primary interest, the prior distribution should be chosen carefully depending on the loss function.

Acknowledgement
We thank Baiguo An for providing us with the telephone call center data.

Appendix 1: Posterior convergence rate for the Cholesky factor
Estimating the Cholesky factor is important for detecting the dependence structure of the data. In this section, we show that the proposed prior can be used to estimate Cholesky factors. Theorem 3.1 implies that k̂ = argmax_{0≤k≤R_n} π(k | X_n) is a consistent estimator of k_0. Consider an empirical Bayes approach that uses priors (4) and (5) with k̂ plugged in instead of imposing a prior on k. This empirical Bayes method allows easy implementation when the estimation of the Cholesky factor or the precision matrix is of interest. To assess its performance, we adopt the P-loss convergence rate used by Castillo (2014) and Lee and Lee (2018). Corollary .1 presents the P-loss convergence rate of the empirical Bayes approach with respect to the Cholesky factor under the matrix ℓ∞-norm. We write π^{(k̂)} for the empirical prior stated above and E^{π^{(k̂)}}(· | X_n) for the posterior expectation induced by the prior π^{(k̂)}.
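The plug-in idea k̂ = argmax_k π(k | X_n) can be illustrated numerically. The score below is a BIC-style surrogate built from the modified-Cholesky regressions of each column on its k predecessors; it is not the paper's actual posterior π(k | X_n), only a sketch of the empirical Bayes plug-in under assumed simulated data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate data with a k_true-banded Cholesky factor: A x = e with e ~ N(0, I),
# so the precision matrix Omega = A^T A is banded with bandwidth k_true.
n, p, k_true = 300, 30, 3
A = np.eye(p)
for j in range(1, p):
    for l in range(max(0, j - k_true), j):
        A[j, l] = -0.3
E = rng.standard_normal((n, p))
X = np.linalg.solve(A, E.T).T       # rows of X are i.i.d. N(0, Omega^{-1})

def score(X, k):
    """BIC-style surrogate for log pi(k | X_n): per-row regression fit
    minus a (log n)/2 penalty per free Cholesky coefficient."""
    n, p = X.shape
    ll = 0.0
    for j in range(p):
        Z = X[:, max(0, j - k):j]
        resid = X[:, j]
        if Z.shape[1] > 0:
            beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
            resid = X[:, j] - Z @ beta
        sig2 = np.mean(resid ** 2)
        ll += -0.5 * n * np.log(sig2) - 0.5 * Z.shape[1] * np.log(n)
    return ll

k_hat = max(range(9), key=lambda k: score(X, k))
print(k_hat)
```

Once k̂ is fixed, the Cholesky factor is estimated by rerunning the per-row regressions with bandwidth k̂, mirroring the plug-in of k̂ into priors (4) and (5).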
Corollary .1 Consider model (1) and priors (4) and (5) with k̂ in place of k. If conditions (A1)-(A4) are satisfied and k_0 + log p = o(n), then the stated P-loss convergence rate holds. Define a class of precision matrices, where C_p is the class of p × p symmetric positive definite matrices. With a slight modification of Example 13.12 in an unpublished lecture note of John Duchi (Duchi; 2016), a minimax lower bound follows, where the second infimum is taken over all estimators with bandwidth k. Thus, the above empirical Bayes approach achieves a nearly optimal P-loss convergence rate.
It is easy to check the decomposition involving Q_{jk}. For a given constant ε = (τ/10)^2, we define the sets N_j, N_{1,j}, N_{2,j,k} and N^c_{j,k} = N^c_j ∩ N^c_{1,j} ∩ N^c_{2,j,k}, where λ_{jk} = ‖(I_n - P_{jk}) X_{j(k_0)} a^{(k_0)}_{0j}‖_2^2 / d_{0j}. First, we will show that these sets have probabilities tending to 1 as n → ∞, where χ^2_m(λ) denotes the noncentral chi-square distribution with m degrees of freedom and noncentrality parameter λ, and χ^2_m = χ^2_m(0). By Corollary 5.35 in Eldar and Kutyniok (2012), P_0(N_j) ≤ 4 exp(-nε^2/2) for all sufficiently large n. From the concentration inequality for chi-square random variables (Lemma 1 in Laurent and Massart (2000)), it is easy to see that P_0(N_{1,j}) ≤ 2 exp(-ε(n - k_0)) for all sufficiently large n. Finally, by Lemma 4 in Shin et al. (2015), we have a bound which is of order o(1) provided that ζ_{0n}/ε_{0n} = o(n). The last inequality holds on N^c_j for all sufficiently large n, where e_j is the unit vector whose jth element is 1 and the others are zero. Note that ε̃_j ∼ N_n(0, d_{0j} I_n) and V_{jk}/d_{0j} ∼ N(0, d_{0j} λ_{jk}) under P_0 given X_{j(k_0)}. From the moment generating function of the normal distribution, together with condition (A2), the definition of ε and τ ≤ 0.4, where C_{bm} = 10 τ^{-1} (1 - τ)^{-1} M_{bm}, it follows that (22) is bounded above by a term of order o(1) provided that (10) holds.
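The chi-square concentration step invokes Lemma 1 of Laurent and Massart (2000), which in the central case used here can be stated as:

```latex
% Chi-square concentration (Laurent and Massart, 2000, Lemma 1),
% central case: for a chi-square variable with m degrees of freedom,
\[
  P\!\left(\chi^2_m \ge m + 2\sqrt{mx} + 2x\right) \le e^{-x},
  \qquad
  P\!\left(\chi^2_m \le m - 2\sqrt{mx}\right) \le e^{-x},
  \qquad x > 0.
\]
```

Taking x proportional to n - k_0 in the upper-tail bound yields an exponential rate of the form exp(-ε(n - k_0)) for P_0(N_{1,j}).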
Let N_{j,k} be the set defined in the proof of Theorem 3.1; then the last term is bounded above accordingly, which completes the proof.
Let k_min = k_01 ∧ k_02. Then, under H_1,

B_10(X_{n_1}, Y_{n_2})^{-1} = O_p( { R_n k_min / (R_n - k_min) · T_{n, H_1, k_min, k_min} }^{-1} + R_n (R_n - k_min) / k_min · T_{n, H_0, k_min, k_min} ).
Proof of Corollary .1 By the proof of Theorem 3.1, we have the following bound, where C > 0 is a constant depending on γ, τ and M_{bm}. By Markov's inequality, it follows that, where k̂ = argmax_k π(k | X_n).
Note that the first term in (24) is of order {k_0 (k_0 + log p)/n}^{1/2} by Lemmas 2 and 4 in Lee and Lee (2017).