Restricted Type II Maximum Likelihood Priors on Regression Coefficients

In Bayesian hypothesis testing and model selection, prior distributions must be chosen carefully. For example, setting arbitrarily large prior scales for location parameters, which is common practice in estimation problems, can lead to undesirable behavior in testing (see Lindley's paradox; Lindley (1957)). We study the properties of some restricted type II maximum likelihood (type II ML) priors on regression coefficients. In type II ML, hyperparameters are "estimated" by maximizing the marginal likelihood of a model. In this article, we define priors by estimating their variances or covariance matrices, adding restrictions which ensure that the resulting priors are at least as vague as conventional proper priors for model uncertainty. We find that these type II ML priors typically yield results that are close to answers obtained with the Bayesian Information Criterion (BIC; Schwarz (1978)).


Introduction
In this article, we investigate the properties of restricted type II maximum likelihood (type II ML) priors on regression coefficients under model uncertainty. Along the way, we establish connections with the Bayesian Information Criterion (BIC; Schwarz (1978)) and proper priors. Operationally, parametric type II ML proceeds as follows: (1) start with a parametric model for the data y, specified by a sampling density f(y | θ) and a prior π_η(θ) that depends on a hyperparameter η ∈ C, and (2) set η by maximizing the marginal likelihood m(y) of the model, that is,

η̂ = arg max_{η ∈ C} ∫ f(y | θ) π_η(θ) dθ = arg max_{η ∈ C} m(y).
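As a minimal sketch of step (2), consider a toy normal-means model (not from the paper) in which the marginal likelihood can be maximized over the prior variance η in closed form; all names below are illustrative.

```python
import numpy as np

# Toy illustration of type II ML: y_i | theta ~ N(theta, 1), theta ~ N(0, eta).
# The marginal of the sufficient statistic is ybar ~ N(0, eta + 1/n), so the
# marginal likelihood is maximized at eta_hat = max(0, ybar^2 - 1/n).
rng = np.random.default_rng(0)
n, theta = 50, 2.0
y = rng.normal(theta, 1.0, size=n)
ybar = y.mean()
eta_hat = max(0.0, ybar**2 - 1.0 / n)
```

The truncation at zero already hints at the role of restrictions: without the constraint η ≥ 0, the maximizer could be an invalid (negative) prior variance.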
Type II ML was named and extensively studied in Good (1965), and it can be seen as a particular instance of empirical Bayes which, in general, "estimates" the hyperparameter η from the data (although not necessarily by maximizing the marginal likelihood: a popular alternative is the method of moments).
The motivation for this work was to seek a compromise between the use of conventional priors for model uncertainty (e.g., Zellner's g-priors or the Zellner-Siow prior; Zellner and Siow (1980); Zellner (1986)) and BIC. Conventional priors are typically centered at the smallest (or null) model, so they can be quite far from the likelihood function arising from a larger model and oriented in directions away from it, which would seem to unduly favor the null model. Raftery (1995) shows that BIC is a good approximation to the marginal likelihood one obtains when a normal prior centered at the maximum likelihood estimate (MLE) θ̂ is used. This is actually a type II ML prior, arising from estimating the prior mean by type II ML. However, it seems like an extreme use of type II ML because it centers the prior completely on the model's likelihood function.
The compromise studied herein is to keep the prior centered at the null model, as with current conventional priors, but allow the prior variance or covariance matrix to be estimated by type II ML. We hoped that this would strike a balance between conventional priors and BIC but, while we find that this "variance-oriented" type II ML prior does yield compromise results, the conclusions are typically closer to BIC.
A second surprise was that the "variance-oriented" type II ML prior (and the resulting Bayes factors) can be computed in closed form, even in the case of entirely unknown prior covariance matrices (a computational advantage over, e.g., the Zellner-Siow priors). The importance of placing restrictions on the hyperparameters is also highlighted; without them, one can even have inconsistent model selection (for example, if the scale parameter g of a Zellner g-prior (Zellner, 1986) is estimated without restrictions, the resulting procedure is not consistent when the null model is true (Liang et al., 2008)).
Our work is partially motivated by Bayarri et al. (2019), where prior-based versions of BIC (named PBIC and PBIC*) are defined. In particular, PBIC* is a version of BIC which builds upon a restricted type II ML version of the so-called "robust" prior (Berger, 1985). The scales of the prior in PBIC* maximize an approximate marginal likelihood subject to a unit-information restriction. The fact that PBIC* is well-behaved in the examples covered in Bayarri et al. (2019) motivated us to study the properties of restricted type II ML procedures under model uncertainty in greater detail.
The scenarios we consider in this article involve regression coefficients in normal linear models (Section 2), high-dimensional analysis of variance (Section 3), and the nonparametric regression example in Shibata (1983) (Section 4). In this latter section we also highlight how type II ML can be fruitfully used when prior information is available. The article ends with conclusions. All the proofs are relegated to the supplementary material (Peña and Berger, 2019).

Derivation of the type II ML prior
Consider the normal linear model Y = X_0 β_0 + X β + ε, with ε ∼ N_n(0_n, σ² I_n), where Y ∈ R^n, X_0 ∈ R^{n×p_0} contains common predictors, and X ∈ R^{n×p} contains model-specific predictors. We assume that the predictors are linearly independent and that the common and model-specific predictors are orthogonal, so that X_0'X = 0_{p_0×p} (if X_0 = 1_n, this amounts to centering X). In this section, the prior on the common parameters is the right-Haar prior π(β_0, σ²) ∝ 1/σ², which is supported by group invariance arguments in Berger et al. (1998) and Bayarri et al. (2012).
The prior distribution we consider for β, given σ², is the N_p(β | 0_p, σ² W) normal prior with mean 0_p and positive definite covariance matrix W. For a fixed W and n ≥ p + p_0, the marginal likelihood m_W(Y) is available in closed form. The type II ML approach to the determination of W consists in maximizing the marginal likelihood over W and using the result as the prior covariance matrix. An earlier version of this idea (see George and Foster (2000); Hansen and Yu (2003); Liang et al. (2008)) considered g-priors, arising from W of the form W = g (X'X)^{-1} (so that the prior covariance is g σ² (X'X)^{-1}), and maximized the marginal likelihood over the choice of g.
While this maximization over W can be done in closed form, the result is not satisfactory: it is a singular matrix. We circumvent this issue by constraining W in the maximization, through the concept of a "unit-information prior." The expected Fisher information for the regression coefficient β is (X'X)/σ², so one can argue that (X'X)/(n σ²) contains as much information as a "typical" observation in the sample (Kass and Wasserman, 1995; Raftery, 1995; Hoff, 2009). The N_p(0_p, n σ² (X'X)^{-1}) prior is often referred to as the unit-information (normal) prior, and it is a reasonably vague (but necessarily proper) prior for dealing with model uncertainty. Motivated by this discussion, we study the restricted type II ML prior obtained by maximizing m_W(Y) subject to W ⪰ n (X'X)^{-1}, where A ⪰ B means that A − B is positive semidefinite. This ensures that the restricted type II ML covariance is at least as dispersed as the unit-information prior covariance. The lower bound is also an instance of Zellner's g-prior with g = n.
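The restriction W ⪰ n(X'X)^{-1} can be checked numerically via the smallest eigenvalue of the difference; the following is a hypothetical sketch (with σ² = 1 and illustrative names).

```python
import numpy as np

# Hypothetical sketch (sigma^2 = 1): check W >= n (X'X)^{-1} in the
# positive-semidefinite sense via the smallest eigenvalue of the difference.
rng = np.random.default_rng(1)
n, p = 40, 3
X = rng.normal(size=(n, p))
X -= X.mean(axis=0)                  # center, as when X_0 = 1_n
lower = n * np.linalg.inv(X.T @ X)   # unit-information prior covariance
W = 2.0 * lower                      # a candidate prior covariance

def satisfies_restriction(W, lower, tol=1e-9):
    return bool(np.linalg.eigvalsh(W - lower).min() >= -tol)
```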
In the context of estimation, DasGupta and Studden (1989), Leamer (1978), and Polasek (1985) study priors that resemble our type II ML prior, bounding the prior covariance matrix both above and below.
Proposition 1 below shows that the covariance matrix that maximizes m_W(Y) subject to W ⪰ n(X'X)^{-1} is a linear combination of the unrestricted maximizer over all positive semidefinite matrices, which is proportional to β̂β̂', and the lower bound n(X'X)^{-1}.
Proposition 1. For n > p + p_0, the solution Ŵ to the optimization problem "maximize m_W(Y) subject to W ⪰ n(X'X)^{-1}" can be written as a linear combination of β̂β̂' and n(X'X)^{-1}.

In the following subsections, we study the properties of the type II ML prior on β that takes Ŵ as its covariance matrix in model selection and uncertainty, estimation, and prediction.

Model uncertainty and selection
Let X_i be a design matrix that includes a subset of p_i of the p predictors in X, with i ∈ {1, 2, ..., 2^p} (p_i can be 0, which corresponds to the null model), and let M_i be the model Y = X_0 β_0 + X_i β_i + ε_i, where ε_i ∼ N_n(0_n, σ² I_n) and X_0'X_i = 0_{p_0×p_i}. Throughout, we set prior covariance matrices locally, that is, each M_i is assigned its own W_i. The local approach to empirical Bayes model selection is justified through information-theoretic arguments in Hansen and Yu (2003). We perform model selection using null-based Bayes factors, namely BF_{i0} = m_i(Y)/m_0(Y). We use the notation π_ML for the joint (type II ML) prior under M_i; the prior under the null model is π_0(β_0, σ²) ∝ 1/σ². Combining the result in Proposition 1 with the Sherman-Morrison formula and the matrix determinant lemma (which can be found, for example, as Equations 160 and 24 in Petersen et al. (2008), respectively), it is straightforward to see that the null-based Bayes factor of M_i under the type II ML covariance matrix has a closed, piecewise form. The first case corresponds to the null-based Bayes factor under the lower bound covariance n(X_i'X_i)^{-1}; the type II ML procedure differs from the lower bound only if the signal-to-noise ratio (that is, R²_i) is high enough. This feature prevents the procedure from unduly favoring larger models.
Before we study the properties of the prior in more detail, we present an example with p = 2 predictors to introduce some geometric intuition. In addition, the example will help us highlight that the lower bound prior has a particular asymmetry with respect to the sign of the correlation between the predictors. It also serves as motivation to compare π_LB and the type II ML prior π_ML to the Bayesian Information Criterion (BIC; Schwarz (1978)), which for M_i is −2 times the maximized log-likelihood plus log n times the number of parameters, where β̂_0, β̂_i, and σ̂²_i are the maximum likelihood estimators of β_0, β_i, and σ², respectively. Throughout, we treat exp(−BIC/2) as an approximate marginal likelihood, with the understanding that the "BIC" of the null model is −2 log N_n(Y | X_0 β̂_0, σ̂²_0 I_n). These choices lead to the null-based Bayes factor BF_{i0,BIC} = exp{(BIC_0 − BIC_i)/2}. Raftery (1995) observed that exp(−BIC/2) is an excellent approximation to the marginal likelihood arising from the N_{p_i}(β_i | β̂_i, σ² n(X_i'X_i)^{-1}) prior, which is Zellner's g-prior with g = n, but centered at β̂_i instead of 0_{p_i}. Indeed, under such a type II ML prior, the null-based Bayes factor is almost identical to BF_{i0,BIC}. Another prior we will consider in numerical comparisons is the Zellner-Siow prior, which is Cauchy_{p_i}(0_{p_i}, σ² n(X_i'X_i)^{-1}), since this is one of the most commonly recommended model uncertainty priors.
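The BIC-based Bayes factor above can be sketched as follows; this is a hypothetical illustration (not the paper's code), with the penalty counting the regression coefficients plus σ².

```python
import numpy as np

# Hypothetical sketch: BIC for a Gaussian linear model with unknown sigma^2,
# and the BIC-based null Bayes factor exp{(BIC_0 - BIC_i)/2}.
def bic(y, X):
    n = len(y)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / n                     # MLE of sigma^2
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2_hat) + 1.0)
    return -2.0 * loglik + (X.shape[1] + 1) * np.log(n)

rng = np.random.default_rng(2)
n = 100
X0 = np.ones((n, 1))                       # common predictor: the intercept
x = rng.normal(size=(n, 1)); x -= x.mean()
y = 1.0 + 2.0 * x[:, 0] + rng.normal(size=n)
bf_bic = np.exp((bic(y, X0) - bic(y, np.hstack([X0, x]))) / 2)
```

With a strong simulated signal, the resulting Bayes factor heavily favors the larger model.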

Example 1 (Correlated predictors). Consider a model with 2 standardized (centered and scaled) predictors and an intercept: Y = 1_n β_0 + X_1 β_1 + X_2 β_2 + ε, with ε ∼ N_n(0_n, σ² I_n).
Since the predictors are standardized, their (uncorrected) sample correlation is the off-diagonal entry of (X'X)/n, which we denote r. The prior covariance between β_1 and β_2 implied by the N_2(0_2, n(X'X)^{-1}) prior is proportional to −r (Ghosh and Ghattas, 2015). Therefore, if X_1 and X_2 are positively correlated, the prior covariance between β_1 and β_2 induced by the prior is negative (and conversely for negative correlations).
We set n = 10, β = (5, 5)', and consider two cases: r = 0.9 and r = −0.9. In order to isolate the effect of changing the sign of r as much as possible, we use the same random errors in both cases and the same N_1(0, 1) random numbers for generating the design matrices before transforming them (deterministically, via principal component scores times the Cholesky matrix square root of the target sample covariance) to correlated predictors with the desired r. Figure 1 shows contours of N_2(0_2, n(X'X)^{-1}) (solid blue) and N_2(0_2, Ŵ) (solid green; setting σ² = 1), the type II ML prior. It also shows the contours of N_2(β̂, n(X'X)^{-1}) (dashed red), the "BIC prior"; note that the likelihood function (as a function of β) is proportional to N_p(β̂, (X'X)^{-1}), so it has the same shape. When r = −0.9, the marginal likelihood of the true model is high with all the priors. If r = 0.9, the highest density regions of the likelihood of the true model are assigned relatively low probability density under N_2(0_2, n(X'X)^{-1}). Table 1 confirms this geometric intuition: for sample sizes ranging from 5 to 15 and after 1000 simulations, the average posterior probability that the lower bound LB (g-prior with g = n) assigns to the true model is lower than with BIC or the type II ML prior (ML). The Zellner-Siow (ZS) prior is less sensitive to the sign of r than the lower bound, despite the fact that both are centered at 0_p and have the same prior scale.
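The deterministic transform used to fix the sample correlation can be sketched as follows (assumed details: exact standardization via principal component scores, then the Cholesky factor of the target correlation matrix; function name is illustrative).

```python
import numpy as np

# Sketch: same raw N(0, 1) draws for both designs, mapped to an exact
# target sample correlation r via the Cholesky factor of [[1, r], [r, 1]].
def correlated_design(Z, r):
    Z = Z - Z.mean(axis=0)
    U, _, _ = np.linalg.svd(Z, full_matrices=False)   # orthonormal scores
    S = U * np.sqrt(Z.shape[0])    # columns: mean 0, (uncorrected) variance 1
    L = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))
    return S @ L.T                 # now X'X / n equals the target correlation

rng = np.random.default_rng(3)
Z = rng.normal(size=(10, 2))       # the same N(0, 1) draws for both cases
Xpos = correlated_design(Z, 0.9)
Xneg = correlated_design(Z, -0.9)
```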
Our intuition can be supported mathematically: if σ² is known, the null-based Bayes factors are ordered, with the lower bound's the smallest, BIC's the largest, and the type II ML Bayes factor in between.

Figure 1: Highest probability density regions (20%, 50%, 95%) of the lower bound (g-prior) N_p(0_p, n(X'X)^{-1}) (solid blue), "BIC prior" N_p(β̂, n(X'X)^{-1}) (dashed red), and the type II ML prior N_p(0_p, Ŵ) (solid green). The MLE is indicated with a β̂ symbol.

The intuition we gathered from Example 1, that the type II ML procedure is between BIC and the lower bound (LB; i.e., a g-prior with g = n), is shown formally below.
Let M_F be the full model (which includes all p predictors) and M_0 be the null model. If the prior on the model space is the same in all cases, the ordering of the Bayes factors implies the following: if the true model is the full model, BIC assigns more probability to the truth than the type II ML prior and the lower bound; on the other hand, if the true model is the null model, the lower bound (g-prior) assigns more probability to the truth than the type II ML prior and BIC. However, there is yet another interesting asymmetry. When the true model is the null model, the differences between the lower bound and BIC tend to be small, whereas if the true model is the full model the differences can be rather large. We can provide some mathematical support for this claim. First, assume that σ² is known, so that log(BF_{i0,BIC}/BF_{i0,LB}) is a function of the regression sum of squares SSR_i, which is small in expectation when the null model is true. Also note that E[SSR_i] is increasing in p_i, which implies that the expected (log) differences between the lower bound and BIC grow as the number of predictors grows. For unknown σ², log(BF_{i0,BIC}/BF_{i0,LB}) is increasing in R²_i, which is consistent with our argument.
At the beginning of this section, we mentioned that a type II ML prior that has been previously studied is the g-prior N_p(β | 0_p, g σ² (X'X)^{-1}), where g is set locally by maximizing the marginal likelihood subject to g ≥ 0 (George and Foster, 2000; Hansen and Yu, 2003; Liang et al., 2008). This prior has undesirable features that are a byproduct of not maximizing the marginal likelihood subject to a lower bound on g that is bounded away from 0. One is that the resulting null-based Bayes factors are always greater than or equal to 1 (which leads to inconsistency if the null model is true); another is that the Bayes factor between any two models can be equal to 1 with positive probability in cases where n > p + p_0 (especially when n ≈ p + p_0), which cannot occur (with positive probability) with proper priors or our restricted type II ML prior.
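For reference, the local empirical Bayes estimate of g reported in Liang et al. (2008) has the closed form ĝ = max{F − 1, 0}, where F is the usual F statistic built from R²; a small sketch (function name illustrative; p0 counts the common parameters, here just the intercept):

```python
# Sketch of the local empirical Bayes g of Liang et al. (2008):
# g_hat = max(F - 1, 0), with F the F statistic built from R^2.
def g_hat_eb(R2, n, p, p0=1):
    F = (R2 / p) / ((1.0 - R2) / (n - p0 - p))
    return max(F - 1.0, 0.0)
```

Note that ĝ collapses to the boundary value 0 whenever F ≤ 1, which is precisely the behavior our restriction (a lower bound away from 0) rules out.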
We close this subsection by studying whether the type II ML prior satisfies the desiderata in Bayarri et al. (2012) for objective priors in model selection.

1. Basic criterion:
The basic criterion requires the prior to be proper; the type II ML prior satisfies this directly because of the restriction.
2. Model selection consistency: Let the true model be M_*: N_n(Y | X_0 β_0 + X_* β_*, σ² I_n). Then, model selection consistency is satisfied if P(M_* | Y) converges to 1 in probability. The type II ML prior is model-selection consistent under the following regularity condition, which is commonly made in the literature (Fernandez et al., 2001; Liang et al., 2008; Guo and Speckman, 2009; Maruyama and George, 2011; Bayarri et al., 2012; Som et al., 2016): for any model M_j that doesn't nest the true model, assume that lim inf_{n→∞} β_*' X_*' (I_n − P_j) X_* β_* / n > 0, where P_j is the projection onto the column space of (X_0, X_j). The assumption can be interpreted as requiring that the models' design matrices can be differentiated in the limit (Bayarri et al., 2012).

3. Information consistency: Suppose that, for fixed n, ‖β̂_i‖ → ∞, which implies R²_i → 1. This is a situation where there is overwhelming evidence in favor of M_i (Liang et al., 2008). Information consistency holds if BF_{i0} → ∞, which is satisfied by the type II ML prior.

4. Intrinsic consistency:
A prior satisfies intrinsic consistency if, as n grows, it converges to a proper prior which does not depend on model-specific parameters or n. In general, this criterion isn't satisfied by the type II ML prior. To see this, assume that (X_i'X_i)/n → Ξ_i for a positive definite matrix Ξ_i, which holds if there is a fixed design or the covariates are drawn independently from a distribution with finite second moments (Bayarri et al., 2012). Then, the prior covariance W_* for the true model has a limiting behavior (in probability) that depends on β_* and σ²_*.
5. Null and dimensional predictive matching: In both cases, the notion of minimal training sample size is central to the definition. For any model M_i, the minimal training sample size is the smallest sample size n*_i such that the marginal likelihood of the model is finite. Null predictive matching is achieved if, for any model M_i, we have BF_{i0} = 1 when the sample size equals the minimal training sample size n*_i. Dimensional predictive matching is achieved if, for any pair of models M_i and M_j of the same dimension, we have BF_{ij} = 1 whenever n*_i = n*_j. The type II ML prior is neither null nor dimensional predictive matching. For p > 1, the minimal training sample size for the type II ML prior is n = p + p_0 + 1. [If p = 1, the marginal likelihood doesn't depend on the choice of W.] When n = p + p_0, the marginal likelihood is finite for any given W, but one can choose W ⪰ n(X'X)^{-1} so that the marginal goes to ∞ (this is shown in the supplementary material). Null predictive matching isn't satisfied: in fact, BF_{i0} goes to ∞ as R²_i → 1 when n = p + p_0 + 1. Similarly, it is easy to see that dimensional predictive matching isn't satisfied either; different models will have different R²_i, yielding Bayes factors that differ from 1.
6. Invariance: The type II ML prior is invariant with respect to linear transformations of the design matrix (e.g., changes of measurement units). More explicitly, let A be an invertible p × p matrix and X̃ = XA. Let β and β̃ be the regression coefficients of the linear model when the design matrices are X and X̃, respectively. If the type II ML prior is put on β and β̃, then β and Aβ̃ are equal in distribution.

Table 2 compares the properties of the type II ML prior with those of BIC, the lower bound LB (g-prior with g = n), the Zellner-Siow prior (ZS), and the type II ML g-prior where g is set locally by maximizing the marginal likelihood subject to g ≥ 0, which we denote ĝ. Our type II ML prior is model-selection consistent, whereas the ĝ-prior isn't under the null model; however, the ĝ-prior is predictive matching, while our type II ML prior isn't. According to the definition above, it doesn't make sense to assert that BIC is invariant to linear transformations (since it isn't a prior), but it depends on the data only through R², which is invariant with respect to invertible linear transformations.
It is not a surprise that data-dependent priors lack some of the desirable properties of real priors. One sacrifices some Bayesian features when leaving the pure Bayesian domain.

Estimation and prediction

The type II ML posterior mean
For simplicity, we omit model subscripts and assume that the model is Y ∼ N_n(X_0 β_0 + X β, σ² I_n), with X_0'X = 0_{p_0×p}. If we put the right-Haar prior π(β_0, σ²) ∝ 1/σ² on the common parameters and the type II ML prior on β | σ², the posterior mean β̃ = E(β | Y) has a closed-form expression, which can be derived by applying the Sherman-Morrison formula twice. The properties of an analogous estimator in the normal means problem (for known σ²) are studied in DasGupta and Studden (1989), where it is shown to be minimax with respect to squared error loss. Proposition 3 shows that E(β | Y) is also minimax with respect to a (scaled) predictive loss because it belongs to the class of minimax estimators characterized in Strawderman (1973).

Proposition 3. Let p ≥ 3 and n > p + p_0. Then the posterior mean E(β | Y) is minimax with respect to the scaled predictive squared error loss.
The mean squared error of the posterior mean under the lower bound prior (Zellner's g-prior with g = n) is increasing in ‖β‖. On the other hand, the mean squared error of β̂ is constant in ‖β‖. The estimator β̃ is equal to the posterior mean under the lower bound when R² is small, and close to β̂ when R² is large. Therefore, β̃ avoids "selecting" the lower bound in cases where the latter has high mean squared error (that is, whenever ‖β‖ and R² are large).
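The lower-bound component of β̃ is the standard g-prior posterior mean, which shrinks the MLE by the factor g/(1 + g); a minimal sketch with g = n (helper name hypothetical):

```python
import numpy as np

# Standard g-prior fact (sketch): with prior N(0, g sigma^2 (X'X)^{-1}),
# the posterior mean of beta is the MLE shrunk by g / (1 + g); the lower
# bound prior corresponds to g = n.
def gprior_posterior_mean(y, X, g):
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (g / (1.0 + g)) * beta_hat

rng = np.random.default_rng(4)
n, p = 50, 3
X = rng.normal(size=(n, p)); X -= X.mean(axis=0)
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)
post_mean = gprior_posterior_mean(y, X, g=n)
```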

A simulation study with correlated predictors
To gain further insight into the differences between the type II ML prior, the lower bound (LB) prior (g-prior with g = n), the Zellner-Siow (ZS) prior, and BIC, we simulate data from Y = 1_n α + X β + ε, ε ∼ N_n(0_n, σ² I_n), where n = 50, α = 2, σ² = 1, and β is 8-dimensional with k nonzero elements, for k ∈ {0, 1, 2, ..., 8}. We consider 2 different correlation structures for the predictors: the orthogonal case X'X = I_p and an AR(1) structure with ρ = 0.9. For all k, the nonzero coefficients are generated as β_k ∼ N_k(0_k, g I_k), and the location of the 8 − k zeros in the β vector is drawn at random (according to the uniform distribution). We use g ∈ {5, 25} as in Cui and George (2008) and Liang et al. (2008), representing weak and strong signal-to-noise ratios, and evaluate performance with respect to the predictive squared loss function L(β, δ) = ‖Xβ − Xδ‖², where δ is an estimator of β. [This is also the loss function used in the simulation studies in Cui and George (2008) and Liang et al. (2008).] The estimators considered for the various priors are the posterior means (and β̂ in the case of BIC) of the highest probability model (HPM) and the median probability model (MPM), and the estimate arising from Bayesian model averaging (BMA). We ran 1000 simulations for all scenarios and the results are displayed in Figures 1 and 2 in the supplementary material.
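One way to set up the correlated scenario above is sketched below (assumed detail not spelled out in the text: rows of X drawn from N(0, Σ) with the AR(1) correlation Σ_uv = ρ^|u−v|; all names are illustrative).

```python
import numpy as np

# Sketch of the simulation design: AR(1) predictor correlation, k nonzero
# coefficients placed uniformly at random, signal scale g.
rng = np.random.default_rng(5)
n, p, rho, g, k = 50, 8, 0.9, 25, 3
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
beta = np.zeros(p)
active = rng.choice(p, size=k, replace=False)   # random nonzero positions
beta[active] = rng.normal(0.0, np.sqrt(g), size=k)
y = 2.0 + X @ beta + rng.normal(size=n)         # alpha = 2, sigma^2 = 1
```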
In the orthogonal case, BIC, the type II ML prior, LB (g-prior with g = n) and ZS behave similarly when g = 5. When g = 25, we can observe more differences: LB is progressively worse than the rest as the number of true predictors increases, ZS is slightly better than BIC and the type II ML prior when not all predictors are active, and the difference between ZS and BIC and the type II ML prior narrows as the number of true predictors increases.
The results with the AR(1) correlation structure show bigger discrepancies. As the number of true predictors increases, the loss of the LB is substantially higher than the loss with any other prior, especially when g = 25. When g = 5, both LB and ZS are outperformed by BIC and the type II ML prior. When g = 25, ZS has similar losses as BIC and the type II ML prior when the number of true predictors is between 0 and 6, but is outperformed when the true number of predictors is 7 or 8 (in which case, the true model is the full model).
In the cases where the LB is clearly outperformed, its posterior distribution over the model space is closer to the uniform distribution than the other posteriors, as evidenced in the first panel of Figure 3 in the supplementary material, which shows the average entropy of the posterior distributions over the model space. Additionally, ZS induces a noticeably less entropic (more concentrated) posterior distribution over the model space, especially when few predictors are active. ZS and the LB select HPMs and MPMs with fewer predictors than BIC and the type II ML prior (see the second and third panels of Figure 3 in the supplementary material, which show the percentage of times the MPM equals the true model and the average size of the MPM, respectively). When the true model is the full model, an interesting phenomenon occurs: ZS is the prior whose MPM equals the true model least often, yet its average predictive loss stays competitive with BIC and ML. Upon further inspection of our simulations, this is because when some of the true coefficients are nonzero but rather small, ZS does not include their predictors in its MPM, but that does not worsen the predictive loss by much. The HPM and MPM with BIC and the type II ML prior tend to be the same model, and they coincide with the models selected with the LB in the cases where the signal is low, as expected. On the other hand, when the signal is high, the LB assigns more probability to wrong models than the other approaches, and sometimes the HPM and MPM end up being an egregiously bad model, resulting in a substantially higher average loss. Note that ZS, which also has n(X'X)^{-1} as its prior scale but has thicker tails, does not seem to be nearly as affected by this issue as the LB, especially when the signal is high (i.e., g = 25).

High-dimensional analysis of variance
In this section, we revisit the high-dimensional problem that was introduced in Stone (1979) and later studied in Berger et al. (2003). In this example, the number of predictors p grows to infinity.
In this setting, there are p group means μ = (μ_1, ..., μ_p)' with r observations per group, so that n = pr and X'X = r I_p; the hypotheses are M_1: μ = 0_p versus M_2: μ ≠ 0_p, and τ² denotes the limiting value of ‖μ‖²/p under M_2. Let ℓ(μ) be the log-likelihood function of the full model and μ̂ the maximum likelihood estimate of μ. If BIC is defined as −2ℓ(μ̂) + p log n, it is inconsistent under M_2 (Stone, 1979). Berger et al. (2003) show that, if the prior on μ under M_2 is μ | g ∼ N_p(0_p, g I_p) with a mixing density over g (which doesn't depend on n) with support (0, +∞), consistency holds. Alternatively, if g has restricted support (0, T) for T < ∞, there is a region of inconsistency under M_2.
In this problem, the prior scale has to be chosen carefully. A naive parallel of the type II ML prior in Section 2 would have n(X'X)^{-1} = (n/r) I_p = p I_p as the lower bound for the prior covariance. However, it is straightforward to show that any normal prior whose scale goes to infinity as p → ∞ is inconsistent under M_2. Since the effective sample size for μ in this problem is r instead of n (see Berger et al. (2014)), we take g = r and study the properties of a prior whose covariance is r(X'X)^{-1} = I_p. In the same vein, BIC can be defined appropriately by taking log r as the penalty instead of log n. The asymptotic behavior of both approaches can be summarized as follows:

• Normal prior with I_p as prior covariance: Under M_1, consistency for all r ≥ 1. Under M_2, there is a region of inconsistency for small τ².
• BIC with log r as penalty: Under M 1 , inconsistency if r ∈ {1, 2} and consistency otherwise. Under M 2 , inconsistency if τ 2 ≤ (log r−1)/r and consistency otherwise. The condition is most stringent at r = e 2 , so consistency holds for all r if τ 2 > 1/e 2 .
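The claim that the threshold (log r − 1)/r is most stringent at r = e², where it equals 1/e², can be verified numerically:

```python
import numpy as np

# Numerical check: the threshold (log r - 1)/r is maximized at r = e^2,
# where it equals 1/e^2.
r = np.linspace(1.0, 100.0, 1_000_000)
thresh = (np.log(r) - 1.0) / r
r_star = r[np.argmax(thresh)]
```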
Under M 2 , the region of inconsistency of BIC is contained in the region of inconsistency of the normal prior; however, BIC can be inconsistent under M 1 .
The type II ML prior yields a closed-form Bayes factor. Under M_1, the type II ML Bayes factor is inconsistent for r = 1 and consistent for all r > 1. Under M_2, it is inconsistent for τ² ≤ [log(r + 1) − 1]/r and consistent otherwise.
The type II ML prior acts as a compromise between the normal prior and BIC but, unfortunately, it still has regions of inconsistency which mixtures of normal priors avoid. However, the type II ML Bayes factor is available in closed form, whereas the Bayes factors that stem from using mixtures of normals generally are not.
PBIC and PBIC*, which are prior-based versions of BIC that are defined and studied in Bayarri et al. (2019), are consistent under M 1 for all r ≥ 1, but inconsistent under M 2 for τ 2 < [log 2 + log(r + 1) − 1]/r. That is, under M 2 , the region of inconsistency of PBIC and PBIC* contains the region of inconsistency of the restricted type II ML prior. On the other hand, our type II ML prior is inconsistent under M 1 for r = 1, while PBIC and PBIC* are not. Therefore, in this example, the type II ML prior discussed here is more favorable to M 2 than PBIC and PBIC*.

Incorporating prior information
The constraints we have placed on the type II ML prior have been basic constraints, preventing the prior from becoming too concentrated. It is also possible to use constraints that incorporate available prior information, which can lead to improved inferences. We illustrate this possibility by revisiting the example in Shibata (1983), which was also studied in Barbieri and Berger (2004).
The goal in the Shibata example is to estimate the function f(x) = −log(1 − x), −1 ≤ x ≤ 1, from independent observations y_i = f(x_i) + ε_i, where the ε_i are independent N_1(0, σ²) errors and σ² is known. The function f can be expressed in the orthogonal series expansion f(x) = α + Σ_{m≥1} β_m T_m(x), where the T_m are the Chebyshev polynomials of the first kind. We approximate f with a finite series expansion, modeling y_i = α + Σ_{m=1}^j β_m T_m(x_i) + ε_i. We consider different truncation points j, ranging from 1 to k, so our model space consists of a sequence of nested models for j ∈ {1, 2, 3, ..., k}, where the design matrices X_j have dimension n × j and the columns are given by the Chebyshev polynomials of the first kind evaluated at the knots x_i = cos(π(n − i + 1/2)/n), for i ∈ {1, 2, ..., n}. The true coefficients in the infinite orthogonal expansion are α = log 2 and β_m = 2/m. The design matrices are orthogonal, with X_j'X_j = (n/2) I_j and 1_n'X_j = 0_j'. [See Barbieri and Berger (2004) for a more detailed explanation.] We consider three settings (n = 30, k = 29, σ² = 1; n = 100, k = 79, σ² = 1; and n = 2000, k = 79, σ² = 3) and put a uniform prior (i.e., 1/29 or 1/79) on the size of the nested models. We utilize two local type II ML priors based on β ∼ N(0, σ² A):

• The unit-information constraint A ⪰ n(X'X)^{-1}.
• In polynomial regression, the true coefficients often decrease at a polynomial rate. With that in mind, we define a type II ML prior whose covariance matrix is diagonal, with diagonal elements decreasing according to a power law. That is, A = diag(d_1, d_2, ..., d_k) with d_m = c m^{−a} for m ∈ {1, 2, ..., k}. The parameters c, a ≥ 0 are found by maximizing the marginal likelihood.
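The orthogonality of the Chebyshev design at these knots can be verified directly; a minimal sketch, using the identity T_m(cos t) = cos(mt):

```python
import numpy as np

# Verify the stated orthogonality of the Chebyshev design at the knots:
# X_j'X_j = (n/2) I_j and 1_n'X_j = 0, via T_m(cos t) = cos(m t).
n, j = 30, 5
i = np.arange(1, n + 1)
x = np.cos(np.pi * (n - i + 0.5) / n)       # the knots x_i
Xj = np.column_stack([np.cos(m * np.arccos(x)) for m in range(1, j + 1)])
```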
We will compare these methods on Shibata's example, utilizing the squared predictive loss L(f, f̂) = ∫_{−1}^{1} (f(x) − f̂(x))² dx, as in Barbieri and Berger (2004). We also consider the Akaike Information Criterion (AIC; −2ℓ(β̂_j) + 2j) and BIC (−2ℓ(β̂_j) + j log n), treating exp(−AIC/2) and exp(−BIC/2) as approximate marginal likelihoods. We compare the predictive loss of Bayesian model averaging (BMA), the median probability model (MPM; Barbieri and Berger (2004)), and the highest probability model (HPM). Note that the AIC and BIC columns for the HPM correspond to use of the actual AIC and BIC criteria, since maximizing the posterior probability is equivalent to minimizing the criterion. The MPM and BMA columns utilize AIC and BIC by converting them to approximate marginal likelihoods and applying the relevant Bayesian theory.
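The conversion of criteria to approximate posterior model probabilities under a uniform model prior can be sketched as follows (helper name hypothetical):

```python
import numpy as np

# Sketch: convert information criteria to approximate posterior model
# probabilities via exp(-criterion / 2), normalized under a uniform
# prior on the model space.
def model_weights(criteria):
    c = np.asarray(criteria, dtype=float)
    w = np.exp(-(c - c.min()) / 2.0)   # shift by min for numerical stability
    return w / w.sum()
```

The MPM then includes every basis function whose summed weight over the models containing it exceeds 1/2, and BMA averages predictions with these weights.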
The results are summarized in Table 3. BIC and the unit-information type II ML prior behave similarly in all cases, as we have seen in previous sections. The informative (power-law) type II ML prior outperforms the others. AIC is somewhat better than BIC, and their Bayesian implementations (MPM and BMA) outperform use of the raw criteria (HPM).
Across the board, BMA outperforms the rest (as expected), followed by the MPM and the HPM; the MPM is the best single predictive model in nested model scenarios, as shown in Barbieri and Berger (2004).

Conclusions
Conceptually, the type II ML priors we studied offer an attractive compromise between conventional priors, which might seem overly concentrated at the null model, and BIC. The importance of constraining the maximization so that the prior does not overly concentrate was highlighted, and the need to carefully choose the constraint in highdimensional situations was discussed.
The surprise of the analysis was that the type II ML prior gives remarkably similar answers to BIC. Indeed, the paper could be viewed as primarily providing a new justification of BIC in normal linear models, suggesting that BIC need not just be viewed as an approximation but as something that corresponds quite closely to an interpretable type II ML procedure (and not just with priors that sit on top of the model likelihoods).
In Example 1 and the simulation study in Section 2.3, we observed that the g-prior with g = n, which is the lower bound of our restricted type II ML prior, can severely underperform when the predictors are correlated (especially when most predictors are active). In our numerical comparisons, the Zellner-Siow prior, BIC, and the type II ML procedure yield similar results. Zellner-Siow seems to perform slightly better in most cases, but its performance suffers when most predictors are active. From a theoretical perspective, Zellner-Siow satisfies intrinsic consistency and predictive matching, which are not satisfied by the type II ML prior. However, the type II ML prior yields closed-form Bayes factors, whereas the Zellner-Siow prior does not (see Table 2).
Finally, we revisited the nonparametric regression example in Shibata (1983), showing how prior information could be incorporated into the constraints defining type II ML priors, leading to considerably improved performance (when the prior information is correct). This is perhaps the most promising practical venue for type II ML priors: embed available structural information about the prior into the class of priors, and then use type II ML.

Supplementary Material
Supplementary material for "Restricted type II maximum likelihood priors on regression coefficients" (DOI: 10.1214/19-BA1188SUPP; .pdf). The supplementary material contains figures that display the results of the simulation study and proofs of the propositions stated in the main text.