Robustness against conflicting prior information in regression

Including prior information about model parameters is a fundamental step of any Bayesian statistical analysis. Some view it positively because it allows, among other things, expert opinion about model parameters to be incorporated quantitatively. Others view it negatively because it sets the stage for subjectivity in statistical analysis. It certainly creates problems when the inference is skewed due to a conflict with the data collected. According to the theory of conflict resolution (O'Hagan and Pericchi, 2012), a solution to such problems is to diminish the impact of conflicting prior information, yielding inference consistent with the data. This is typically achieved by using heavy-tailed priors. We study both theoretically and numerically the efficacy of such a solution in a regression framework where the prior information about the coefficients takes the form of a product of density functions with known location and scale parameters. We study functions with regularly varying tails (Student distributions) and log-regularly-varying tails (as introduced in Desgagné (2015)), and propose functions with slower tail decays that, contrary to the two previous types of functions, allow any conflict that can happen under that regression framework to be resolved. The code to reproduce all numerical experiments is available online.


Context
In Bayesian analysis, prior information about the parameters of a regression model is included using prior distributions. Consider a model Y ∼ P_{η,ψ}, with η := x^T β being a linear predictor. For this regression model, the parameters are β and ψ, where β := (β_1, . . . , β_p)^T ∈ R^p are the regression coefficients, with p a positive integer, and ψ is a vector formed of, e.g., scale or shape parameters; x is a known vector of covariates. This regression framework encompasses linear regression, generalized linear models (GLMs) and generalized additive models (when estimated using a spline representation). In this paper, we study the impact on statistical inference of prior information in conflict with the data collected, for different types of prior distributions. Our study rests heavily on the form of the prior distribution, which will be seen to be a product form where each regression-coefficient density has known location and scale parameters, justifying the introduction of such a study within a regression framework. We focus on situations where the conflicting prior information is about regression coefficients, the latter being typically of main interest, which makes them more likely to be assigned informative prior distributions. Additionally, we focus on situations where the prior distributions on the coefficients are used to include prior information about the latter, not to regularize the model (contrary to, e.g., Johnstone and Silverman (2004), Park and Casella (2008) and Carvalho et al. (2010)), even though the study conducted here may be helpful to develop regularization strategies. Furthermore, we focus on linear regression as a special case of the general regression framework described above. This will allow us to state precise theoretical results about the behaviour of the posterior distribution in conflicting situations, depending on the type of prior distributions employed.
We will explain why and how the results presented apply in the general regression framework.
From now on, we thus consider that Y = x^T β + σε with ε ∼ f, which is equivalent to Y ∼ (1/σ) f((· − x^T β)/σ), where ε is a standardized error term, σ > 0 is a scale parameter and f is a distribution; to simplify, f is also used to denote the probability density function (PDF) associated with the distribution. When all covariates are continuous (i.e. when they all take values in uncountable totally-ordered sets), it is recommended to define the prior distribution of the regression coefficients using a conditional-independence structure (see, e.g., West (1984) and Raftery et al. (1997)):

π(β | σ) = ∏_{j=1}^p (λ_j / σ) g_j(λ_j (β_j − µ_j) / σ),   (1)

where all g_j are strictly positive bounded density functions that are symmetric with respect to 0, and µ_j ∈ R and σ/λ_j > 0 play the role of location and scale parameters, respectively; µ_j and λ_j are considered to be known and chosen by the user. In the following, we consider, to simplify, that all covariates are continuous; the theoretical results hold even when this is not the case, but under more technical assumptions. Note that, to simplify the notation, g_j is also used to denote the distribution associated with the density. Determining the outcome of conflicting prior information under a general dependence structure between the coefficients requires a multivariate analysis and depends strongly on the dependence structure. The conditional-independence structure presented above allows us to simplify the problem and transform the multivariate analysis into several univariate analyses, in addition to enabling the exploitation of existing conflict-resolution techniques that are based on univariate heavy-tailed distributions (the relevant literature will be presented when describing the techniques below). A general and multivariate analysis to determine the outcome of conflicting prior information is beyond the scope of this manuscript.
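To fix ideas, the conditional-independence prior on the coefficients can be sketched in code as a sum of univariate log densities. This is an illustrative sketch (function and argument names are ours, not from the paper's code), with the standard normal as the baseline choice for the g_j's:

```python
import numpy as np
from scipy.stats import norm

def log_prior_beta(beta, mu, lam, sigma, log_g=norm.logpdf):
    """Log prior density of the coefficients under conditional independence:
    each beta_j has a density with location mu_j and scale sigma / lambda_j,
    i.e. the log of prod_j (lambda_j / sigma) g_j(lambda_j (beta_j - mu_j) / sigma)."""
    beta, mu, lam = map(np.asarray, (beta, mu, lam))
    z = lam * (beta - mu) / sigma
    return float(np.sum(np.log(lam / sigma) + log_g(z)))

# Three coefficients, prior locations 0 and unit prior scalings (lambda_j = 1).
print(log_prior_beta([0.1, -0.2, 0.3], mu=[0.0, 0.0, 0.0],
                     lam=[1.0, 1.0, 1.0], sigma=1.0))
```

Swapping `log_g` for the log density of one of the heavy-tailed alternatives studied later changes only the tail behaviour of each factor, not the product structure.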

Conflicts
In normal linear regression, where f = N(0, 1), conjugate priors are often employed, i.e. g_j = N(0, 1) in (1) and σ² follows an inverse-gamma distribution. A prior is in conflict with the likelihood when the areas where these functions have high densities are significantly different (Figure 1). When both the prior and the likelihood are normal (given σ), an undesirable compromise follows: the posterior concentrates its mass on an area in between those with high prior and likelihood densities. This is a consequence of the slimness of the normal tails: the area where the likelihood function has high density lies in the tails of the prior density, which have an exponential decay, heavily penalizing such parameter values, and the same holds if we reverse the roles of the prior and the likelihood in the previous statement. The areas with high prior and likelihood densities thus become a posteriori less probable than an area in between; this represents how a conflict is dealt with by that Bayesian modelling and is an ineffective way of resolving a conflict. Indeed, the posterior distribution is not consistent with either of the sources of information. Here, we consider that the data model is well specified and that the data can be trusted; the information about the parameters carried by the data is thus favoured over the prior information when they conflict. Therefore, we consider that a conflict is (effectively) resolved when the conflicting prior information is discarded so as to yield a posterior distribution consistent with the data (Figure 2).
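The normal-normal compromise can be made concrete with the conjugate update for a single coefficient (σ treated as known). This is a hypothetical minimal sketch, not the paper's code: the posterior mean is a precision-weighted average that sits between the prior location and the maximum likelihood estimate.

```python
# Conjugate normal-normal update for one coefficient (sigma known),
# illustrating the undesirable compromise under conflict.
def posterior_normal(mu_prior, sd_prior, mle, sd_mle):
    w_prior, w_mle = 1 / sd_prior**2, 1 / sd_mle**2
    mean = (w_prior * mu_prior + w_mle * mle) / (w_prior + w_mle)
    sd = (w_prior + w_mle) ** -0.5
    return mean, sd

# Conflict: prior centred at 0, data pointing at 2, comparable precisions.
mean, sd = posterior_normal(mu_prior=0.0, sd_prior=0.1, mle=2.0, sd_mle=0.1)
print(mean)  # 1.0: halfway between the two sources, consistent with neither
```

With a much flatter prior (large `sd_prior`), the same formula returns a posterior mean essentially at the maximum likelihood estimate, which is the behaviour one would like to recover automatically under conflict.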
We acknowledge that the assumption that the data model is well specified and that the data can be trusted is strong, but we make it in order to focus on robustness against conflicting prior information. We can, for instance, allow for some sort of misspecification and a potential presence of extreme/erroneous data on top of conflicting prior information by considering that the data set may contain outliers, and obtain theoretical results similar to those presented in the next sections. This is because we allow the regression model to have a heavy-tailed error distribution. It is however beyond the scope of this manuscript to analyse the situation of a potential presence of outliers and present related results.

Figure 1. Two examples of conflicts where y = 0 + β_2 x_2 + σε, ε ∼ N(0, 1), g_2 = N(0, 1), the sample size is n = 100, the prior on σ is an inverse-gamma with shape and scale parameters of n/2 each, the variables are standardized, and: (a) µ_2 = 0, λ_2 = √n/2 and β̂_2^OLS = 2; (b) µ_2 = 0, λ_2 = 1.5√n and β̂_2^OLS = 0.5; β̂_2^OLS is the ordinary-least-squares (OLS) estimate, which corresponds to the maximum likelihood estimate in that case; in this figure, the likelihood function is normalized to make it a PDF

Plots (a) and (b) in Figures 1 and 2 are meant to represent two distinct conflicting situations: (a) one where the conflict is due to a prior location that is significantly different from that of the likelihood, and (b) one where it is due to an extremely small prior scaling. We analyse both situations theoretically and numerically in the next sections. The theoretical analysis will be conducted under an asymptotic regime. In the first situation, the asymptotic regime corresponds to one where the distance between the red and green areas in Figure 1 (a) increases without bounds, which is mathematically modelled by the red one moving away, i.e. µ_j → ±∞; in the second situation, we will consider that λ_j → ∞.
It will be seen that a prior distribution which leads to a resolution of conflict in the first situation does not necessarily do so in the second one. The first situation can be thought of as one where a practitioner was wrong about the parameter location, but incorporated a moderate confidence by using a moderate prior scaling. In the second situation, in addition to being wrong about the parameter location (though less severely than in the first situation), the practitioner was also overly confident; this conflicting situation could have been avoided by using a less concentrated prior. The latter is also true in the first situation, but the prior would need to be much less concentrated. That is why in one case we consider that the problematic aspect is the location, whereas we consider that it is the scaling in the other one.

Figure 2. Same as Figure 1, with the only difference being that in: (a) g_2 = LPTN as defined in Section 2 with ρ = 0.95; (b) g_2 = CTN as defined in Section 2 with ϱ = 0.98
A natural way to achieve effective conflict resolution is to have recourse to heavy-tailed distributions: in situations like those presented in Figure 2, the areas where the likelihood functions have high densities are still in the tails of the prior densities, but more weight is assigned to those tails, thus penalizing less for such extreme situations. This strategy dates back to de Finetti (1961), with a first analysis in Lindley (1968), followed by the introduction of a formal theory in Dawid (1973), Hill (1974) and O'Hagan (1979). For a recent review of Bayesian heavy-tailed models and conflict resolution, see O'Hagan and Pericchi (2012). In the latter paper, it is noted that there exists a gap between the models formally covered by the theory of conflict resolution and models commonly used in practice. The latest developments focus on situations where the conflicting information is carried by outlying data points in location-scale models (Desgagné, 2015) and linear regression (Desgagné and Gagnon, 2019; Gagnon et al., 2020a, 2021; Hamura et al., 2022; Gagnon and Hayashi, 2023). The present paper contributes to the expansion of the theory of conflict resolution by covering conflicting prior information in regression.
We consider that in the ideal situation where it is guaranteed that the priors will not conflict with the data that will be collected, prior information about the regression coefficients is included by setting g_j = N(0, 1), which is the favoured choice in practice. Given that we consider that the data model is well specified and that the data can be trusted, the distributions that we alter to achieve effective conflict resolution are thus the g_j's. A desideratum of the resulting heavy-tailed priors is to yield inference similar to that of the informative light-tailed priors they replace in the absence of conflict. In the following, we study three alternatives to the normal distribution with three different types of tail decays: a first one with regularly-varying tails, a second one with log-regularly-varying tails (Desgagné, 2015), and a third one with constant tails. They are all presented in Section 2, in which an overview of their advantages and disadvantages is also provided. In Section 3, their efficacy is precisely characterized through theoretical results. An extensive simulation study is next provided in Section 4 to show how these theoretical results translate in practice. The manuscript finishes in Section 5 with retrospective comments. All proofs of theoretical results are deferred to Appendix A (supplementary material). Some details of the simulation study are presented in Appendix B (supplementary material).

Heavy-tailed priors
We start in Section 2.1 by presenting the main characteristics of the most commonly employed alternative to the normal distribution in conflict resolution, the Student distribution. Even in the least problematic situation, namely that where the conflict is due to a prior location significantly different from that of the likelihood, it will be seen to resolve conflicts only partially. We next provide a description of the log-Pareto-tailed normal (LPTN) distribution in Section 2.2, which has the ability to wholly discard the prior information in that situation. This distribution was introduced by Desgagné (2015). Its density exactly matches that of the standard normal on the interval [−τ, τ], where P(−τ ≤ N(0, 1) ≤ τ) = ρ. Outside of this area, the tails of this continuous density are log-regularly varying (Desgagné, 2015) and behave as log-Pareto tails, i.e. (1/|z|)(1/log |z|)^θ, hence its name. The only free parameter of this distribution is ρ: the parameter θ is a function of ρ and τ, the latter being itself a function of ρ. Even with such heavy tails, the LPTN distribution leads to an ineffective conflict resolution when the conflict is due to a small prior scaling. In response to this problem, we introduce in Section 2.3 the constant-tailed normal (CTN) distribution, which, like the LPTN distribution, has a density that matches that of the standard normal on a central interval, but with constant tails.

Student distribution
The Student distribution is without a doubt the favourite heavy-tailed alternative to the standard normal distribution. A reason for this is that its density shares important characteristics with the standard normal one, like a bell shape and symmetry around 0. We show this in Figure 3 (a) for a Student distribution with 4 degrees of freedom, which represents a good compromise between heavy tails and close similarity with the normal distribution. In Figure 3 (b), we show how the ratio (1/c) g_j(z/c) / g_j(z) behaves as z → ∞ when g_j is the PDF of a Student distribution with 4 degrees of freedom, to graphically illustrate its regularly-varying property, a property that is discussed in greater detail below. Employing Student prior distributions instead of normal ones for conflict resolution in regression has been explored before; see, e.g., West (1984) and Mutlu et al. (2019). However, the focus of previous papers was different from that of the current one, which is to compare that alternative to normal prior distributions with other alternatives through an extensive theoretical and numerical analysis. In West (1984), for instance, the focus is rather to study the use of the Student distribution in a context of robustness against outliers; the Student is viewed as a member of a specific family of alternatives to normal distributions, that of scale mixtures of normal distributions.
The tails of the Student density are regularly varying, implying that for any fixed λ_j, σ and β_j,

lim_{µ_j → ±∞} (λ_j / σ) g_j(λ_j (β_j − µ_j) / σ) / g_j(µ_j) = (σ / λ_j)^γ,   (2)

where γ is the degrees of freedom. Examining the limiting behaviour of prior densities is an important step in understanding the limiting behaviour of the posterior distribution in conflicting situations. Indeed, given that the posterior density is the normalized product of the prior densities and the likelihood function, the limit above suggests that a conflicting prior density (due to a significantly different location) behaves in the limiting posterior distribution like (σ/λ_j)^γ g_j(µ_j) ∝ σ^γ. The theoretical results in Section 3 precisely characterize the behaviour of the limiting posterior distribution, depending on the conflicting situation and the priors employed. With a Student prior distribution, conflicting information is partially rejected, as a trace remains: σ^γ. Ideally, conflicting information is wholly rejected as its source becomes increasingly remote (West, 1984), which translates into a prior density that behaves asymptotically like g_j(µ_j) ∝ 1. This explains why we say that the Student distribution only partially resolves conflicts due to significantly different locations. The existence of that trace is a consequence of employing a prior density with insufficiently heavy tails. Indeed, it will be seen in Section 2.2 that the limit of the ratio in (2) when instead setting g_j to a LPTN distribution is 1. The trace has an impact on the limiting posterior variability of all coefficients, which is more or less significant depending on the degrees of freedom, the sample size and the number of conflicting prior densities (this is shown explicitly in Section 4). When the sample size is large relative to the degrees of freedom and the number of conflicting prior densities, as in the numerical experiment of Section 4, the impact is small.
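The regularly-varying limit described above can be checked numerically with SciPy's Student density. A small sketch (the parameter values are ours): the ratio approaches (σ/λ_j)^γ as µ_j grows.

```python
import numpy as np
from scipy.stats import t

# Check of the regularly-varying limit for a Student prior with gamma = 4
# degrees of freedom: (lam/sigma) g(lam (beta - mu)/sigma) / g(mu) should
# approach (sigma/lam)^gamma as mu -> infinity. Values below are illustrative.
gamma, beta_j, lam, sigma = 4, 0.0, 2.0, 1.5
g = t(df=gamma).pdf

ratios = {mu: (lam / sigma) * g(lam * (beta_j - mu) / sigma) / g(mu)
          for mu in [1e2, 1e4, 1e6]}
print(ratios, (sigma / lam) ** gamma)  # ratios tend to (1.5/2)^4 = 0.31640625
```

The residual factor (σ/λ_j)^γ is exactly the trace σ^γ (up to the fixed constant λ_j^{−γ}) discussed above.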
The insufficiently heavy tails however make the convergence to the limiting posterior distribution slower compared with other alternatives having heavier tails, implying a slower partial resolution of conflicts. We finish this section by noting that the Student prior density converges to a point mass at µ_j when λ_j → ∞, which makes it ineffective at resolving conflicts due to extremely small prior scalings.

LPTN distribution
The density of the LPTN distribution is as follows:

g_LPTN(z) = φ(z) if |z| ≤ τ,  and  g_LPTN(z) = φ(τ) (τ / |z|) (log τ / log |z|)^θ if |z| > τ,

where z ∈ R, and τ > 1 and θ > 1 are functions of a parameter ρ ∈ (2Φ(1) − 1, 1) ≈ (0.6827, 1), with τ = Φ^{−1}((1 + ρ)/2) and θ determined by the constraint that the density integrates to 1; here φ, Φ and Φ^{−1} are the PDF, cumulative distribution function (CDF) and inverse CDF of a standard normal, respectively. A LPTN density with ρ = 0.95 is presented in Figure 4 (a). This choice of value for ρ yields, like the Student with 4 degrees of freedom in Figure 3 (a), a good compromise between heavy tails and close similarity with the normal distribution. In Figure 4 (b), we show how the ratio (1/c) g_j(z/c) / g_j(z) behaves as z → ∞ when g_j is the PDF of a LPTN distribution with ρ = 0.95, to graphically illustrate its log-regularly-varying property, a property that is discussed in greater detail below.
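A possible implementation of this density is sketched below, assuming the construction of Desgagné (2015) as we read it: τ = Φ^{−1}((1 + ρ)/2), and θ obtained from the integrate-to-one constraint on the two log-Pareto tails (which yields θ = 1 + 2φ(τ)τ log(τ)/(1 − ρ)); these closed forms should be checked against the original reference.

```python
import numpy as np
from scipy.stats import norm

def lptn_params(rho):
    """tau and theta of the LPTN(rho) density; theta follows from requiring
    the two log-Pareto tails to carry the remaining mass 1 - rho."""
    tau = norm.ppf((1 + rho) / 2)  # P(-tau <= N(0,1) <= tau) = rho
    theta = 1 + 2 * norm.pdf(tau) * tau * np.log(tau) / (1 - rho)
    return tau, theta

def lptn_pdf(z, rho=0.95):
    """Standard normal on [-tau, tau], log-Pareto tails outside."""
    tau, theta = lptn_params(rho)
    absz = np.abs(np.asarray(z, dtype=float))
    safe = np.maximum(absz, tau)   # avoids log of values < 1 in the unused branch
    tail = norm.pdf(tau) * (tau / safe) * (np.log(tau) / np.log(safe)) ** theta
    return np.where(absz <= tau, norm.pdf(absz), tail)

tau, theta = lptn_params(0.95)
print(tau, theta)  # tau ~ 1.96; theta > 1, so the tails are integrable
```

By construction the density is continuous at |z| = τ (both branches equal φ(τ) there), while its derivative is not, which is the non-smoothness discussed below.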
As was seen in Figure 2 (a), this slightly modified version of the normal distribution with ρ = 0.95 can resolve conflicts very effectively; the likelihood function and posterior density are indeed on top of each other in that figure. The parameter ρ, chosen by the user, represents the mass of the central part that exactly matches the N(0, 1) density. The value 0.95 has been seen to be a good choice for robustness against outliers in linear regression (see Gagnon et al. (2020a) and Gagnon et al. (2021)). We analyse the impact of the value of ρ in the present context in Section 4. An advantage of Student distributions over LPTN distributions is that their densities are smooth. Indeed, we see in Figure 4 (a) that, in order to obtain a density that exactly matches that of the standard normal on an interval, while having heavier tails and being continuous, the LPTN density has to decrease more quickly than the normal one over a short interval beyond |z| = τ, making the derivative of the LPTN density discontinuous at |z| = τ. The smoothness of a posterior density has an impact on the efficiency of the numerical methods used to approximate integrals with respect to the associated posterior distribution. With densities having discontinuous derivatives, one might wonder whether it is even possible to apply numerical methods explicitly exploiting gradients of log posterior densities, like Metropolis-adjusted Langevin algorithms (Roberts and Tweedie, 1996) and Hamiltonian Monte Carlo (HMC, Duane et al. (1987)), both of which are Markov-chain Monte Carlo methods. Given that the discontinuity points of the LPTN derivative have null measure, these methods can be applied. HMC has in fact been employed to sample from the resulting posterior distributions to compute estimates and posterior variances in Section 4, and no problems have been encountered.
The main advantage of LPTN distributions over Student distributions is that the limit of the ratio of densities analogous to (2) is equal to 1, as established in the next proposition, showing their ability to effectively resolve conflicts due to significantly different locations.
Proposition 1 (Asymptotic location-scale invariance). If g_j = g_LPTN, we have that for any fixed λ_j, σ and β_j,

lim_{µ_j → ±∞} (λ_j / σ) g_j(λ_j (β_j − µ_j) / σ) / g_j(µ_j) = 1.

The property of asymptotic location-scale invariance is shared by all log-regularly-varying distributions (LRVDs, Desgagné (2015)). Most members of this family of distributions supported on the real line are distributions whose tails were not originally log-Pareto (like the standard normal distribution), but whose tails were replaced to reach the desired tail decay, i.e. (1/|z|)(1/log |z|)^θ (like the LPTN distribution). The strategy of replacing the tails of a light-tailed prior distribution by heavy tails to attain asymptotic location-scale invariance can thus be applied even when the light-tailed prior is not a normal. A distribution which is a LRVD, but with originally log-Pareto tails, is the log transformation of a Pareto distribution. This distribution is not often employed because its density has a spike at zero, and it is thus less appealing than smooth bell curves like normal and Student densities.
Another advantage of LPTN distributions over Student distributions is that their density is even more similar to the normal one, yielding more similar inferences in the absence of conflict, as will be seen in Section 4. Although there are advantages in using LPTN prior distributions, there are disadvantages; one has been mentioned above, but the main disadvantage is that they do not allow, like all LRVDs, conflicts due to large λ_j to be resolved. Indeed, for any fixed β_j, µ_j ∈ R and σ > 0, with β_j ≠ µ_j, if g_j = g_LPTN and λ_j is large enough,

(λ_j / σ) g_j(λ_j (β_j − µ_j) / σ) = φ(τ) τ (log τ)^θ |β_j − µ_j|^{−1} (log λ_j)^{−θ} (1 + [log(|β_j − µ_j| / σ)] / log λ_j)^{−θ},   (3)

which is asymptotically equivalent as λ_j → ∞ to

φ(τ) τ (log τ)^θ |β_j − µ_j|^{−1} (log λ_j)^{−θ}.

Analogously to Student priors in the previous section (but not for the same type of conflict), conflicting information is partially rejected, as a trace remains: |β_j − µ_j|^{−1}. The latter includes information about the location in a significantly different way than the normal distribution does. Additionally, the rightmost term in (3) converges to 1 slowly (because the speed at which [log(|β_j − µ_j| / σ)] / log λ_j vanishes is dictated by that at which log λ_j goes to infinity). This implies that the conflicting information is slowly (in addition to partially) rejected, which will be observed empirically in Section 4. For all these reasons, we consider that LPTN prior distributions are ineffective at resolving conflicts due to small prior scalings, motivating the introduction of different heavy-tailed alternatives to normal prior distributions.

CTN distribution
The distribution that is introduced to resolve a conflict due to either a prior location significantly different from that of the likelihood or a small prior scaling is the CTN distribution. Its density is as follows:

g_CTN(z) = φ(z) if |z| ≤ κ,  and  g_CTN(z) = φ(κ) if |z| > κ,

where z ∈ R and κ is a function of the sole free parameter of the CTN distribution, ϱ ∈ (0, 1), with a definition analogous to that of τ in the previous section: κ = Φ^{−1}((1 + ϱ)/2). A CTN density with ϱ = 0.95 is presented in Figure 5 (a). We observe in Figure 5 (a) that even though the CTN density with ϱ = 0.95 matches the standard normal one on the same interval as the LPTN with ρ = 0.95 (Figure 4 (a)), its level of similarity with the standard normal density is much lower. Increasing the value of ϱ from 0.95 to, for instance, 0.98 alleviates this issue, as seen in Figure 5 (b), at the price of a slower conflict resolution (but not significantly slower, as will be seen in Section 4). The effectiveness of CTN priors with ϱ = 0.98 at resolving conflicts due to small prior scalings was shown in Figure 2 (b). The main disadvantage of CTN prior distributions is that they are improper; they thus cannot be used when there are more parameters than observations. Another disadvantage is that the derivative of their densities is discontinuous, like that of LPTN densities. The discontinuity points of the CTN derivative have null measure, like those of the LPTN derivative, implying that samplers like HMC can be employed. The estimates and posterior variances needed for Section 4 have been computed using HMC, and no problems have been encountered, as with the posterior distributions resulting from LPTN priors.
The main advantage of CTN prior distributions is their limiting behaviour: for any fixed β_j and σ,

(λ_j / σ) g_j(λ_j (β_j − µ_j) / σ) = λ_j g_j(κ) (1/σ)

whenever |λ_j (β_j − µ_j) / σ| > κ, which eventually happens when: i) µ_j → ±∞ and λ_j is fixed, or ii) λ_j → ∞ and µ_j is fixed but different from β_j. The conflict resolution is not perfect, as the term 1/σ does not disappear in the limiting posterior density. This term comes from the form of the prior (with a scale parameter given by σ/λ_j), and is not a consequence of insufficiently heavy tails, as was the case for Student prior distributions for conflicts due to significantly different locations and LPTN prior distributions for conflicts due to small prior scalings. It should thus be seen as a flaw of the form of the prior rather than of the tails. The tails of CTN densities are indeed sufficiently heavy, and allow any conflict that can happen under our regression framework to be resolved. A consequence of their sufficiently heavy tails is that they yield a fast convergence towards the limiting posterior distribution, as will be seen in Section 4. As mentioned for Student prior distributions, which yield a similar trace (recall (2)), the remaining term 1/σ for CTN prior distributions has an impact on the limiting posterior variability of all coefficients, which is more or less significant depending on the sample size and the number of conflicting prior densities. However, contrary to Student prior distributions, the impact does not increase with the level of similarity between CTN prior distributions and normal ones; recall that the trace left asymptotically by a conflicting Student prior distribution (due to significantly different locations) is σ^γ and that the level of similarity with a normal prior is controlled through γ. The level of similarity between CTN prior distributions and normal ones is controlled through κ, and its value does not have an impact on the trace left asymptotically by a conflicting CTN prior distribution; the trace is 1/σ regardless of the value of κ.
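The CTN construction and the exact 1/σ trace it leaves can be illustrated as follows. This is our sketch (the parameter values are illustrative), with the density handled as an unnormalized kernel since the distribution is improper.

```python
import numpy as np
from scipy.stats import norm

def ctn_kernel(z, varrho=0.98):
    """CTN density kernel: standard normal on [-kappa, kappa], constant
    tails at the level phi(kappa) outside; improper, hence only a kernel."""
    kappa = norm.ppf((1 + varrho) / 2)
    absz = np.abs(np.asarray(z, dtype=float))
    return np.where(absz <= kappa, norm.pdf(absz), norm.pdf(kappa))

kappa = norm.ppf((1 + 0.98) / 2)
# Once |lam (beta_j - mu_j) / sigma| > kappa, the prior factor equals
# lam g(kappa) / sigma, so normalizing by lam g(kappa) leaves exactly 1/sigma.
lam, sigma, beta_j, mu_j = 50.0, 1.3, 0.5, 0.0
trace = ((lam / sigma) * ctn_kernel(lam * (beta_j - mu_j) / sigma)
         / (lam * ctn_kernel(kappa)))
print(float(trace), 1 / sigma)  # both ~0.769, whatever the conflict severity
```

The flat tail means the kernel returns the same value however remote the argument is, which is why no information about µ_j survives in the conflicting factor.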
Note that, in an ideal setting where one knows how many conflicting prior distributions there are, one can multiply the prior of σ (the one that would ideally be used in a situation where there is no conflict) by σ raised to a power corresponding to the number of conflicting priors to perfectly resolve the conflict. In practice, one may have prior beliefs about that number, but cannot be sure about it. Consequently, we recommend not altering the prior of σ, as doing so can cause more harm than good.

Theoretical results
In this section, we present three theoretical results. For the presentation of these results, it is required to introduce a proper mathematical framework and details about the model. This is done in Section 3.1, and the results follow. In Section 3.2, we consider an ideal situation where one has access to full information about the conflict, namely, which prior distributions are in conflict and why. This situation is unrealistic, but it allows us to show what an ideal conflict resolution is in a regression framework. Next, in Section 3.3, we present a result in a situation where one has access to partial information, namely, that there is no conflict due to small prior scalings. This is a more realistic scenario that can be thought of as one where a practitioner includes information about the regression coefficients, but is cautious while doing so, in the sense that the practitioner uses moderate to large prior scalings. In practice (as we saw in Figure 1 (a)), there is no certainty that there will be no conflict due to a prior location significantly different from that of the likelihood, even when using moderate to large prior scalings, and we consider that the practitioner wants to be protected against this risk. The last situation is the most common one, where a practitioner wants to include information about the regression coefficients and thus sets values for all µ_j and λ_j. While having no reason to believe a priori that a conflict will occur (and thus while having no information regarding a potential conflict), the practitioner wants to be protected. A result in that situation is presented in Section 3.4.
Throughout the current section, we aim to characterize with theoretical results how conflicts are dealt with asymptotically when using heavy-tailed priors. Theoretical results like those in Bunke and Milhaud (1998) allow the study of the limiting behaviour of posterior distributions resulting from heavy-tailed priors under another asymptotic regime than the one studied here, namely the large-sample regime n → ∞. Even if some heavy-tailed priors presented in Section 2 are non-smooth, it can be proved that they yield posterior distributions that concentrate around the correct parameter values as n → ∞, and posterior estimates that are consistent and asymptotically normal, provided that the priors are non-conflicting (µ_j and λ_j are all held fixed) and the data model is regular enough (which is the case, for instance, for linear regression and GLMs). This means that if there is no conflicting prior information, whether heavy-tailed priors are used or not does not have an impact asymptotically, as n → ∞, on the posterior distributions and estimates.

Mathematical framework
We first precisely describe the asymptotic regime under which the theoretical results in the next subsections are stated. We assume that possibly some µ_j → ±∞ and/or some λ_s → ∞ (with s different from j). To analyse separately the effects of misspecified locations and scalings, and to simplify the analysis, we consider that when µ_j → ±∞, λ_j is fixed, and when λ_s → ∞, µ_s is fixed. More precisely, we consider that for all j,
• µ_j = a_j + b_j ω with a_j ∈ R, where b_j ≠ 0 for conflicting locations, but b_j = 0 otherwise;
• λ_j = c_j + d_j ω with c_j > 0 and d_j ≥ 0, where d_j > 0 for conflicting scalings, but d_j = 0 otherwise;
under the constraint that b_j = 0 if d_j > 0 and d_j = 0 if b_j ≠ 0, and we let ω → ∞. This framework allows, for instance, for conflicting scalings to decrease (because λ_j → ∞) at different speeds, meaning that it represents situations where there may be several conflicting scalings whose values, while being extreme, are not the same.
We now present the model assumptions and introduce required notation. Consider that we have observed n data points from a dependent variable, denoted by y_1, . . . , y_n ∈ R, where n is a positive integer. Consider also that we have access to n vectors of p ∈ {2, 3, . . .} covariates, denoted by x_1, . . . , x_n ∈ R^p, with x_{11} = · · · = x_{n1} = 1 to introduce an intercept in the model. As typically done in linear regression, we treat these vectors as known constants, i.e. not as realizations of random variables, contrary to y_1, . . . , y_n. The posterior distribution is thus conditional on the latter only.
In linear regression, the random variables Y_i are modelled as

Y_i = x_i^T β + σ ε_i,  i = 1, . . . , n,   (4)

where ε_1, . . . , ε_n ∈ R are random standardized errors. We assume that the n + 2 random variables ε_1, . . . , ε_n, β and σ are independent, implying that

(Y_1, . . . , Y_n) | β, σ  D=  (x_1^T β + σ ε_1, . . . , x_n^T β + σ ε_n) | β, σ,

where "D=" denotes an equality in distribution. This latter assumption is common. The resulting posterior density is given by

π_ω(β, σ | y) = (1 / m_ω(y)) π_ω(β, σ) ∏_{i=1}^n (1/σ) f((y_i − x_i^T β) / σ),   (5)

where y := (y_1, . . . , y_n)^T, π_ω(·, ·) is the prior density and

m_ω(y) := ∫_0^∞ ∫_{R^p} π_ω(β, σ) ∏_{i=1}^n (1/σ) f((y_i − x_i^T β) / σ) dβ dσ.

A dependence on ω (implying a potential presence of conflict) is highlighted using a subscript. The definition of the posterior distribution in (5) only makes sense when the density is integrable, and thus when the marginal density m_ω(y), playing the role of a normalizing constant in this case, is finite. We provide in the next subsections sufficient conditions ensuring that this is the case for all ω and for the limiting posterior density. The limiting posterior distribution is denoted by π(·, · | y) and its normalizing constant by m(y). Their expressions depend on the situations presented in the next subsections. We now present regularity conditions on f. We assume that:
• f is a strictly positive continuous PDF that is symmetric with respect to 0;
• all parameters of f, if any, are known;
• there exists a threshold above which the function ξ defined by z → z f(z) is monotonic;
• there exists a positive constant M such that f / g_LPTN ≤ M.
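For concreteness, the unnormalized log of the posterior density defined through (5) can be sketched as follows, assuming a standard normal f, normal g_j's, and a prior density on σ proportional to 1/σ; this is an illustrative sketch (function and variable names are ours) of the kind of target that would be passed to a sampler such as HMC.

```python
import numpy as np
from scipy.stats import norm

def log_posterior(beta, sigma, y, X, mu, lam,
                  log_f=norm.logpdf, log_g=norm.logpdf):
    """Unnormalized log of pi_omega(beta, sigma | y): prior on sigma
    (here proportional to 1/sigma), conditional prior on beta, likelihood."""
    if sigma <= 0:
        return -np.inf
    lp_sigma = -np.log(sigma)
    z = lam * (beta - mu) / sigma                 # coefficient prior factors
    lp_beta = np.sum(np.log(lam / sigma) + log_g(z))
    resid = (y - X @ beta) / sigma                # likelihood factors
    ll = np.sum(log_f(resid) - np.log(sigma))
    return float(lp_sigma + lp_beta + ll)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=20)])  # intercept + covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(size=20)
val = log_posterior(np.array([1.0, 2.0]), 1.0, y, X,
                    mu=np.zeros(2), lam=np.ones(2))
print(val)
```

Replacing `log_f` or `log_g` by the log density of a heavy-tailed alternative (Student, LPTN, CTN kernel) changes only the corresponding factors, mirroring the modelling choices discussed in Section 2.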
Examples of PDFs satisfying these conditions include those of the normal, Laplace, Student (with pre-specified degrees of freedom) and LPTN (with pre-specified ρ) distributions. The last assumption above on f concerns its tail decay: the tails of f must decay at least as fast as those of g LPTN . This implies that our results are also valid when heavy-tailed error distributions are used for robustness against outliers. The assumptions on π ω ( · | σ) have been presented in Section 1.1. Denote by π( · ) the prior of σ that would ideally be used in a situation where there is no conflict. The assumptions on this density depend on the situations presented in the next subsections and will thus be stated in these subsections.
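The LPTN density just mentioned can be sketched as follows. The matching constants below, τ = Φ −1 ((1 + ρ)/2) and the tail exponent λ = 2φ(τ)τ log(τ)/(1 − ρ) + 1, reflect our reading of the parameterization in Desgagné (2015) and should be treated as assumptions, not as a definitive implementation.

```python
import math

def std_normal_quantile(p):
    """Invert Phi by bisection, using Phi(x) = (1 + erf(x / sqrt 2)) / 2."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lptn_pdf(z, rho=0.95):
    """Log-Pareto-tailed normal: standard normal on [-tau, tau], and
    log-Pareto tails phi(tau) (tau/|z|) (log tau / log |z|)^lam beyond,
    with tau and lam chosen so the density is continuous and integrates
    to 1 (assumed parameterization; requires tau > 1, i.e. rho > 0.6827)."""
    tau = std_normal_quantile((1.0 + rho) / 2.0)
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    lam = 2.0 * phi(tau) * tau * math.log(tau) / (1.0 - rho) + 1.0
    z = abs(z)
    if z <= tau:
        return phi(z)
    return phi(tau) * (tau / z) * (math.log(tau) / math.log(z)) ** lam
```

Under this form, the central part carries mass ρ and each tail carries (1 − ρ)/2, with tails decaying like 1/(z (log z) λ ), slower than any power of z; this log-regularly-varying behaviour is what the theorems below exploit.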
We finish this section by defining the index set of conflicting priors: C := { j : b j ≠ 0 or d j > 0}. The index set of non-conflicting priors is thus given by C c . We also define two subsets of C: C b := { j : b j ≠ 0} and C d := { j : d j > 0}, which are such that C b ∪ C d = C and C b ∩ C d = ∅.
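The parametrization µ j = a j + b j ω, λ j = c j + d j ω and these index sets can be sketched as follows; the numeric values of a j , b j , c j , d j are arbitrary illustrations, not values from the paper.

```python
# Illustrative values for the parametrization mu_j = a_j + b_j*omega,
# lambda_j = c_j + d_j*omega of Section 3.1 (the numbers are arbitrary).
a = [0.0, 1.0, -2.0, 0.5]
b = [0.0, 3.0, 0.0, 0.0]   # b_j != 0: conflicting location (coefficient 1)
c = [1.0, 1.0, 2.0, 1.0]
d = [0.0, 0.0, 0.0, 4.0]   # d_j > 0: conflicting scaling (coefficient 3)

def hyperparams(omega):
    """Return (mu, lam) at a given omega; conflicts become extreme as omega grows."""
    mu = [aj + bj * omega for aj, bj in zip(a, b)]
    lam = [cj + dj * omega for cj, dj in zip(c, d)]
    return mu, lam

# Index sets of conflicting priors
C_b = {j for j, bj in enumerate(b) if bj != 0}
C_d = {j for j, dj in enumerate(d) if dj > 0}
C = C_b | C_d
```

Note that the constraint of Section 3.1 (b j = 0 if d j > 0 and vice versa) makes C b and C d disjoint by construction.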

Full information
Consider that we have set values for all µ j and λ j , and that we are provided with the set C. We use the latter to set all g j accordingly. More precisely, for all j ∈ C b , we set g j = g LPTN , and for all j ∈ C d , we set g j = g CTN . We consider that the non-conflicting priors, with j ∈ C c , are set to proper distributions with densities having tails no heavier than those of LPTN densities. Given that we are provided with the set C and we set some priors to CTN distributions (if C d ≠ ∅), we adjust the prior on σ to get rid of the trace left asymptotically by CTN distributions, i.e. the resulting prior density is proportional to π(σ) multiplied by σ |C d | . We assume that π(σ) is bounded above by a constant or a constant times 1/σ, for all σ > 0, which allows for most proper prior distributions and for improper prior densities proportional to 1/σ or 1.
Theorem 1. Assume that for all j ∈ C b , g j = g LPTN , for all j ∈ C d , g j = g CTN , and for all j ∈ C c , the positive constant M can be chosen such that g j /g LPTN ≤ M. Assume that the prior on σ has a density that is proportional to σ |C d | π(σ), for all σ > 0. Assume that the constant M can be chosen such that π(σ) ≤ max(M, σ −1 M). Assume that n + |C c | ≥ 2p − 1 + |C b |. Under the framework described in Section 3.1 (recall in particular the form of the prior distribution (1), the definition of the posterior distribution (5), and that µ j = a j + b j ω and λ j = c j + d j ω), and as ω → ∞, the posterior distribution converges: π ω ( · , · | y) → π( · , · | y), with

π(β, σ | y) = (1/m(y)) π(σ) ∏ j∈C c π j (β j | σ) ∏ i=1,...,n (1/σ) f ((y i − x T i β)/σ).    (6)

The result of Theorem 1 essentially follows from a characterization of the asymptotic behaviour of the marginal distribution:

m ω (y) / [∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] → m(y) as ω → ∞,    (7)

with m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] < ∞ and m(y) < ∞ (implying that the posterior distributions are proper); recall that κ = Φ −1 ((1 + ϱ)/2), where ϱ is the parameter of the CTN distribution. From the characterization in (7) we can, indeed, prove that the posterior density converges pointwise, which in turn allows us to prove the convergence of the posterior distribution using Scheffé's theorem (see Scheffé (1947)).
To prove (7), we exploit the proof of Theorem 2.1 in Gagnon et al. (2020a). That paper is about robustness to outliers in linear regression. Theorem 2.1 in Gagnon et al. (2020a) characterizes the limiting behaviour of the posterior distribution as some y i → ±∞. The prior on all parameters is assumed to be non-conflicting, with a joint prior density bounded above by max(M, σ −1 M). To exploit the proof of that result, we write the ratio in (7) as an integral that is seen to converge to 1 if we are allowed to interchange the limit ω → ∞ and the integral. We verify that we are allowed to do this by using Lebesgue's dominated convergence theorem. The problem then becomes to prove that the integrand is bounded above by an integrable function of β and σ that does not depend on ω. This is the main difficulty. We show that it is sufficient to bound π(β, σ | y) above by an integrable function of β and σ that does not depend on ω.
To fit within the framework of Gagnon et al. (2020a), it suffices to treat λ j µ j for j ∈ C c ∪ C b as an observation from the dependent variable and λ j , which multiplies β j , for j in the same set as a vector of covariates where the other covariates are all equal to 0. Then we realize that a technical and lengthy part of the proof of Theorem 2.1 in Gagnon et al. (2020a) is devoted to proving that a function of which (8) is a special case is bounded by an integrable function of β and σ that does not depend on ω. The main challenge is that the terms g j (µ j ) in the denominator of the product in (8) go to 0 as ω → ∞; the strategy is thus to find a way to get rid of these terms by finding an upper bound for any (β, σ).
When β j is far from µ j , we can use Proposition 1 to bound π j,ω (β j | σ)/g j (µ j ) in (8). But this does not work when β j is not far from µ j . In this case, we have to use a density (1/σ) f ((y i − x T i β)/σ) in π(β, σ | y), which is presumably close to 0 when β j is not far from µ j → ±∞, and bound above (1/σ) f ((y i − x T i β)/σ)/g j (µ j ). Note that we can also use a prior density with j ∈ C c . The proof is based on a decomposition of the parameter space into disjoint sets; on each of these sets, we are able to identify precisely which case applies. In the case where β j is not far from µ j , it is shown that the associated hyperplanes pass close to at most p − 1 non-conflicting sources (data points in the case of Gagnon et al. (2020a)), using that x i can be written as a linear combination of p other covariate vectors and the explicit form of the linear-regression model. The other non-conflicting data points are thus such that the densities (1/σ) f ((y i − x T i β)/σ) are close to 0. The argument is technical and essentially consists in isolating the cases where the parameters are such that the densities of conflicting sources are evaluated in the tails from those where the parameters are instead such that the densities of non-conflicting sources are evaluated in the tails. There is no reason to believe that the result does not hold for other regression models, and in particular, in the general regression framework presented in Section 1.1 including GLMs, perhaps under different assumptions. We believe that, even if the assumptions turn out to be different, they will be similar in essence.
The assumption that n + |C c | ≥ 2p − 1 + |C b | essentially ensures that the non-conflicting sources of information are dominant. It is a consequence of the following: when β j is not far from µ j , possibly |C b | non-conflicting sources are required to get rid of the terms g j (µ j ) −1 in (8); if the number of non-conflicting sources left is 2p − 1, then p − 1 of them may be close to hyperplanes such that β j is not far from µ j and, using the decomposition in Gagnon et al. (2020a), it is shown that the remaining p non-conflicting sources are sufficient to obtain an integrable function.
By looking at (6), we see that we get rid asymptotically of all the conflicting priors and no trace is left; the resulting limiting posterior distribution is that with improper Jeffreys priors π j (β j | σ) ∝ 1, for j ∈ C. We thus have a characterization of the limiting behaviour of the posterior distribution/density and estimates like maximum a posteriori probability (MAP) estimates and posterior medians. It is possible to show under additional mild assumptions that the posterior expectations and the joint posterior distribution of a model indicator and parameters in a context of variable selection converge as well. All these results thus characterize the limiting behaviour of a variety of Bayes estimators. Analogous results hold in the situations that are presented in Sections 3.3 and 3.4.

Partial information
Now, consider that we have set values for all µ j and λ j , and we know that C d = ∅. In practice, the situation is rather that a practitioner is confident that there will be no conflict due to small scalings. We now describe how to set the priors in this case and the limiting behaviour of the posterior distribution if the practitioner turns out to be right. Given that each of the priors on the regression coefficients is exposed to a potential conflict due to a prior location significantly different than that of the likelihood, we set g j = g LPTN for all j. The advantage here is that, because no CTN distribution is used, no adjustment on the prior of σ is required to yield, as in the previous section, a limiting posterior distribution without a trace of conflict and with improper Jeffreys priors π j (β j | σ) ∝ 1, for j ∈ C. The prior on σ is thus set to π( · ) and we assume that π(σ) is bounded above by a constant or a constant times 1/σ, for all σ > 0, as before.
Theorem 2. Assume that C d = ∅. Assume that for all j, g j = g LPTN . Assume that the prior on σ is π( · ) and that π(σ) ≤ max(M, σ −1 M) for all σ > 0. Assume that n + |C c | ≥ 2p − 1 + |C b |. Under the framework described in Section 3.1 (recall in particular the form of the prior distribution (1), the definition of the posterior distribution (5), and that µ j = a j + b j ω and λ j = c j + d j ω), and as ω → ∞, the posterior distribution converges: π ω ( · , · | y) → π( · , · | y), where π( · , · | y) is defined as in (6).
Theorem 2 is an adaptation of Theorem 1 in which it is considered that C d = ∅, which implies that C b = C. The proof of Theorem 2 is likewise an adaptation of that of Theorem 1. For the same reasons as those explained in Section 3.2, we thus believe that Theorem 2 holds in the general regression framework presented in Section 1.1 including GLMs, perhaps under different, yet similar, assumptions. A difference between Theorem 2 and Theorem 1 is that, because we do not know which of the priors will be in conflict (if any) and thus set all g j = g LPTN , the prior distributions in the limiting posterior for j ∈ C c are all LPTN distributions; in Theorem 1, they can be selected otherwise, provided that they are proper distributions with densities having tails no heavier than those of LPTN densities. Using LPTN prior distributions is perhaps not the first choice for a practitioner, but it comes with protection, as seen in Theorem 2.
It is possible to prove a result similar to Theorem 2 if we instead set g j to a Student distribution for each j. The difference is that the limiting posterior distribution is not defined as in (6). It is instead such that

π(β, σ | y) ∝ σ γ|C| π(σ) ∏ j∈C c π j (β j | σ) ∏ i=1,...,n (1/σ) f ((y i − x T i β)/σ),

reflecting that Student distributions asymptotically leave a trace in case of conflict, namely a factor σ γ for each of the conflicting priors, and thus that they only partially resolve conflicts due to significantly different locations.
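The σ γ trace left by a Student prior can be checked numerically: under the ω parametrization, the normalized conflicting prior π j,ω (β j | σ)/g j (µ j ) converges to λ j −γ σ γ as µ j → ±∞. The sketch below is a quick sanity check of that claim, not part of the paper's code.

```python
import math

def student_pdf(z, gamma):
    """Density of the Student distribution with gamma degrees of freedom."""
    c = math.exp(math.lgamma((gamma + 1.0) / 2.0) - math.lgamma(gamma / 2.0))
    c /= math.sqrt(gamma * math.pi)
    return c * (1.0 + z * z / gamma) ** (-(gamma + 1.0) / 2.0)

def student_prior_ratio(beta, sigma, lam, mu, gamma):
    """(lam/sigma) g(lam (beta - mu)/sigma) / g(mu), the normalized conflicting
    prior; its limit as mu -> infinity should be lam^(-gamma) sigma^gamma."""
    g = lambda z: student_pdf(z, gamma)
    return (lam / sigma) * g(lam * (beta - mu) / sigma) / g(mu)
```

For instance, with γ = 4, λ j = 2 and σ = 1.5, the ratio at µ j = 10^8 is already very close to 2^(−4) × 1.5^4, the σ-dependent trace described above.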

No information
In the last scenario, we consider that after setting all µ j and λ j , we have no reason to believe that these choices of locations and scalings will create conflicts, but we want to be protected in case it happens. We thus set g j = g CTN for all j to be prepared for all eventualities. As mentioned, the main disadvantage of using CTN priors is that they are improper: their densities do not integrate to a finite value, and each g j is multiplied by λ j /σ to yield π j,ω (β j | σ) (recall (1)). This implies that these densities cannot be used to integrate over β when verifying, for instance, that π ω ( · , · | y) is proper; the best that can be done is to bound them by a constant (that possibly depends on ω) multiplied by σ −p , and to use the (conditional) densities of Y 1 , . . . , Y n to integrate over β (requiring n ≥ p). Therefore, in order to obtain a proper posterior distribution, the prior on σ needs to be such that ∫ σ −p π(σ) dσ < ∞. The good news is that setting π( · ) such that σ 2 has an inverse-gamma distribution, as often done in practice (West, 1984; Raftery et al., 1997), implies that ∫ σ −p π(σ) dσ < ∞ for any p and any choice of shape and scale parameters for the inverse-gamma distribution.
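This integrability claim can be verified numerically. Below, the density of σ when σ 2 ∼ Inverse-Gamma(α, b) is integrated against σ −p and compared with the closed form Γ(α + p/2)/(Γ(α) b^(p/2)); the closed form and the specific values of α, b and p are ours, derived and chosen for this check.

```python
import math

def sigma_pdf(s, alpha=2.0, b=1.0):
    """Density of sigma when sigma^2 follows an Inverse-Gamma(alpha, b)."""
    return (2.0 * b ** alpha / math.gamma(alpha)
            * s ** (-2.0 * alpha - 1.0) * math.exp(-b / (s * s)))

def integral_sigma_minus_p(p, alpha=2.0, b=1.0, lo=1e-3, hi=50.0, n=200_000):
    """Midpoint-rule approximation of the integral of sigma^(-p) pi(sigma).
    The exp(-b/s^2) factor kills the singularity at 0, so the integral is
    finite for any p, matching the claim in the text."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        s = lo + (k + 0.5) * h
        total += s ** (-p) * sigma_pdf(s, alpha, b)
    return total * h
```

For example, with p = 5, α = 2 and b = 1, the numerical integral agrees with Γ(4.5)/Γ(2) to within a fraction of a percent.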
Theorem 3. Assume that for all j, g j = g CTN . Assume that the prior on σ is π( · ) and that it is such that ∫ σ −p π(σ) dσ < ∞. Assume that n ≥ p. Under the framework described in Section 3.1 (recall in particular the form of the prior distribution (1), the definition of the posterior distribution (5), and that µ j = a j + b j ω and λ j = c j + d j ω), and as ω → ∞, the posterior distribution converges: π ω ( · , · | y) → π( · , · | y), with

π(β, σ | y) ∝ σ −|C| π(σ) ∏ j∈C c π j (β j | σ) ∏ i=1,...,n (1/σ) f ((y i − x T i β)/σ).    (9)

The proof of Theorem 3 is much simpler than those of Theorems 1 and 2. While still using Lebesgue's dominated convergence theorem, the term that it is sufficient to bound by an integrable function of β and σ that does not depend on ω is, up to a constant, π(β, σ | y) itself, which is thus bounded by a function that does not depend on ω, contrarily to (8). The core of the proof of Theorem 3 is essentially devoted to proving that π( · , · | y) is proper (requiring n ≥ p and ∫ σ −p π(σ) dσ < ∞ in our case, as seen in Theorem 3). In other words, Theorem 3 holds for any of the regression models fitting in the general regression framework presented in Section 1.1, including GLMs, provided that the prior distribution exhibits a conditional-independence structure as in (1) and the resulting limiting posterior distribution (9) is proper. With Theorem 3, it is thus even clearer than with Theorems 1 and 2 that the result holds in the general regression framework presented in Section 1.1, perhaps under different, yet similar, assumptions.
As mentioned previously, a weakness of using CTN prior distributions is that a trace asymptotically remains in case of conflict, namely σ −|C| , as seen in (9). This is similar to what happens when using Student prior distributions (recall the discussion at the end of Section 3.3). However, there is an important difference: CTN prior distributions are effective against all types of conflicting situations, including those due to conflicting prior scalings, contrarily to Student prior distributions. Also, the degree of discrepancy between the resulting limiting posterior distribution and the ideal one (obtained in Theorems 1 and 2), measured through the exponent of σ, does not depend on the level of similarity between the CTN distributions used and the standard normal (measured through the parameter ϱ). With Student prior distributions, the degree of discrepancy between the resulting limiting posterior distribution and the ideal one depends on the degrees of freedom γ. Recall that we recommend not altering the prior of σ with the aim of correcting for a discrepancy, because we do not know |C| a priori and thus an adjustment can cause more harm than good.
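Under the reading that the CTN density matches the standard normal on [−κ, κ] and is constant (hence improper) beyond, with κ = Φ −1 ((1 + ϱ)/2), the σ −1 trace per conflicting prior can be checked directly. This form of g CTN is an assumption on our part, consistent with the properties used in this paper but not a definition taken from it.

```python
import math

PHI = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def ctn_pdf(z, kappa):
    """Assumed CTN form: standard normal core, constant tails (improper)."""
    return PHI(z) if abs(z) <= kappa else PHI(kappa)

def ctn_prior_ratio(beta, sigma, lam, mu, kappa):
    """(lam/sigma) g(lam (beta - mu)/sigma) / (lam g(kappa)): the quantity
    normalizing the conflicting CTN priors in (7) and (9)."""
    return ((lam / sigma) * ctn_pdf(lam * (beta - mu) / sigma, kappa)
            / (lam * ctn_pdf(kappa, kappa)))
```

Once the standardized argument exceeds κ, whether because λ j → ∞ (conflicting scaling) or because µ j → ±∞ (conflicting location), the ratio equals 1/σ exactly, which is the σ −1 factor per conflicting prior appearing in (9).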

Simulation study
A goal of this section is to show the impact of using an informative prior instead of a non-informative one, especially in the situation where the former is conflicting. Another goal is to identify suitable values for the hyperparameters of the heavy-tailed priors. We achieve all that through a simulation study; it suggests that γ = 4 degrees of freedom for Student prior distributions, ρ = 0.95 for LPTN prior distributions and ϱ = 0.98 for CTN prior distributions are suitable values. For the simulation study, we consider the normal-linear-regression framework, i.e. Y i = x T i β + σε i with f = N(0, 1). For the reasons mentioned in Section 3, we expect the results to be similar in other regression frameworks, such as with GLMs. To simplify, we consider that the covariates are orthogonal and that the variables are standardized, i.e. (1/n) ∑ i y i = 0 and (1/n) ∑ i y i 2 = 1, (1/n) ∑ i x i j = 0 and (1/n) ∑ i x i j 2 = 1 for all j (except for j = 1, for which (1/n) ∑ i x i1 = 1), and ∑ i x i j x i s = 0 for j ≠ s. Under this framework, the likelihood function exhibits a hierarchical and product form and is proportional to

σ −n exp(−∥y − ŷ∥ 2 2 /(2σ 2 )) ∏ j=1,...,p exp(−n(β̂ j − β j ) 2 /(2σ 2 )),

where ∥ · ∥ 2 is the Euclidean norm, ŷ := Xβ̂, X being the design matrix, and β̂ := (β̂ 1 , . . . , β̂ p ) T the ordinary-least-squares estimate, which in this case is such that β̂ j = (1/n) ∑ i x i j y i . With this likelihood form, setting any prior π j,ω ( · | σ) on β j with µ j = β̂ j yields the same marginal posterior distributions of the other coefficients regardless of the values of β̂ j and λ j , as long as ŷ is the same. To simplify, we consider that µ j = β̂ j for all coefficients except one, namely β 2 , which will be used to show the impact of different choices for µ 2 , λ 2 and g 2 to achieve our aforementioned goals. We also consider, to simplify, that β̂ = 0, so that the marginal posterior distribution of β 2 only depends on n (not on p and the covariate data points); we set n = 100.
It can be readily verified that n > 3 is sufficient to ensure a proper posterior distribution, even if the prior distributions of β 2 and σ are improper Jeffreys priors. This condition is satisfied in the simulation study, and thus to simplify, we set the prior on σ to the Jeffreys prior: π(σ) ∝ 1/σ. The non-informative Jeffreys prior on β 2 will serve as a benchmark, i.e. π 2,ω (β 2 | σ) ∝ 1, implying a posterior mean and variance of 0 and 1/(n − 3), respectively.
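The marginal posterior of β 2 in this setup can be approximated on a grid. The sketch below conditions on σ = 1 (a simplification we make only for illustration; the paper's experiments integrate over σ) and contrasts a normal prior with a Student prior under an extreme conflicting location, µ 2 = 10, beyond the range plotted in the figures.

```python
import math

def posterior_mean_beta2(mu2, lam2, log_g, n=100):
    """Grid approximation of E[beta_2 | y], conditionally on sigma = 1.
    Under the orthogonal standardized design with hat beta_2 = 0, the
    likelihood term for beta_2 is proportional to exp(-n beta_2^2 / 2);
    the prior density is lam2 * g(lam2 * (beta_2 - mu2))."""
    grid = [i / 2000.0 for i in range(-2000, 2001)]  # beta_2 in [-1, 1]
    logw = [-n * b * b / 2.0 + math.log(lam2) + log_g(lam2 * (b - mu2))
            for b in grid]
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]  # stabilized weights
    return sum(b * wi for b, wi in zip(grid, w)) / sum(w)

def log_normal(z):
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def log_student(z, gamma=4.0):
    return (math.lgamma((gamma + 1.0) / 2.0) - math.lgamma(gamma / 2.0)
            - 0.5 * math.log(gamma * math.pi)
            - (gamma + 1.0) / 2.0 * math.log(1.0 + z * z / gamma))
```

With µ 2 = 10 and λ 2 = 1, the normal prior pulls the posterior mean to about µ 2 /(n + 1) ≈ 0.099, while the Student prior leaves it much closer to the likelihood's location β̂ 2 = 0, illustrating the (partial) conflict resolution discussed in Section 3.3.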
We now describe the simulation study.
We present the results for 4 choices of informative g 2 : a standard normal distribution, a Student distribution, a LPTN distribution and a CTN distribution. We compare them with one another and to the non-informative prior.
• While keeping λ 2 fixed and equal to 1, we gradually increase µ 2 from 0 to 2. With this choice of λ 2 , when µ 2 = 0 the prior carries essentially the same information as the likelihood. We show the impact of more diffuse priors next. The results are presented in Figures 6 (a)-(b) and 7. Note that we observe similar results when considering a larger prior scaling, but we need to use an interval for µ 2 with a larger upper bound.
• While keeping µ 2 fixed and equal to 0.5 (to be able to appreciate a difference in location when the prior scaling conflicts), we gradually increase λ 2 from (nearly) 0 to 2. The results are presented in Figures 6 (c)-(d).

Figure 6 is used to compare the results produced by using different priors, while Figure 7 is used to show the impact of different choices of hyperparameters for the heavy-tailed priors, both in conflicting and non-conflicting situations. In Figure 6, we observe what has been explained before. Firstly, a Student prior resolves a conflict due to a prior location significantly different than that of the likelihood more slowly than a LPTN prior, i.e. the convergence towards the limiting posterior distribution is slower as µ 2 → ∞. Here, the limiting posterior resulting from a Student prior is not much different from that resulting from a LPTN prior; in both cases, the distribution of β 2 | σ, y is the same, but the distribution of σ | y is such that σ 2 follows an inverse-gamma distribution whose shape and scale parameters are (n − γ − 2)/2 = 47 and n/2, respectively, in the former case, and (n − 2)/2 = 49 and n/2, respectively, in the latter case. Similar arguments explain why, if we set g 2 to a CTN distribution and set the prior density of σ such that it is proportional to σπ(σ) to correct for the trace asymptotically left by a CTN prior distribution, we obtain essentially the same estimates and standard deviations as if we did not correct for this trace and instead set the prior on σ to π( · ) (the lines are on top of each other in Figure 6). In practice, one does the latter. Note that when we correct for that trace, we do it regardless of the values of µ 2 and λ 2 ; therefore, for some values, we should not correct because the situations are non-conflicting. The correction is needed in the asymptotic regime, which is something theoretical, explaining why we did not discriminate.
In Figure 6, we also observe that using a Student or a LPTN prior is ineffective at resolving a conflict due to extremely small scalings (represented by λ 2 → ∞), contrarily to using a CTN prior. A last point to note in Figure 6 is that, because the LPTN distribution is the most similar to the standard normal distribution among the heavy-tailed distributions presented, using a LPTN prior translates into the closest results to those produced by using a normal one when there is no conflict, but also into the largest impact in the "gray" area, i.e. in between no conflict and clear conflict.
In Figure 7, we observe that increasing the level of similarity between a heavy-tailed prior and a normal prior (controlled through γ, ρ and ϱ for the Student, LPTN and CTN prior distributions, respectively) increases the threshold at which the Bayesian model starts to detect that the prior is conflicting (i.e. the point beyond which the impact starts to decrease) and thus increases the impact on the posterior distribution and estimate at this threshold. Our simulation study suggests that γ = 4, ρ = 0.95 and ϱ = 0.98 offer a good balance between great similarity with the standard normal (and thus great similarity between the posterior distributions in the absence of conflict) and great capacity at detecting and resolving a conflict (due to a prior location significantly different than that of the likelihood for Student and LPTN priors). The impact of the hyperparameters when there is a conflict due to small scalings is not shown because showing it is relevant only for CTN prior distributions, and the impact is similar to when the conflict is due to a prior location significantly different than that of the likelihood.

[Figure 6 caption: results under the non-informative prior, i.e. π 2,ω (β 2 | σ) ∝ 1 (black line), and when g 2 is a standard normal (red line), a Student with γ = 4 degrees of freedom (orange line), a LPTN with ρ = 0.95 (dark green line), a CTN with ϱ = 0.98, and a CTN with ϱ = 0.98 but where the prior density of σ is proportional to σπ(σ) (both lines are light green, one dashed and one solid; they are on top of each other); here SD stands for standard deviation.]

Conclusion
In this paper, we characterized the impact of using heavy-tailed alternatives to normal prior distributions for regression coefficients. This was achieved through a theoretical analysis under an asymptotic regime in which a conflicting situation becomes extreme and through a simulation study, in Sections 3 and 4, respectively. The heavy-tailed alternatives are Student, LPTN and CTN prior distributions. With the results presented in hand, one is well equipped to decide which prior distributions to use for a Bayesian regression analysis. In summary, normal prior distributions can be used when one is confident that they will not be in conflict with the data to collect; otherwise, heavy-tailed alternatives should be employed. All heavy-tailed alternatives can be used in a situation of a potential conflict due to a prior location significantly different than that of the likelihood function. Using Student and CTN prior distributions has an impact on the posterior variability of all coefficients asymptotically as the conflict becomes extreme; the variability increases when using Student prior distributions, while it decreases when using CTN prior distributions. The impact is however small when the sample size is large relative to the number of conflicting prior densities; note that, for Student priors, this is only true when the degrees of freedom are small. When the priors on the regression coefficients are such that one is exposed to potential conflicts due to prior scalings, the heavy-tailed alternative that is recommended is the CTN distribution.
The theoretical analysis performed in Section 3 was under the framework of linear regression. While there is no reason to believe that the results do not hold under other regression frameworks, like with GLMs, it would be interesting to prove similar results under such frameworks to have a confirmation and to have access to precise statements describing the conditions under which the results hold.

Acknowledgements
The author acknowledges support from NSERC (Natural Sciences and Engineering Research Council of Canada) and FRQNT (Le Fonds de recherche du Québec -Nature et technologies). Also, the author thanks two anonymous referees and an associate editor for helpful suggestions that led to an improved manuscript.

A Proofs
The proof of Proposition 1 can be found in Desgagné (2015). In this section, we present the proofs of Theorems 1 and 3. The proof of Theorem 2 is an adaptation of that of Theorem 1 where we consider that C d = ∅ and is thus omitted.
Proof of Theorem 1. First, we prove that m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] → m(y) as ω → ∞, with m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] < ∞ and m(y) < ∞. Next, we prove that the posterior density converges pointwise. Finally, we prove the convergence of the posterior distribution.
Assume for now that m ω (y) < ∞ and m(y) < ∞; this will be shown later. We first observe that m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ) m(y)] can be written as an integral over β and σ, using the definition of the posterior density in (5). We show that this integral converges towards 1 as ω → ∞. If we use Lebesgue's dominated convergence theorem to interchange the limit ω → ∞ and the integral, we have, by Proposition 1, the definition of the CTN density (see, e.g., (4) in the manuscript) and the fact that the limiting posterior density is proper (which will be proven later), that the limit is ∫∫ π(β, σ | y) dσ dβ = 1. Note that pointwise convergence is sufficient, for any value of β ∈ R p and σ > 0, once the limit is inside the integral. Note also that we do not have convergence on the set ∪ j {β j : j ∈ C d and β j = µ j }, but this set has null measure so it does not affect the integral and the limit. In order to use Lebesgue's dominated convergence theorem, we need to prove that the integrand is bounded above by an integrable function of β and σ that does not depend on ω. Under the framework described in Section 3.1, we have that all g j are bounded and strictly positive; therefore, we can choose the constant M such that the ratios involving g j are bounded by M for all j ∈ C d . The integrand is then bounded above by a constant times π LPTN (β, σ | y), using that, for j ∈ C c , g j /g LPTN ≤ M and that we can choose the constant M such that λ j = c j ≤ M, and, finally, that for j ∈ C b , g j = g LPTN and the constant M can be chosen accordingly, assuming that m LPTN (y) < ∞ (we prove this below). We can prove that π LPTN (β, σ | y) is bounded above by an integrable function of β and σ that does not depend on ω in the same way that it is done in the proof of Theorem 2.1, Result (a), in Gagnon et al. (2020b), because the function above represents a special case of that in that proof. In Gagnon et al. (2020a), the theoretical result is about the convergence of the posterior distribution as some y j → ±∞ in a context of robustness against outliers. It is considered that the joint prior on all parameters is non-conflicting and that it is bounded above by max(M, σ −1 M).
To fit within the framework of Gagnon et al. (2020a), we treat λ j µ j for j ∈ C c ∪ C b as an observation from the dependent variable and λ j , which multiplies β j , for j in the same set as a vector of covariates where the other covariates are all equal to 0. There is no problem with the fact that the first component of these vectors is not 1. The sample size in the framework of Gagnon et al. (2020a) thus corresponds to n + |C c | + |C b | here.
What allows us to exploit the proof in Gagnon et al. (2020b) is that the assumptions of Theorem 2.1 are verified. It is readily seen that the following allows us to verify the assumptions: • the densities of all "observations" (including the prior distributions with j ∈ C c ∪ C b ) are LPTN; • π(σ) ≤ max(M, σ −1 M). This concludes the proof of the characterization of the asymptotic behaviour of m ω (y), assuming that m LPTN (y) < ∞, m(y) < ∞ and m ω (y) < ∞ for all ω.
We now show that, under the conditions above, m LPTN (y) < ∞, which will be seen to imply that m(y) < ∞ and m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] < ∞ (which in turn implies that m ω (y) < ∞ for all ω). We proceed as follows: first we show that m(y) is bounded above by a constant times m LPTN (y); next we show that m LPTN (y) < ∞. This will allow us to conclude that m ω (y)/[∏ j∈C b g j (µ j ) ∏ j∈C d λ j g j (κ)] < ∞ because we will have shown that m ω (y) is bounded above by a constant times an integral of an integrable function.
We have that m(y) is bounded above by a constant times m LPTN (y), using the same arguments as above. Proving that m LPTN (y) is finite is done in the same way as in the proof of Proposition 2.1 in Gagnon et al. (2020b), because the integrand above represents a special case of that in Gagnon et al. (2020b). As previously, what allows us to exploit the proof in Gagnon et al. (2020b) is that the assumptions of Proposition 2.1 are verified. It is readily seen that the following allows us to verify the assumptions: • the densities of all "observations" (including the prior distributions with j ∈ C c ∪ C b ) are LPTN; • π(σ) ≤ max(M, σ −1 M); • n + |C c | ≥ 2p − 1 + |C b |, implying that n + |C c | ≥ p + 1.
We now prove that the posterior density converges pointwise. We have that, for any β ∈ R p and σ > 0, π ω (β, σ | y) → π(β, σ | y), using Proposition 1, the definition of the CTN density (see, e.g., (4) in the manuscript) and the asymptotic behaviour of the marginal density, except on the set ∪ j {β j : j ∈ C d and β j = µ j }. On this set, the limit of the ratio involving the CTN density is different for some j; therefore, the limiting value for π ω (β, σ | y) on this set is π(β, σ | y) times a factor. This concludes the proof that the posterior density converges pointwise, except on a set of null measure. Now that we know that the posterior density converges pointwise (except on a set of null measure), the convergence of the posterior distribution follows directly using Scheffé's theorem (see Scheffé (1947)). ■

Proof of Theorem 3. We proceed as in the previous proof: we write m ω (y)/[∏ j∈C λ j g j (κ) m(y)] as an integral over β and σ, and we show that this integral converges towards 1 as ω → ∞. If we use Lebesgue's dominated convergence theorem to interchange the limit ω → ∞ and the integral, we have that the limit is ∫∫ π(β, σ | y) dσ dβ = 1, using the definition of the CTN density (see, e.g., (4) in the manuscript) and the fact that the limiting posterior distribution is proper (which will be proven later). Note that pointwise convergence is sufficient, for any value of β ∈ R p and σ > 0, once the limit is inside the integral. Note also that we do not have convergence on the set ∪ j {β j : j ∈ C d and β j = µ j }, but this set has null measure so it does not affect the integral and the limit.
In order to use Lebesgue's dominated convergence theorem, we need to prove that the integrand is bounded above by an integrable function of β and σ that does not depend on ω. The proof under the framework of Theorem 3 is easier than that under the framework of Theorem 1 and does not rely on the proof of Theorem 2.1 in Gagnon et al. (2020b). Under the framework described in Section 3.1 and by the definition of the CTN density, the integrand is bounded above by a constant times π(β, σ | y), because g CTN is bounded from above and it is strictly positive. There thus only remains to prove that π( · , · | y) is proper, which will imply that m ω (y) < ∞ for all ω. Indeed, to prove that π( · , · | y) is proper, we prove that m(y) < ∞; we will thus have shown that m ω (y)/[∏ j∈C λ j g j (κ)] is bounded above by a constant times an integral of an integrable function and that m(y) < ∞.
We prove that m(y) < ∞ similarly as in the proof of Proposition 2.1 in Gagnon et al. (2020b). We first bound the prior densities with j ∈ C c by a constant times 1/σ, because we can choose M such that λ j = c j ≤ M for all j ∈ C c ; recall that for j ∈ C c , λ j = c j . We now prove that the resulting integral is finite. We first show that the function is integrable on an area where the ratio 1/σ is bounded. More precisely, we consider β ∈ R p and δM −1 ≤ σ < ∞, where δ is a positive constant that can be chosen as small as we want (upper bounds are provided in the proof). We next show that the function is integrable on the complement set where the ratio 1/σ approaches infinity, that is 0 < σ < δM −1 . On the first area, we proceed in three steps. In Step a, we bound each of n − p densities f by M, requiring that n ≥ p, and p + n − p = n factors σ −1 using σ −1 ≤ δ −1 M. In Step b, we use that [δM −1 , ∞) ⊂ R and the change of variables u i = (y i − x T i β)/σ for i = 1, . . . , p; the determinant is non-null because all explanatory variables are continuous. In Step c, we use that ∫ 0 ∞ σ −p π(σ) dσ < ∞ and that f is a proper density. We now show that the integral is finite on β ∈ R p and 0 < σ < δM −1 . On this area, the ratio 1/σ approaches infinity. We have to carefully analyse the sub-areas where the terms y i − x T i β are close to 0 in order to deal with the 0/0 form of the ratios (y i − x T i β)/σ. To achieve this, we split the domain of β according to how many of the sets R i := {β : |y i − x T i β| < δ}, i ∈ {1, . . . , n}, contain β, over distinct indices i 1 , . . . , i p . The set R i represents the hyperplanes characterized by the different values of β that satisfy |y i − x T i β| < δ; in other words, it represents the hyperplanes passing near the point (x i , y i ), and more precisely, at a vertical distance of less than δ. The set ∩ i 1 R c i 1 is therefore comprised of the hyperplanes that are not passing close to any point. The set ∪ i 1 (R i 1 ∩ (∩ i 2 ≠ i 1 R c i 2 )) represents the hyperplanes passing near one (and only one) point, and so on.