We consider quantile multiple regression through conditional quantile models, i.e. each quantile is modeled separately. We work in the context of spatially referenced data and extend the asymmetric Laplace model for quantile regression to a spatial process, the asymmetric Laplace process (ALP) for quantile regression with spatially dependent errors. By taking advantage of a convenient conditionally Gaussian representation of the asymmetric Laplace distribution, we are able to straightforwardly incorporate spatial dependence in this process. We develop the properties of this process under several specifications, each of which induces different smoothness and covariance behavior at the extreme quantiles.
We demonstrate the advantages that may be gained by incorporating spatial dependence into this conditional quantile model by applying it to a data set of log selling prices of homes in Baton Rouge, LA, given characteristics of each house. We also introduce the asymmetric Laplace predictive process (ALPP) which accommodates large data sets, and apply it to a data set of birth weights given maternal covariates for several thousand births in North Carolina in 2000. By modeling the spatial structure in the data, we are able to show, using a check loss function, improved performance on each of the data sets for each of the quantiles at which the model was fit.
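The conditionally Gaussian representation of the asymmetric Laplace distribution that the model exploits, together with the check loss used for evaluation, can be sketched numerically. This is a minimal illustration (location fixed at 0, scale at 1), not the authors' code; the p-th quantile of AL(p) equals its location, which is what makes the distribution a working likelihood for quantile regression.

```python
import numpy as np

def sample_al(p, size, rng):
    """Draw from the asymmetric Laplace AL(p) (location 0, scale 1) via its
    conditionally Gaussian exponential-scale-mixture representation:
        y = theta * z + tau * sqrt(z) * u,  z ~ Exp(1),  u ~ N(0, 1),
    with theta = (1 - 2p) / (p(1 - p)) and tau^2 = 2 / (p(1 - p))."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = np.sqrt(2 / (p * (1 - p)))
    z = rng.exponential(size=size)
    u = rng.standard_normal(size)
    return theta * z + tau * np.sqrt(z) * u

def check_loss(u, p):
    """Quantile check loss rho_p(u) = u * (p - 1{u < 0})."""
    return u * (p - (u < 0))

rng = np.random.default_rng(0)
y = sample_al(0.25, 200_000, rng)
print(np.quantile(y, 0.25))  # close to 0: the location is the 0.25-quantile
```

Conditionally on z, y is Gaussian, which is what lets spatial dependence be introduced through the Gaussian component.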
We study the support properties of Dirichlet process-based models for sets of predictor-dependent probability distributions. Exploiting the connection between copulas and stochastic processes, we provide an alternative definition of MacEachern's dependent Dirichlet processes. Based on this definition, we provide sufficient conditions for the full weak support of different versions of the process. In particular, we show that under mild conditions on the copula functions, the versions where only the support points or only the weights depend on predictors have full weak support. In addition, we characterize the Hellinger and Kullback-Leibler support of mixtures induced by the different versions of the dependent Dirichlet process. A generalization of the results to the general class of dependent stick-breaking processes is also provided.
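One of the versions discussed, where only the support points depend on predictors (often called the "single-weights" dependent Dirichlet process), can be sketched by pairing common stick-breaking weights with Gaussian-process atom paths. The squared-exponential kernel and the truncation level below are illustrative choices, not part of the paper's construction.

```python
import numpy as np

def single_weights_ddp(x_grid, alpha, n_atoms, rng, length_scale=0.3):
    """Truncated draw from a 'single-weights' dependent Dirichlet process:
    stick-breaking weights w_k shared across predictor values, while each
    support point theta_k(.) varies with the predictor as a Gaussian-process
    path (squared-exponential kernel; jitter added for numerical stability)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    d2 = (x_grid[:, None] - x_grid[None, :]) ** 2
    K = np.exp(-0.5 * d2 / length_scale**2) + 1e-6 * np.eye(len(x_grid))
    L = np.linalg.cholesky(K)
    theta = L @ rng.standard_normal((len(x_grid), n_atoms))  # GP atom paths
    return w, theta  # F_x = sum_k w_k * delta_{theta_k(x)} at each x

x = np.linspace(0.0, 1.0, 20)
w, theta = single_weights_ddp(x, alpha=2.0, n_atoms=50,
                              rng=np.random.default_rng(5))
print(w.sum())  # close to 1 under truncation at 50 atoms
```

Replacing the GP atom paths with fixed atoms and predictor-dependent weights gives the complementary "single-atoms" version covered by the same support results.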
In this paper we derive adaptive non-parametric rates of concentration of the posterior distributions for the density model on the class of Sobolev and Besov spaces. For this purpose, we build prior models based on wavelet or Fourier expansions of the logarithm of the density. The prior models are not necessarily Gaussian.
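The flavor of such a prior can be illustrated by drawing a random density whose log is a finite Fourier (cosine) expansion with non-Gaussian coefficients. The basis, the Laplace coefficient law, and the polynomial scale decay below are assumed for illustration; they are not the authors' exact specification.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal quadrature on a grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def log_density_draw(n_terms, rng, smooth=1.5):
    """One draw from an illustrative prior on densities over [0, 1]:
    f(x) proportional to exp(sum_j beta_j * phi_j(x)) with a cosine basis
    and non-Gaussian (Laplace) coefficients whose scale decays like
    j^{-smooth}, which controls the regularity of log f."""
    x = np.linspace(0.0, 1.0, 2001)
    scales = np.arange(1, n_terms + 1, dtype=float) ** (-smooth)
    beta = rng.laplace(scale=scales)
    basis = np.sqrt(2.0) * np.cos(np.pi * np.outer(np.arange(1, n_terms + 1), x))
    log_f = beta @ basis
    f = np.exp(log_f - log_f.max())
    f /= trapezoid(f, x)  # normalize: f now integrates to 1
    return x, f

x, f = log_density_draw(25, np.random.default_rng(1))
print(trapezoid(f, x))  # 1.0 up to quadrature rounding
```

Modeling the logarithm of the density sidesteps positivity and normalization constraints on the expansion itself.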
We propose a general inference framework for marked Poisson processes observed over time or space. Our modeling approach exploits the connection of nonhomogeneous Poisson process intensity with a density function. Nonparametric Dirichlet process mixtures for this density, combined with nonparametric or semiparametric modeling for the mark distribution, yield flexible prior models for the marked Poisson process. In particular, we focus on fully nonparametric model formulations that build the mark density and intensity function from a joint nonparametric mixture, and provide guidelines for straightforward application of these techniques. A key feature of such models is that they can yield flexible inference about the conditional distribution for multivariate marks without requiring specification of a complicated dependence scheme. We address issues relating to choice of the Dirichlet process mixture kernels, and develop methods for prior specification and posterior simulation for full inference about functionals of the marked Poisson process. Moreover, we discuss a method for model checking that can be used to assess and compare goodness of fit of different model specifications under the proposed framework. The methodology is illustrated with simulated and real data sets.
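The intensity/density connection the framework exploits, lambda(t) = gamma * f(t) with gamma the expected total count, can be demonstrated by simulating a nonhomogeneous Poisson process via thinning. The triangular density below is an illustrative stand-in for a Dirichlet process mixture density, not the paper's model.

```python
import numpy as np

def sample_nhpp_thinning(intensity, lam_max, T, rng):
    """Simulate a nonhomogeneous Poisson process on [0, T] by thinning:
    draw homogeneous Poisson(lam_max) candidates, then keep each candidate
    t with probability intensity(t) / lam_max."""
    n = rng.poisson(lam_max * T)
    cand = rng.uniform(0.0, T, size=n)
    keep = rng.uniform(size=n) < intensity(cand) / lam_max
    return np.sort(cand[keep])

# Intensity factored as (total mass) x (density): lambda(t) = gamma * f(t).
gamma = 50.0
f = lambda t: 2.0 * t            # triangular density on [0, 1]
lam = lambda t: gamma * f(t)
pts = sample_nhpp_thinning(lam, lam_max=100.0, T=1.0,
                           rng=np.random.default_rng(2))
print(len(pts))  # random; the expected count is gamma = 50
```

In the marked-process setting, each retained point would additionally carry a mark drawn from the (possibly point-dependent) mark distribution.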
We consider small area estimation under a nested error linear regression model with measurement errors in the covariates. We propose an objective Bayesian analysis of the model to estimate the finite population means of the small areas. In particular, we derive Jeffreys' prior for the model parameters and show that this prior, although improper, leads under very general conditions to a proper posterior distribution. We also perform a simulation study comparing the Bayes estimates of the finite population means under Jeffreys' prior with Bayesian estimates obtained under the standard flat prior and with non-Bayesian estimates, namely the corresponding empirical Bayes estimates and the direct estimates.
We deal with Bayesian model selection for beta autoregressive processes. We discuss the choice of parameter and model priors with possible parameter restrictions, and suggest a reversible jump Markov chain Monte Carlo (RJMCMC) procedure based on a Metropolis-Hastings within Gibbs algorithm.
An important issue involved in group decision making is the suitable aggregation of experts’ beliefs about a parameter of interest. Two widely used combination methods are linear and log-linear pools. Yet, a problem arises when the weights have to be selected. This paper provides a general decision-based procedure to obtain the weights in a log-linear pooled prior distribution. The process is based on Kullback-Leibler divergence, which is used as a calibration tool. No information about the parameter of interest is considered before dealing with the experts’ beliefs. Then, a pooled prior distribution is achieved, for which the expected calibration is the best one in the Kullback-Leibler sense. In the absence of other information available to the decision-maker prior to getting experimental data, the methodology generally leads to selection of the most diffuse pooled prior. In most cases, a problem arises from the marginal distribution related to the noninformative prior distribution since it is improper. In these cases, an alternative procedure is proposed. Finally, two applications show how the proposed techniques can be easily applied in practice.
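The log-linear pool at the heart of the procedure is easy to compute on a grid, with Kullback-Leibler divergence available as the calibration measure. The two normal expert priors and the weights below are hypothetical inputs chosen for illustration.

```python
import numpy as np

def log_linear_pool(densities, weights, x):
    """Log-linear (geometric) opinion pool on a grid:
    pooled(theta) proportional to prod_i p_i(theta)^{w_i}, renormalized."""
    log_pool = sum(w * np.log(p) for w, p in zip(weights, densities))
    pool = np.exp(log_pool - log_pool.max())
    dx = x[1] - x[0]
    return pool / (pool.sum() * dx)

def kl_divergence(p, q, x):
    """Kullback-Leibler divergence KL(p || q) on a grid."""
    dx = x[1] - x[0]
    return float(np.sum(p * np.log(p / q)) * dx)

# Two hypothetical expert priors (normals) and illustrative weights.
x = np.linspace(-10.0, 10.0, 4001)
normal = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
experts = [normal(-1.0, 1.0), normal(2.0, 2.0)]
pooled = log_linear_pool(experts, [0.6, 0.4], x)
print(kl_divergence(pooled, experts[0], x),
      kl_divergence(pooled, experts[1], x))
```

Selecting the weights then amounts to optimizing such divergences, rather than fixing them a priori.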
The beta-Bernoulli process provides a Bayesian nonparametric prior for models involving collections of binary-valued features. A draw from the beta process yields an infinite collection of probabilities in the unit interval, and a draw from the Bernoulli process turns these into binary-valued features. Recent work has provided stick-breaking representations for the beta process analogous to the well-known stick-breaking representation for the Dirichlet process. We derive one such stick-breaking representation directly from the characterization of the beta process as a completely random measure. This approach motivates a three-parameter generalization of the beta process, and we study the power laws that can be obtained from this generalized beta process. We present a posterior inference algorithm for the beta-Bernoulli process that exploits the stick-breaking representation, and we present experimental results for a discrete factor-analysis model.
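A feel for such stick-breaking constructions is given by the well-known special case with concentration c = 1 (the Indian buffet process stick-breaking of Teh et al.), where atom probabilities are running products of Beta(alpha, 1) sticks. The general representation derived from the completely-random-measure characterization, and its three-parameter extension, are more involved and not shown here.

```python
import numpy as np

def beta_bernoulli_features(alpha, n_items, n_atoms, rng):
    """Truncated stick-breaking sketch of the beta-Bernoulli process in the
    special case c = 1: atom probabilities mu_k = prod_{j<=k} v_j with
    v_j ~ Beta(alpha, 1); item i includes feature k with probability mu_k."""
    v = rng.beta(alpha, 1.0, size=n_atoms)
    mu = np.cumprod(v)  # decreasing feature probabilities in (0, 1]
    Z = (rng.uniform(size=(n_items, n_atoms)) < mu).astype(int)
    return mu, Z

mu, Z = beta_bernoulli_features(alpha=3.0, n_items=10, n_atoms=200,
                                rng=np.random.default_rng(3))
print(Z.sum(axis=1).mean())  # expected number of features per item is alpha
```

The decreasing atom probabilities are what make truncation practical in posterior inference.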
Using a collection of simulated and real benchmarks, we compare Bayesian and frequentist regularization approaches in a poorly informative setting where the number of variables is almost equal to the number of observations. This comparison includes new global noninformative approaches for Bayesian variable selection built on Zellner's g-priors, similar to those of Liang et al. (2008). The interest of these calibration-free proposals is discussed. The numerical experiments we present highlight the appeal of Bayesian regularization methods compared with non-Bayesian alternatives: they dominate frequentist methods in the sense that they provide smaller prediction errors while selecting the most relevant variables in a parsimonious way.
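A standard property of Zellner's g-prior that makes such comparisons tractable is that, with the prior centered at zero, the posterior mean of the coefficients is the least-squares estimate shrunk by g / (1 + g). The data below are hypothetical; this sketches the shrinkage fact, not the paper's calibration-free procedures.

```python
import numpy as np

def g_prior_posterior_mean(X, y, g):
    """Posterior mean of the regression coefficients under Zellner's g-prior,
    beta | sigma^2 ~ N(0, g * sigma^2 * (X'X)^{-1}): the least-squares
    estimate shrunk toward zero by the factor g / (1 + g)."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (g / (1.0 + g)) * beta_ols

# Hypothetical data: 100 observations, 3 covariates.
rng = np.random.default_rng(4)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, 0.0, -2.0]) + rng.standard_normal(100)
post_mean = g_prior_posterior_mean(X, y, g=100.0)
print(post_mean)  # the OLS fit shrunk by 100/101
```

Large g recovers least squares, small g shrinks aggressively, which is why the choice (or mixing) of g is the crux of such priors.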