## Bayesian Analysis

### Mixture Modeling on Related Samples by $\psi$-Stick Breaking and Kernel Perturbation

#### Abstract

There has been great interest recently in applying nonparametric kernel mixtures hierarchically to model multiple related data samples jointly. In such settings several data features are commonly present: (i) the related samples often share some, if not all, of the mixture components but with differing weights; (ii) only some, not all, of the mixture components vary across the samples; and (iii) the shared mixture components are often not aligned perfectly across samples in their kernel parameters, such as the location and spread of Gaussian kernels, but instead display small misalignments, either due to systematic cross-sample differences or, more often, due to uncontrolled, extraneous causes. Properly incorporating these features in mixture modeling enhances the efficiency of inference, whereas ignoring them not only reduces efficiency but can jeopardize the validity of the inference through issues such as confounding. We propose two techniques for incorporating these features when modeling related data samples using kernel mixtures. The first, called $\psi$-stick breaking, is a joint generative process for the mixing weights that breaks both a stick shared by all the samples, for the components whose sizes do not vary across samples, and an idiosyncratic stick for each sample, for the components whose sizes do vary. The second technique introduces random perturbations into the kernels, thereby accounting for cross-sample misalignment. These techniques can be used separately or together, in both parametric and nonparametric kernel mixtures. We derive efficient Bayesian inference recipes based on Markov chain Monte Carlo (MCMC) sampling for models featuring these techniques, and illustrate their use on both simulated data and a real flow cytometry data set, for prediction/estimation and for testing multi-sample differences.
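The two modeling ideas in the abstract can be sketched generatively. The following is a minimal, hypothetical Python illustration, not the paper's exact construction: a truncated stick-breaking draw in which each break is either shared by all samples or idiosyncratic to each sample (so only some component weights differ across samples), plus Gaussian jitter on component locations to mimic kernel perturbation. The function names, the Bernoulli sharing probability `psi`, and the jitter scale `tau` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_stick_weights(n_samples, K, alpha=1.0, psi=0.5):
    """Truncated stick-breaking over K components for n_samples related
    samples.  Each break is either 'shared' (one Beta(1, alpha) draw used
    by all samples, with probability psi) or 'idiosyncratic' (an
    independent Beta(1, alpha) draw per sample), so only some component
    weights vary across the samples."""
    shared = rng.random(K) < psi                      # which breaks are shared
    v_shared = rng.beta(1.0, alpha, size=K)
    v_idio = rng.beta(1.0, alpha, size=(n_samples, K))
    v = np.where(shared, v_shared, v_idio)            # (n_samples, K) breaks
    v[:, -1] = 1.0                                    # close stick at truncation
    # weight_k = v_k * prod_{j<k} (1 - v_j), per sample
    remaining = np.cumprod(np.concatenate(
        [np.ones((n_samples, 1)), 1.0 - v[:, :-1]], axis=1), axis=1)
    return v * remaining                              # rows sum to 1

def perturbed_means(mu, n_samples, tau=0.1):
    """Kernel perturbation: jitter each component location by a small
    sample-specific Gaussian offset, mimicking cross-sample misalignment
    of otherwise shared kernels."""
    return mu[None, :] + rng.normal(0.0, tau, size=(n_samples, mu.size))

w = psi_stick_weights(n_samples=3, K=20)
mu = perturbed_means(np.linspace(-3.0, 3.0, 20), n_samples=3)
```

In this sketch the shared breaks make the corresponding weight contributions identical across samples, while the idiosyncratic breaks let the remaining weights differ; the perturbed means keep each component near a common location without forcing exact alignment.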

#### Article information

Source
Bayesian Anal., Volume 14, Number 1 (2019), 161–180.

Dates
First available in Project Euclid: 19 April 2018

https://projecteuclid.org/euclid.ba/1524124868

Digital Object Identifier
doi:10.1214/18-BA1106

Subjects
Primary: 62F15: Bayesian inference; 62G99: None of the above, but in this section
Secondary: 62G07: Density estimation

#### Citation

Soriano, Jacopo; Ma, Li. Mixture Modeling on Related Samples by $\psi$-Stick Breaking and Kernel Perturbation. Bayesian Anal. 14 (2019), no. 1, 161–180. doi:10.1214/18-BA1106. https://projecteuclid.org/euclid.ba/1524124868

#### References

• Camerlenghi, F., Dunson, D. B., Lijoi, A., Prünster, I., and Rodríguez, A. (2018). “Latent nested nonparametric priors.” arXiv preprint arXiv:1801.05048.
• Chan, C., Feng, F., Ottinger, J., Foster, D., West, M., and Kepler, T. B. (2008). “Statistical mixture modeling for cell subtype identification in flow cytometry.” Cytometry Part A, 73(8): 693–701.
• Cron, A., Gouttefangeas, C., Frelinger, J., Lin, L., Singh, S. K., Britten, C. M., Welters, M. J., van der Burg, S. H., West, M., and Chan, C. (2013). “Hierarchical modeling for rare event detection and cell subset alignment across flow cytometry samples.” PLoS Computational Biology, 9(7): e1003130.
• Dunson, D. B. (2009). “Nonparametric Bayes local partition models for random effects.” Biometrika, 96(2): 249–262. URL http://www.jstor.org/stable/27798822
• Escobar, M. D. and West, M. (1995). “Bayesian Density Estimation and Inference Using Mixtures.” Journal of the American Statistical Association, 90(430): 577–588.
• Ferguson, T. S. (1973). “A Bayesian Analysis of Some Nonparametric Problems.” Annals of Statistics, 1(2): 209–230. URL http://dx.doi.org/10.1214/aos/1176342360
• Fraley, C. and Raftery, A. E. (2002). “Model-Based Clustering, Discriminant Analysis, and Density Estimation.” Journal of the American Statistical Association, 97(458): 611–631. URL http://www.jstor.org/stable/3085676
• Green, P. J. and Richardson, S. (2001). “Modelling heterogeneity with and without the Dirichlet process.” Scandinavian Journal of Statistics, 28(2): 355–375.
• Ishwaran, H. and James, L. F. (2001). “Gibbs sampling methods for stick-breaking priors.” Journal of the American Statistical Association, 96(453): 161–173.
• Ishwaran, H. and Zarepour, M. (2002). “Exact and approximate sum representations for the Dirichlet process.” Canadian Journal of Statistics, 30(2): 269–283.
• Jara, A., Hanson, T. E., Quintana, F. A., Müller, P., and Rosner, G. L. (2011). “DPpackage: Bayesian semi- and nonparametric modeling in R.” Journal of Statistical Software, 40(5): 1.
• Kingman, J. F. (1975). “Random discrete distributions.” Journal of the Royal Statistical Society. Series B (Methodological), 37(1): 1–22.
• Kurihara, K., Welling, M., and Teh, Y. W. (2007). “Collapsed Variational Dirichlet Process Mixture Models.” In IJCAI, volume 7, 2796–2801.
• Lock, E. F. and Dunson, D. B. (2013). “Bayesian consensus clustering.” Bioinformatics, 29(20): 2610–2616. URL http://dx.doi.org/10.1093/bioinformatics/btt425
• Lopes, H. F., Müller, P., and Rosner, G. L. (2003). “Bayesian Meta-analysis for Longitudinal Data Models Using Multivariate Mixture Priors.” Biometrics, 59(1): 66–75.
• MacEachern, S. N. (2008). “Discussion of ‘The nested Dirichlet process’ by A. E. Gelfand, D. B. Dunson and A. Rodriguez.” Journal of the American Statistical Association, 103: 1149–1151.
• MacEachern, S. N. and Müller, P. (1998). “Estimating Mixture of Dirichlet Process Models.” Journal of Computational and Graphical Statistics, 7(2): 223–238.
• Marin, J.-M., Mengersen, K., and Robert, C. P. (2005). “Bayesian Modelling and Inference on Mixtures of Distributions.” In Dey, D. and Rao, C. (eds.), Bayesian Thinking: Modeling and Computation, volume 25 of Handbook of Statistics, 459–507. Elsevier.
• Muliere, P. and Tardella, L. (1998). “Approximating distributions of random functionals of Ferguson-Dirichlet priors.” Canadian Journal of Statistics, 26(2): 283–297.
• Müller, P., Quintana, F., and Rosner, G. (2004). “A method for combining inference across related nonparametric Bayesian models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66(3): 735–749.
• Neal, R. M. (2000). “Markov chain sampling methods for Dirichlet process mixture models.” Journal of Computational and Graphical Statistics, 9(2): 249–265.
• Pitman, J. and Yor, M. (1997). “The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator.” The Annals of Probability, 25(2): 855–900.
• Rodriguez, A. and Dunson, D. B. (2014). “Functional clustering in nested designs: Modeling variability in reproductive epidemiology studies.” Annals of Applied Statistics, 8(3): 1416–1442.
• Rodríguez, A., Dunson, D. B., and Gelfand, A. E. (2008). “The Nested Dirichlet Process.” Journal of the American Statistical Association, 103(483): 1131–1154.
• Sethuraman, J. (1994). “A Constructive Definition of Dirichlet Priors.” Statistica Sinica, 4: 639–650.
• Soriano, J. and Ma, L. (2019). “Supplementary Materials for “Mixture modeling on related samples by $\psi$-stick breaking and kernel perturbation”.” Bayesian Analysis.
• Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). “Hierarchical Dirichlet processes.” Journal of the American Statistical Association, 101(476): 1566–1581.