Open Access
Learning mixtures of Bernoulli templates by two-round EM with performance guarantee
Adrian Barbu, Tianfu Wu, Ying Nian Wu
Electron. J. Statist. 8(2): 3004-3030 (2014). DOI: 10.1214/14-EJS981

Abstract

Dasgupta and Schulman [1] showed that a two-round variant of the EM algorithm can learn a mixture of Gaussian distributions with near-optimal precision with high probability, provided the Gaussian distributions are well separated and the dimension is sufficiently high. In this paper, we generalize their theory to learning mixtures of high-dimensional Bernoulli templates. Each template is a binary vector, and a template generates examples by independently switching each of its binary components with a certain probability. In computer vision applications, a binary vector is a feature map of an image, where each binary component indicates whether a local feature or structure is present or absent within a certain cell of the image domain. A Bernoulli template can thus be viewed as a statistical model for images of objects (or parts of objects) from the same category. We show that the two-round EM algorithm can learn a mixture of Bernoulli templates with near-optimal precision with high probability, provided the Bernoulli templates are sufficiently different and the number of features is sufficiently high. We illustrate the theoretical results with synthetic and real examples.
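The abstract fully specifies the generative model, so a short sketch may help make it concrete. The following Python code is illustrative only, with assumed names and parameters (K templates of n binary features, flip probability q, mixing weights pi): the sampler follows the model as described, while the EM step is the standard update for a Bernoulli mixture, not the paper's two-round procedure, which (following Dasgupta and Schulman) adds particular initialization and pruning steps.

import numpy as np

def sample_bernoulli_mixture(templates, weights, q, m, seed=None):
    # Draw m examples from a mixture of Bernoulli templates.
    # templates: (K, n) binary array, one template per row.
    # weights:   (K,) mixing proportions summing to 1.
    # q:         probability of independently switching each component.
    rng = np.random.default_rng(seed)
    labels = rng.choice(len(templates), size=m, p=weights)  # latent template per example
    flips = (rng.random((m, templates.shape[1])) < q).astype(int)
    return np.bitwise_xor(templates[labels], flips), labels

def em_round(X, mu, pi, eps=1e-9):
    # One generic EM iteration for a Bernoulli mixture (not the paper's algorithm).
    # X: (m, n) binary data; mu: (K, n) component probabilities; pi: (K,) weights.
    # E-step: posterior responsibilities from component log-likelihoods.
    log_lik = X @ np.log(mu + eps).T + (1 - X) @ np.log(1 - mu + eps).T
    log_post = log_lik + np.log(pi + eps)
    log_post -= log_post.max(axis=1, keepdims=True)  # stabilize before exponentiating
    R = np.exp(log_post)
    R /= R.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and per-component Bernoulli means.
    Nk = R.sum(axis=0)
    return (R.T @ X) / (Nk[:, None] + eps), Nk / len(X)

For instance, drawing data from two well-separated templates and running em_round twice from a rough initialization typically recovers estimates close to the true templates; the paper quantifies how separation and dimension control this accuracy for its specific two-round scheme.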

Citation

Adrian Barbu, Tianfu Wu, Ying Nian Wu. "Learning mixtures of Bernoulli templates by two-round EM with performance guarantee." Electron. J. Statist. 8(2): 3004-3030, 2014. https://doi.org/10.1214/14-EJS981

Information

Published: 2014
First available in Project Euclid: 15 January 2015

zbMATH: 1303.62037
MathSciNet: MR3301299
Digital Object Identifier: 10.1214/14-EJS981

Keywords: clustering, performance bounds, unsupervised learning

Rights: Copyright © 2014 The Institute of Mathematical Statistics and the Bernoulli Society
