Electronic Journal of Statistics

Solution path clustering with adaptive concave penalty

Yuliya Marchetti and Qing Zhou


Abstract

Fast accumulation of large amounts of complex data has created a need for more sophisticated statistical methodologies to discover interesting patterns and better extract information from these data. The large scale of the data often results in challenging high-dimensional estimation problems where only a minority of the data shows specific grouping patterns. To address these emerging challenges, we develop a new clustering methodology that introduces the idea of a regularization path into unsupervised learning. A regularization path for a clustering problem is created by varying the degree of the sparsity constraint imposed on the differences between objects via the minimax concave penalty with adaptive tuning parameters. Instead of providing a single solution represented by a cluster assignment for each object, the method produces a short sequence of solutions that determines not only the cluster assignment but also a corresponding number of clusters for each solution. The optimization of the penalized loss function is carried out through an MM algorithm with block coordinate descent. The advantages of this clustering algorithm over existing methods are as follows: it does not require the number of clusters as input; it can simultaneously separate irrelevant or noisy observations that show no grouping pattern, which can greatly improve data interpretation; and it is a general methodology applicable to many clustering problems. We test this method on various simulated datasets and on gene expression data, where it shows performance better than or competitive with several established clustering methods.
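The penalized loss described in the abstract can be sketched in a few lines. The following is a toy illustration under stated assumptions, not the authors' implementation: it approximately minimizes the squared-error loss plus a minimax concave penalty (MCP) on pairwise differences of cluster centers, using an MM step that majorizes the penalty by a quadratic and then applies closed-form center updates. The function names, the Jacobi-style update sweep, and all parameter values are illustrative assumptions.

```python
import numpy as np

def mcp(d, lam, gamma):
    """Minimax concave penalty rho(d; lam, gamma) (Zhang, 2010), elementwise."""
    d = np.asarray(d, dtype=float)
    return np.where(d <= gamma * lam,
                    lam * d - d ** 2 / (2 * gamma),
                    0.5 * gamma * lam ** 2)

def mcp_grad(d, lam, gamma):
    """rho'(d): decays linearly to zero, so distant pairs receive no shrinkage."""
    d = np.asarray(d, dtype=float)
    return np.where(d <= gamma * lam, lam - d / gamma, 0.0)

def sp_cluster(X, lam, gamma=2.0, n_iter=200, eps=1e-4):
    """One point on the solution path: approximately minimize
        0.5 * sum_i ||x_i - mu_i||^2 + sum_{i<j} rho(||mu_i - mu_j||)
    by MM: majorize rho at the current iterate by a quadratic, then take one
    closed-form sweep of center updates (a Jacobi-style coordinate step)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    mu = X.copy()  # each object starts with its own center
    for _ in range(n_iter):
        diff = mu[:, None, :] - mu[None, :, :]
        d = np.maximum(np.linalg.norm(diff, axis=2), eps)
        C = mcp_grad(d, lam, gamma) / d  # quadratic-majorization weights c_ij
        np.fill_diagonal(C, 0.0)
        # closed-form update: mu_i = (x_i + sum_j c_ij mu_j) / (1 + sum_j c_ij)
        mu = (X + C @ mu) / (1.0 + C.sum(axis=1, keepdims=True))
    return mu

def labels_from_centers(mu, tol=0.1):
    """Read off a clustering: points whose centers (nearly) fused share a label."""
    labels = np.full(len(mu), -1, dtype=int)
    k = 0
    for i in range(len(mu)):
        if labels[i] >= 0:
            continue
        for j in range(i, len(mu)):
            if labels[j] < 0 and np.linalg.norm(mu[i] - mu[j]) < tol:
                labels[j] = k
        k += 1
    return labels

# Two tight 1-d groups; centers fuse within groups but not across them.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
mu = sp_cluster(X, lam=0.5)
print(labels_from_centers(mu))
```

Sweeping `lam` from small to large traces a solution path: at small `lam` every object keeps its own center, and as `lam` grows, nearby centers fuse into clusters. The flat tail of the MCP (zero derivative beyond `gamma * lam`) is what leaves well-separated pairs unshrunk, which is the mechanism that allows noisy, ungrouped observations to remain unclustered.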

Article information

Source
Electron. J. Statist. Volume 8, Number 1 (2014), 1569-1603.

Dates
First available in Project Euclid: 8 September 2014

Permanent link to this document
http://projecteuclid.org/euclid.ejs/1410181225

Digital Object Identifier
doi:10.1214/14-EJS934

Mathematical Reviews number (MathSciNet)
MR3263131

Zentralblatt MATH identifier
1297.62142

Subjects
Primary: 62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20] 62J07: Ridge regression; shrinkage estimators
Secondary: 68T05: Learning and adaptive systems [See also 68Q32, 91E40]

Keywords
Clustering; sparsity; concave regularization; coordinate descent; MM algorithm

Citation

Marchetti, Yuliya; Zhou, Qing. Solution path clustering with adaptive concave penalty. Electron. J. Statist. 8 (2014), no. 1, 1569--1603. doi:10.1214/14-EJS934. http://projecteuclid.org/euclid.ejs/1410181225.


