Electronic Journal of Statistics

Classification with asymmetric label noise: Consistency and maximal denoising

Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, and Clayton Scott

Abstract

In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions.
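
For concreteness, here is a sketch of the contamination model in notation commonly used for this problem (the symbols below are illustrative and may differ from the paper's): writing $P_0$ and $P_1$ for the true class-conditional distributions, the observed (noisy) class-conditional distributions are the mixtures

$\tilde{P}_0 = (1 - \pi_0)\,P_0 + \pi_0\,P_1, \qquad \tilde{P}_1 = (1 - \pi_1)\,P_1 + \pi_1\,P_0,$

with unknown noise proportions $\pi_0, \pi_1$. The condition that a majority of the observed labels are correct corresponds to $\pi_0 + \pi_1 < 1$, and mutual irreducibility requires that neither $P_i$ can be written as a nontrivial mixture $\gamma\,P_{1-i} + (1-\gamma)\,Q$ with $\gamma > 0$ and $Q$ a probability distribution.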

Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach.
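
In the same illustrative notation, mixture proportion estimation concerns the quantity

$\nu^*(F \mid H) = \max\{\,\nu \in [0,1] : F = \nu\,H + (1-\nu)\,G \ \text{for some distribution } G\,\},$

the largest proportion of a known distribution $H$ contained in an observed distribution $F$; the rate-of-convergence result referred to above concerns estimating $\nu^*$ from samples drawn from $F$ and $H$.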

Article information

Source
Electron. J. Statist., Volume 10, Number 2 (2016), 2780-2824.

Dates
Received: August 2015
First available in Project Euclid: 20 September 2016

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1474373835

Digital Object Identifier
doi:10.1214/16-EJS1193

Mathematical Reviews number (MathSciNet)
MR3549019

Zentralblatt MATH identifier
1347.62106

Subjects
Primary: 62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20]
Secondary: 68T10: Pattern recognition, speech recognition {For cluster analysis, see 62H30}

Keywords
Classification, label noise, mixture proportion estimation, surrogate loss, consistency

Citation

Blanchard, Gilles; Flaska, Marek; Handy, Gregory; Pozzi, Sara; Scott, Clayton. Classification with asymmetric label noise: Consistency and maximal denoising. Electron. J. Statist. 10 (2016), no. 2, 2780--2824. doi:10.1214/16-EJS1193. https://projecteuclid.org/euclid.ejs/1474373835
