Journal of Applied Mathematics

  • J. Appl. Math.
  • Volume 2014, Special Issue (2013), Article ID 729763, 9 pages.

Nonlinear Fault Separation for Redundancy Process Variables Based on FNN in MKFDA Subspace

Ying-ying Su, Shan Liang, Jing-zhe Li, Xiao-gang Deng, Tai-fu Li, and Cheng Zeng

Full-text: Open access


Nonlinear faults are difficult to separate when a process industry plant has many redundant process variables. This paper introduces an improved kernel Fisher discriminant analysis (KFDA) method. All original process variables with faults are first optimally classified in the multikernel KFDA (MKFDA) subspace to obtain Fisher criterion values; multiple kernels are used to account for the different distributions of the variables. Each variable is then eliminated once from the original set, and a new projection is computed along the same MKFDA direction. The difference between the new Fisher criterion value and the original one is tested: if the value changes markedly, the eliminated variable has an important effect on the faults, in analogy with the false nearest neighbors (FNN) method. The same test is applied to each remaining variable in turn. For further study, two overlapping nonlinear faults in the Tennessee Eastman process are separated using fewer observation variables. Results show that the proposed method eliminates redundant and irrelevant nonlinear process variables while enhancing the accuracy of classification.
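The elimination scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a single Gaussian kernel rather than the paper's multikernel combination, the two-class kernel Fisher criterion in the Mika et al. formulation, and hypothetical function names (`fisher_criterion`, `rank_variables`) and parameter values (`gamma`, `reg`) chosen only for the sketch.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fisher_criterion(X1, X2, gamma=0.5, reg=1e-6):
    """Two-class kernel Fisher criterion J = (alpha.(m1-m2))^2 / (alpha' N alpha),
    with alpha the KFDA direction in the span of the kernel expansions."""
    X = np.vstack([X1, X2])
    n, n1, n2 = len(X), len(X1), len(X2)
    K = rbf_kernel(X, X, gamma)
    K1, K2 = K[:, :n1], K[:, n1:]
    m1, m2 = K1.mean(axis=1), K2.mean(axis=1)   # class mean vectors in kernel space
    # within-class scatter matrix N, regularized for numerical stability
    N = (K1 @ (np.eye(n1) - np.full((n1, n1), 1.0 / n1)) @ K1.T
         + K2 @ (np.eye(n2) - np.full((n2, n2), 1.0 / n2)) @ K2.T
         + reg * np.eye(n))
    alpha = np.linalg.solve(N, m1 - m2)          # KFDA projection direction
    diff = alpha @ (m1 - m2)
    return diff ** 2 / (alpha @ N @ alpha)

def rank_variables(X1, X2, gamma=0.5):
    """Eliminate each variable once, recompute the criterion, and record the
    change; a large drop marks the variable as important to the fault."""
    J0 = fisher_criterion(X1, X2, gamma)
    deltas = {}
    for j in range(X1.shape[1]):
        keep = [k for k in range(X1.shape[1]) if k != j]
        deltas[j] = J0 - fisher_criterion(X1[:, keep], X2[:, keep], gamma)
    return J0, deltas
```

Variables whose removal barely changes the criterion are candidates for elimination as redundant; variables whose removal collapses the criterion carry the fault information.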

Article information

J. Appl. Math., Volume 2014, Special Issue (2013), Article ID 729763, 9 pages.

First available in Project Euclid: 1 October 2014


Su, Ying-ying; Liang, Shan; Li, Jing-zhe; Deng, Xiao-gang; Li, Tai-fu; Zeng, Cheng. Nonlinear Fault Separation for Redundancy Process Variables Based on FNN in MKFDA Subspace. J. Appl. Math. 2014, Special Issue (2013), Article ID 729763, 9 pages. doi:10.1155/2014/729763.



  • D. H. Lim, S. H. Lee, and M. G. Na, “Smart soft-sensing for the feedwater flowrate at PWRs using a GMDH algorithm,” IEEE Transactions on Nuclear Science, vol. 57, no. 1, pp. 340–347, 2010.
  • M. J. Brusco and D. Steinley, “Exact and approximate algorithms for variable selection in linear discriminant analysis,” Computational Statistics and Data Analysis, vol. 55, no. 1, pp. 123–131, 2011.
  • F. A. Michelsen, B. F. Lund, and I. J. Halvorsen, “Selection of optimal, controlled variables for the TEALARC LNG process,” Industrial and Engineering Chemistry Research, vol. 49, no. 18, pp. 8624–8632, 2010.
  • F. Cipollini and G. M. Gallo, “Automated variable selection in vector multiplicative error models,” Computational Statistics and Data Analysis, vol. 54, no. 11, pp. 2470–2486, 2010.
  • A. J. Miller, Subset Selection in Regression, Chapman and Hall, London, UK, 2002.
  • R. L. Mason and R. F. Gunst, Statistical Design and Analysis of Experiments with Applications to Engineering and Science, John Wiley & Sons, Hoboken, NJ, USA, 2004.
  • Y. Yang and J. O. Pedersen, “A comparative study on feature selection in text categorization,” in Proceedings of the 14th International Conference on Machine Learning, pp. 412–420, 1997.
  • K. Kira and L. A. Rendell, “The feature selection problem: traditional methods and a new algorithm,” in Proceedings of the 9th National Conference on Artificial Intelligence (AAAI '92), pp. 129–134, July 1992.
  • B. Pfahringer, “Compression-based feature subset selection,” in Proceedings of the Workshop on Data Engineering for Inductive Learning (IJCAI '95), pp. 101–106, 1995.
  • J. C. Isaac, Kernel methods and component analysis for pattern recognition [Ph.D. thesis], 2007.
  • J. C. Huang, J. S. Zhao, W. Sun, and Y. K. Ding, “PCA-based early fault diagnosis of solid waste incinerator,” Chemical Industry and Engineering Progress, vol. 25, no. 12, pp. 1489–1492, 2006.
  • S. Wold, M. Sjöström, and L. Eriksson, “PLS-regression: a basic tool of chemometrics,” Chemometrics and Intelligent Laboratory Systems, vol. 58, no. 2, pp. 109–130, 2001.
  • J. D. Wu, P. H. Chiang, Y. W. Chang, and Y. J. Shiao, “An expert system for fault diagnosis in internal combustion engines using probability neural network,” Expert Systems with Applications, vol. 34, no. 4, pp. 2704–2713, 2008.
  • D. F. Wang, S. J. Wang, and J. He, “Maintaining and fault removing on hydraulic system of CAK6140,” Machinery Design and Manufacture, vol. 7, pp. 161–162, 2010.
  • J. H. Li and P. L. Cui, “Improved kernel fisher discriminant analysis for fault diagnosis,” Expert Systems with Applications, vol. 36, no. 2, pp. 1423–1432, 2009.
  • M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre, “Generalized power method for sparse principal component analysis,” Journal of Machine Learning Research, vol. 11, pp. 517–553, 2010.
  • K. Kim, J.-M. Lee, and I.-B. Lee, “A novel multivariate regression approach based on kernel partial least squares with orthogonal signal correction,” Chemometrics and Intelligent Laboratory Systems, vol. 79, no. 1-2, pp. 22–30, 2005.
  • R. Jenssen, “Kernel entropy component analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5, pp. 847–860, 2010.
  • N. Otopal, “Restricted kernel canonical correlation analysis,” Linear Algebra and Its Applications, vol. 437, no. 1, pp. 1–13, 2012.
  • M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
  • H. Y. Wang and Z. H. Sheng, “Choice of the parameters for the phase space reconstruction of chaotic time series,” Journal of Southeast University, vol. 30, no. 5, pp. 113–117, 2000.
  • Z. B. Zhu and Z. H. Song, “A novel fault diagnosis system using pattern classification on kernel FDA subspace,” Expert Systems with Applications, vol. 38, no. 6, pp. 6895–6905, 2011.
  • S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, “Fisher discriminant analysis with kernels,” in Proceedings of the 9th IEEE Workshop on Neural Networks for Signal Processing (NNSP '99), pp. 41–48, August 1999.
  • B. Schölkopf, A. Smola, and K.-R. Müller, “Nonlinear component analysis as a kernel eigenvalue problem,” Neural Computation, vol. 10, no. 5, pp. 1299–1319, 1998.
  • J. Y. Gan and Y. W. Zhang, “Generalized kernel fisher optimal discriminant in pattern recognition,” Pattern Recognition and Artificial Intelligence, vol. 15, no. 4, pp. 429–434, 2002.
  • J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Computers and Chemical Engineering, vol. 17, no. 3, pp. 245–255, 1993.
  • N. Lawrence Ricker, “Tennessee Eastman,” 2013.