Journal of Applied Mathematics

  • J. Appl. Math.
  • Volume 2014, Special Issue (2013), Article ID 706159, 6 pages.

A New Method of 3D Facial Expression Animation

Shuo Sun and Chunbao Ge

Full-text: Open access

Abstract

Synthesizing expressive facial animation is a challenging topic within the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driven framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework learns and predicts the regression relationship between the facial animation parameters (FAPs) and the expression ratio image parameters. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, from which reasonable expression ratio images can be reconstructed. We then learn a support vector regression mapping so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we drive our 3D face animation system with the resulting FAPs, and it works effectively. A minimal sketch of this pipeline follows the abstract.
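The sketch below illustrates, under stated assumptions, the two learning steps described in the abstract: PCA to build an eigen-ERI parameter space and SVR to map eigen-ERI parameters to MPEG-4 FAPs. It is not the authors' implementation; the data file names, number of components, and kernel settings are illustrative placeholders, and scikit-learn is used as a stand-in for whatever SVR/PCA tooling the paper employed.

    # Hedged sketch of the eigen-ERI -> FAP regression pipeline (assumed setup).
    # eri_train: flattened expression ratio images, shape (n_samples, n_pixels)
    # fap_train: MPEG-4 facial animation parameters, shape (n_samples, n_faps)
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVR
    from sklearn.multioutput import MultiOutputRegressor

    # Placeholder training data files (hypothetical names).
    eri_train = np.load("eri_train.npy")
    fap_train = np.load("fap_train.npy")

    # Step 1: PCA builds the eigen-ERI space; each ERI is reduced to a short
    # parameter vector (20 components chosen arbitrarily here).
    pca = PCA(n_components=20)
    eigen_eri_train = pca.fit_transform(eri_train)

    # Step 2: SVR learns the mapping from eigen-ERI parameters to FAPs,
    # with one regressor per FAP dimension.
    svr = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
    svr.fit(eigen_eri_train, fap_train)

    # Run time: project a new ERI into the eigen-ERI space and predict the
    # FAP vector that drives the MPEG-4 face model.
    eri_new = np.load("eri_new.npy")
    fap_pred = svr.predict(pca.transform(eri_new))

In this reading of the abstract, the PCA projection both compresses the ERI and regularizes the regression, so the SVR only has to learn a low-dimensional mapping rather than a per-pixel one.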

Article information

Source
J. Appl. Math., Volume 2014, Special Issue (2013), Article ID 706159, 6 pages.

Dates
First available in Project Euclid: 1 October 2014

Permanent link to this document
https://projecteuclid.org/euclid.jam/1412177534

Digital Object Identifier
doi:10.1155/2014/706159

Citation

Sun, Shuo; Ge, Chunbao. A New Method of 3D Facial Expression Animation. J. Appl. Math. 2014, Special Issue (2013), Article ID 706159, 6 pages. doi:10.1155/2014/706159. https://projecteuclid.org/euclid.jam/1412177534



References

  • E. Cosatto, Sample-based talking-head synthesis [Ph.D. thesis], Swiss Federal Institute of Technology, 2002.
  • I. S. Pandzic, “Facial Animation Framework for the web and mobile platforms,” in Proceedings of the 7th International Conference on 3D Web Technology (Web3D '02), pp. 27–34, February 2002.
  • S. Kshirsagar, S. Garchery, and N. Magnenat-Thalmann, “Feature point based mesh deformation applied to MPEG-4 facial animation,” in Proceedings of the IFIP TC5/WG5.10 DEFORM'2000 Workshop and AVATARS'2000 Workshop on Deformable Avatars (DEFORM '00/AVATARS '00), pp. 24–34, Kluwer Academic Press, 2001.
  • F. Parke and K. Waters, Computer Facial Animation, A. K. Peters, Wellesley, Mass, USA, 1996.
  • “MPEG-4 Overview, ISO/IEC JTC1/SC29N2995,” 1999, http://web.itu.edu.tr/~pazarci/mpeg4/MPEG_Overview1_w2196.htm.
  • Z. Liu, Y. Shan, and Z. Zhang, “Expressive expression mapping with ratio images,” in Proceedings of the Computer Graphics Annual Conference (SIGGRAPH '01), pp. 271–276, August 2001.
  • F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, “Synthesizing realistic facial expressions from photographs,” in Proceedings of the Annual Conference on Computer Graphics (SIGGRAPH '98), pp. 75–84, July 1998.
  • T. Ezzat and T. Poggio, “Facial analysis and synthesis using image-based models,” in Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition, pp. 116–120, October 1996.
  • Y. Chang, M. Vieira, M. Turk, and L. Velho, “Automatic 3D facial expression analysis in videos,” in Proceedings of the 2nd International Conference on Analysis and Modelling of Faces and Gestures (AMFG '05), 2005.
  • P.-H. Tu, I.-C. Lin, J.-S. Yeh, R.-H. Liang, and M. Ouhyoung, “Expression detail for realistic facial animation,” in Proceedings of the Computer-Aided Design and Graphics Conference (CAD '03), pp. 20–25, Macau, China, October 2003.
  • D.-L. Jiang, W. Gao, Z.-Q. Wang, and Y.-Q. Chen, “Realistic 3D facial animations with partial expression ratio image,” Chinese Journal of Computers, vol. 27, no. 6, pp. 750–757, 2004.
  • W. Zhu, Y. Chen, Y. Sun, B. Yin, and D. Jiang, “SVR-based facial texture driving for realistic expression synthesis,” in Proceedings of the 3rd International Conference on Image and Graphics (ICIG '04), pp. 456–459, December 2004.
  • Y. Du and X. Lin, “Emotional facial expression model building,” Pattern Recognition Letters, vol. 24, no. 16, pp. 2923–2934, 2003.
  • W. U. Yuan, “An algorithm for parameterized expression mapping,” Application of Computer Research, in press.
  • D. Jiang, Z. Li, Z. Wang, and W. Gao, “Animating 3D facial models with MPEG-4 FaceDefTables,” in Proceedings of the 35th Annual Simulation Symposium, pp. 395–400, 2002.
  • V. N. Vapnik, Statistical Learning Theory, Adaptive and Learning Systems for Signal Processing, Communications, and Control, John Wiley & Sons, New York, NY, USA, 1998.
  • B. Schölkopf and A. Smola, Learning with Kernels, MIT Press, Cambridge, Mass, USA, 2002.