Journal of Applied Mathematics

An Optimally Generalized Steepest-Descent Algorithm for Solving Ill-Posed Linear Systems

Chein-Shan Liu

Full-text: Open access

Abstract

It is known that the steepest-descent method converges quickly during the first few iterations and then slows down. We modify the original steplength and descent direction by an optimization argument, taking the new steplength as a merit function to be maximized. An optimal iterative algorithm with an m-vector descent direction in a Krylov subspace is constructed, in which the m optimal weighting parameters are solved in closed form to accelerate convergence when solving ill-posed linear problems. The optimally generalized steepest-descent algorithm (OGSDA) is proven to be convergent with a very fast convergence speed, and to be accurate and robust against noisy disturbances, as confirmed by numerical tests on some well-known ill-posed linear problems and linear inverse problems.
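
To illustrate the kind of iteration the abstract refers to, the sketch below implements the classical steepest-descent step for a linear system Ax = b together with a schematic m-vector Krylov-subspace variant whose combination weights are obtained by a least-squares residual minimization. This is only an illustrative analogue under those assumptions, not the paper's closed-form OGSDA; the function names, the parameter m, and the least-squares weighting rule are introduced here purely for illustration.

```python
import numpy as np

def steepest_descent(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Classical steepest descent for A x = b with A symmetric positive definite."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        r = b - A @ x                      # residual = negative gradient = descent direction
        if np.linalg.norm(r) < tol:
            break
        denom = r @ (A @ r)
        if denom <= 0:                     # guard: the steplength formula assumes A is SPD
            break
        alpha = (r @ r) / denom            # steplength minimizing the quadratic functional
        x = x + alpha * r
    return x

def krylov_descent(A, b, x0=None, m=5, tol=1e-10, max_iter=200):
    """Illustrative m-vector variant: the correction is a weighted combination of the
    Krylov vectors r, A r, ..., A^(m-1) r, with the weights chosen by least squares to
    minimize the residual norm.  This is a schematic analogue, not the paper's OGSDA."""
    n = len(b)
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        K = np.empty((n, m))               # Krylov basis [r, A r, ..., A^(m-1) r]
        v = r
        for j in range(m):
            K[:, j] = v
            v = A @ v
        w, *_ = np.linalg.lstsq(A @ K, r, rcond=None)   # weights minimizing ||r - A K w||
        x = x + K @ w                      # m-vector descent update
    return x
```

In this sketch the m weighting parameters are computed numerically by least squares at every sweep; the paper's contribution, by contrast, is a closed-form expression for the m optimal weights, obtained by maximizing the new steplength as a merit function.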

Article information

Source
J. Appl. Math., Volume 2013 (2013), Article ID 154358, 15 pages.

Dates
First available in Project Euclid: 14 March 2014

Permanent link to this document
https://projecteuclid.org/euclid.jam/1394808302

Digital Object Identifier
doi:10.1155/2013/154358

Mathematical Reviews number (MathSciNet)
MR3145010

Zentralblatt MATH identifier
06950536

Citation

Liu, Chein-Shan. An Optimally Generalized Steepest-Descent Algorithm for Solving Ill-Posed Linear Systems. J. Appl. Math. 2013 (2013), Article ID 154358, 15 pages. doi:10.1155/2013/154358. https://projecteuclid.org/euclid.jam/1394808302

