Open Access
June 2006
A memory gradient method without line search for unconstrained optimization
Yasushi Narushima
SUT J. Math. 42(2): 191-206 (June 2006). DOI: 10.55937/sut/1173205671

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution provided the step sizes satisfy the Wolfe conditions within a line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size that requires no line search and applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method of Narushima and Yabe and establish its global convergence.
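The paper itself gives the precise formulas. As a rough illustration only, the sketch below combines a generic memory gradient direction (steepest descent plus a safeguarded combination of the last few directions) with a line-search-free step size of the Sun and Zhang type, alpha = -delta * g^T d / (Q * ||d||^2). The direction weights, the constants delta and Q, and the descent safeguard are placeholder assumptions for illustration, not the formulas of Narushima and Yabe (2006) or Sun and Zhang (2001).

```python
import numpy as np

def memory_gradient_no_linesearch(grad, x0, m=3, delta=0.5, Q=1.0,
                                  tol=1e-6, max_iter=1000):
    """Illustrative memory gradient iteration with a line-search-free step size.

    The direction weights and the constants delta, Q are placeholder choices,
    not the formulas analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    memory = []                                  # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Memory gradient direction: steepest descent plus a damped
        # combination of the stored directions (illustrative weights).
        d = -g.copy()
        for d_old in memory:
            beta = 0.5 * np.linalg.norm(g) / ((len(memory) + 1.0) * np.linalg.norm(d_old))
            d += beta * d_old
        if g @ d >= 0.0:                         # safeguard: keep a descent direction
            d = -g
        # Fixed step size in the spirit of Sun and Zhang (2001):
        # alpha = -delta * g^T d / (Q * ||d||^2), positive for a descent direction.
        alpha = -delta * (g @ d) / (Q * (d @ d))
        x = x + alpha * d
        memory.append(d)
        if len(memory) > m:
            memory.pop(0)
    return x

# Example: minimize the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_star = memory_gradient_no_linesearch(grad=lambda x: x, x0=np.array([3.0, -4.0]))
```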

Acknowledgements

The author would like to thank the referee for valuable comments. The author is also grateful to Professor Hiroshi Yabe of Tokyo University of Science for his valuable advice and encouragement, and to Dr. Hideho Ogasawara of Tokyo University of Science for his valuable comments.

Citation


Yasushi Narushima. "A memory gradient method without line search for unconstrained optimization." SUT J. Math. 42 (2) 191 - 206, June 2006. https://doi.org/10.55937/sut/1173205671

Information

Received: 24 July 2006; Published: June 2006
First available in Project Euclid: 18 June 2022

Digital Object Identifier: 10.55937/sut/1173205671

Subjects:
Primary: 65K05, 90C06, 90C30

Keywords: global convergence, large scale problems, memory gradient method, nonlinear programming, optimization

Rights: Copyright © 2006 Tokyo University of Science
