Journal of Applied Mathematics

Optimal Algorithms and the BFGS Updating Techniques for Solving Unconstrained Nonlinear Minimization Problems

Chein-Shan Liu


Abstract

To solve an unconstrained nonlinear minimization problem, we propose an optimal algorithm (OA) and a globally optimal algorithm (GOA) that deflect the gradient direction to the best descent direction at each iteration step, with the optimal deflection parameter derived explicitly. An invariant manifold, defined for the model problem in terms of a locally quadratic function, is used to derive a purely iterative algorithm, and its convergence is proven. The rank-two updating technique of BFGS is then employed, yielding several novel algorithms that are faster than the steepest descent method (SDM) and the variable metric method (DFP). Six numerical examples are examined and compared with exact solutions, showing that OA, GOA, and their updated variants have superior computational efficiency and accuracy.
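The full text is not reproduced here, so the paper's specific OA/GOA deflection formulas are not shown. As general background for the BFGS rank-two update the abstract refers to, the following is a minimal textbook sketch (not the author's algorithm), applied to Rosenbrock's test function from the reference list; the function names and the Armijo backtracking line search are illustrative choices, not taken from the paper.

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock's banana function (Rosenbrock, 1960), a standard test problem."""
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    """Analytic gradient of the Rosenbrock function."""
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def bfgs(f, grad, x0, tol=1e-8, max_iter=500):
    """Textbook BFGS: rank-two update of the inverse-Hessian approximation H."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        # Backtracking line search satisfying the Armijo sufficient-decrease condition.
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * p) > f(x) + c1 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p                  # step
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # gradient change
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            # Rank-two BFGS update of the inverse Hessian.
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

x_star = bfgs(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))
print(x_star)  # should approach the minimizer (1, 1)
```

Against pure steepest descent (p = -g), the curvature information accumulated in H is what gives quasi-Newton methods their faster convergence on ill-conditioned problems such as Rosenbrock's valley.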

Article information

Source
J. Appl. Math., Volume 2014 (2014), Article ID 324181, 14 pages.

Dates
First available in Project Euclid: 2 March 2015

Permanent link to this document
https://projecteuclid.org/euclid.jam/1425305598

Digital Object Identifier
doi:10.1155/2014/324181

Mathematical Reviews number (MathSciNet)
MR3182365

Zentralblatt MATH identifier
07010598

Citation

Liu, Chein-Shan. Optimal Algorithms and the BFGS Updating Techniques for Solving Unconstrained Nonlinear Minimization Problems. J. Appl. Math. 2014 (2014), Article ID 324181, 14 pages. doi:10.1155/2014/324181. https://projecteuclid.org/euclid.jam/1425305598



References

  • J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
  • M. Raydan, “On the Barzilai and Borwein choice of steplength for the gradient method,” IMA Journal of Numerical Analysis, vol. 13, no. 3, pp. 321–326, 1993.
  • M. Raydan, “The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem,” SIAM Journal on Optimization, vol. 7, no. 1, pp. 26–33, 1997.
  • A. Friedlander, J. M. Martinez, B. Molina, and M. Raydan, “Gradient method with retards and generalizations,” SIAM Journal on Numerical Analysis, vol. 36, no. 1, pp. 275–289, 1999.
  • M. Raydan and B. F. Svaiter, “Relaxed steepest descent and Cauchy-Barzilai-Borwein method,” Computational Optimization and Applications, vol. 21, no. 2, pp. 155–167, 2002.
  • Y. H. Dai, J. Y. Yuan, and Y. Yuan, “Modified two-point stepsize gradient methods for unconstrained optimization,” Computational Optimization and Applications, vol. 22, no. 1, pp. 103–109, 2002.
  • Y. H. Dai and L.-Z. Liao, “R-linear convergence of the Barzilai and Borwein gradient method,” IMA Journal of Numerical Analysis, vol. 22, no. 1, pp. 1–10, 2002.
  • Y. H. Dai and Y. Yuan, “Alternate minimization gradient method,” IMA Journal of Numerical Analysis, vol. 23, no. 3, pp. 377–393, 2003.
  • R. Fletcher, “On the Barzilai-Borwein gradient method,” in Optimization and Control with Applications, L. Qi, K. Teo, and X. Yang, Eds., vol. 96 of Applied Optimization, pp. 235–256, Springer, New York, NY, USA, 2005.
  • Y.-X. Yuan, “A new stepsize for the steepest descent method,” Journal of Computational Mathematics, vol. 24, no. 2, pp. 149–156, 2006.
  • E. G. Birgin and J. M. Martinez, “A spectral conjugate gradient method for unconstrained optimization,” Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
  • N. Andrei, “Scaled conjugate gradient algorithms for unconstrained optimization,” Computational Optimization and Applications, vol. 38, no. 3, pp. 401–416, 2007.
  • N. Andrei, “A Dai-Yuan conjugate gradient algorithm with sufficient descent and conjugacy conditions for unconstrained optimization,” Applied Mathematics Letters, vol. 21, no. 2, pp. 165–171, 2008.
  • N. Andrei, “New accelerated conjugate gradient algorithms as a modification of Dai-Yuan's computational scheme for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 234, no. 12, pp. 3397–3410, 2010.
  • L. Zhang, “A new Liu-Storey type nonlinear conjugate gradient method for unconstrained optimization problems,” Journal of Computational and Applied Mathematics, vol. 225, no. 1, pp. 146–157, 2009.
  • S. Babaie-Kafaki, R. Ghanbari, and N. Mahdavi-Amiri, “Two new conjugate gradient methods based on modified secant equations,” Journal of Computational and Applied Mathematics, vol. 234, no. 5, pp. 1374–1386, 2010.
  • Z.-J. Shi and J. Guo, “A new family of conjugate gradient methods,” Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 444–457, 2009.
  • W. C. Davidon, “Variable metric method for minimization,” AEC Research Development Report ANL-5990, 1959.
  • R. Fletcher and M. Powell, “A rapidly convergent descent method for minimization,” The Computer Journal, vol. 6, pp. 163–168, 1963.
  • C.-S. Liu, “A Jordan algebra and dynamic system with associator as vector field,” International Journal of Non-Linear Mechanics, vol. 35, no. 3, pp. 421–429, 2000.
  • C. G. Broyden, “The convergence of a class of double-rank minimization algorithms: 2. The new algorithm,” Journal of the Institute of Mathematics and its Applications, vol. 6, no. 3, pp. 222–231, 1970.
  • R. Fletcher, “A new approach to variable metric algorithms,” The Computer Journal, vol. 13, no. 3, pp. 317–322, 1970.
  • D. Goldfarb, “A family of variable-metric methods derived by variational means,” Mathematics of Computation, vol. 24, pp. 23–26, 1970.
  • D. F. Shanno, “Conditioning of quasi-Newton methods for function minimization,” Mathematics of Computation, vol. 24, pp. 647–656, 1970.
  • Y.-H. Dai, “Convergence properties of the BFGS algorithm,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 693–701, 2003.
  • Y.-H. Dai, “A perfect example for the BFGS method,” Mathematical Programming A, vol. 138, no. 1-2, pp. 501–530, 2013.
  • P. E. Gill, W. Murray, and P. A. Pitfield, “The implementation of two revised quasi-Newton algorithms for unconstrained optimization,” Tech. Rep. NAC-11, National Physical Laboratory, Teddington, UK, 1972.
  • H. H. Rosenbrock, “An automatic method for finding the greatest or least value of a function,” The Computer Journal, vol. 3, no. 3, pp. 175–184, 1960.
  • H.-C. Kuo, J.-R. Chang, and C.-H. Liu, “Particle swarm optimization for global optimization problems,” Journal of Marine Science and Technology, vol. 14, no. 3, pp. 170–181, 2006.
  • C.-S. Liu and S. N. Atluri, “A fictitious time integration method (FTIM) for solving mixed complementarity problems with applications to non-linear optimization,” Computer Modeling in Engineering & Sciences, vol. 34, no. 2, pp. 155–178, 2008.
  • P. Bouvry, F. Arbab, and F. Seredynski, “Distributed evolutionary optimization, in Manifold: Rosenbrock's function case study,” Information Sciences, vol. 122, no. 2–4, pp. 141–159, 2000.
  • S. Kok and C. Sandrock, “Locating and characterizing the stationary points of the extended Rosenbrock function,” Evolutionary Computation, vol. 17, no. 3, pp. 437–453, 2009.
  • M. J. D. Powell, “An iterative method for finding stationary values of a function of several variables,” The Computer Journal, vol. 5, no. 2, pp. 147–151, 1962.