EconPapers
Numerical Methods of Optimization

Jean-Pierre Corriou (University of Lorraine)

Chapter Chapter 9 in Numerical Methods and Optimization, 2021, pp 505-574 from Springer

Abstract: The numerical methods of optimization start with the optimization of functions of one variable, using bisection, Fibonacci, and Newton methods. Functions of several variables then occupy the main part, divided into direct-search methods and gradient methods. Among the direct-search methods, many are presented: simplex, Hooke and Jeeves, Powell, Rosenbrock, Nelder–Mead, Box complex, and genetic algorithms with quasi-global optimization. Gradient methods are first explained from a general point of view for quadratic and non-quadratic functions, including steepest descent, conjugate gradients, Newton–Raphson, quasi-Newton, Gauss–Newton, and Levenberg–Marquardt. The solution of large systems is also discussed. All these methods are illustrated by significant numerical examples.
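To give a flavor of two method families named in the abstract, here is a minimal sketch of one-variable bisection (applied to the sign of the derivative) and fixed-step steepest descent. The test functions, step length, and tolerances are illustrative assumptions, not examples taken from the chapter:

```python
def bisection_minimize(dfdx, a, b, tol=1e-8):
    """Locate the minimum of a unimodal function on [a, b] by
    bisecting on the sign of its derivative dfdx (assumed example)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if dfdx(m) > 0:       # function increasing at m: minimum lies to the left
            b = m
        else:                 # function decreasing at m: minimum lies to the right
            a = m
    return 0.5 * (a + b)

def steepest_descent(grad, x, lr=0.1, tol=1e-8, max_iter=10_000):
    """Steepest descent with a fixed step length lr on a function of
    several variables, stopping when the gradient is nearly zero."""
    for _ in range(max_iter):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Illustrative 1-D case: f(x) = (x - 2)**2, so f'(x) = 2*(x - 2); minimum at x = 2
x_star = bisection_minimize(lambda x: 2 * (x - 2), 0.0, 5.0)

# Illustrative 2-D case: f(x, y) = (x - 1)**2 + 2*(y + 3)**2; minimum at (1, -3)
x_min = steepest_descent(lambda v: [2 * (v[0] - 1), 4 * (v[1] + 3)], [0.0, 0.0])
```

In the chapter's setting, the fixed step length would typically be replaced by a line search (exact for quadratic functions), which is what distinguishes the gradient methods it surveys.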

Date: 2021
Citations: View citations in EconPapers (2)

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.



Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-3-030-89366-8_9

Ordering information: This item can be ordered from
http://www.springer.com/9783030893668

DOI: 10.1007/978-3-030-89366-8_9


More chapters in Springer Optimization and Its Applications from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-01
Handle: RePEc:spr:spochp:978-3-030-89366-8_9