Discrete Gradient Methods
Adil Bagirov,
Napsu Karmitsa and
Marko M. Mäkelä
Additional contact information
Adil Bagirov: School of Information Technology and Mathematical Sciences, University of Ballarat
Napsu Karmitsa: University of Turku
Marko M. Mäkelä: University of Turku
Chapter 15 in Introduction to Nonsmooth Optimization, 2014, pp. 327–333, Springer
Abstract: In this chapter, we introduce two discrete gradient methods that can be considered semi-derivative-free methods in the sense that they do not use subgradient information and approximate the subgradient only at the end of the solution process (i.e., near the optimal point). The methods introduced are the original discrete gradient method for small-scale nonsmooth optimization and its limited memory bundle variant, the limited memory discrete gradient bundle method, for medium- and semi-large-scale problems.
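To illustrate the derivative-free idea behind the methods described in the abstract, the following is a minimal sketch of approximating a (sub)gradient of a nonsmooth function by coordinate-wise forward differences. Note that this is a hypothetical simplification for illustration only: the chapter's discrete gradient is a more careful construction, and the function `finite_difference_subgradient`, the step size `h`, and the test function below are assumptions, not taken from the chapter.

```python
import numpy as np

def finite_difference_subgradient(f, x, h=1e-6):
    """Approximate a (sub)gradient of f at x by forward differences.

    Illustrative sketch only: near a point where f is differentiable,
    this recovers the gradient without any subgradient information,
    which is the derivative-free spirit of discrete gradient methods.
    """
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h  # perturb one coordinate at a time
        g[i] = (f(x + e) - fx) / h
    return g

# Nonsmooth test function (assumed example): f(x) = |x1| + 2|x2|
f = lambda x: abs(x[0]) + 2 * abs(x[1])

# At (1, -1), f happens to be differentiable with gradient (1, -2).
g = finite_difference_subgradient(f, [1.0, -1.0])
```

At a kink (e.g., the origin of |x|), a one-sided difference like this picks out only one element related to the subdifferential; the chapter's methods handle such points with a more refined construction.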
Keywords: Search Direction; Outer Iteration; Descent Direction; Iteration Point; Bundle Method
Date: 2014
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-319-08114-4_15
Ordering information: This item can be ordered from
http://www.springer.com/9783319081144
DOI: 10.1007/978-3-319-08114-4_15