Delaunay-based derivative-free optimization via global surrogates. Part III: nonconvex constraints

Ryan Alimo, Pooriya Beyhaghi and Thomas R. Bewley
Additional contact information
Ryan Alimo: UC San Diego
Pooriya Beyhaghi: UC San Diego
Thomas R. Bewley: UC San Diego

Journal of Global Optimization, 2020, vol. 77, issue 4, No 3, 743-776

Abstract: This paper introduces a Delaunay-based derivative-free optimization algorithm, dubbed Δ-DOGS(Ω), for problems with both (a) a nonconvex, computationally expensive objective function f(x), and (b) nonlinear, computationally expensive constraint functions c_ℓ(x) which, taken together, define a nonconvex, possibly even disconnected feasible domain Ω, which is assumed to lie within a known rectangular search domain Ω_s, everywhere within which f(x) and the c_ℓ(x) may be evaluated. Approximations of both the objective function f(x) and the feasible domain Ω are developed and refined as the iterations proceed. The approach is practically limited to problems with fewer than about ten adjustable parameters. The work is an extension of our original Delaunay-based optimization algorithm (see JOGO DOI: 10.1007/s10898-015-0384-2), and inherits many of the constructions and strengths of that algorithm, including: (1) a surrogate function p(x) interpolating all existing function evaluations and summarizing their trends, (2) a synthetic, piecewise-quadratic uncertainty function e(x) built on the framework of a Delaunay triangulation amongst existing datapoints, (3) a tunable balance between global exploration (large K) and local refinement (small K), (4) provable global convergence for a sufficiently large K, under the assumption that the objective and constraint functions are twice differentiable with bounded Hessians, (5) an Adaptive-K variant of the algorithm that efficiently tunes K automatically based on a target value of the objective function, and (6) remarkably fast global convergence on a variety of benchmark problems.
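
The constructions enumerated in the abstract are concrete enough to sketch. Below is a minimal, self-contained Python illustration of one search step in the spirit of Δ-DOGS(Ω), not the authors' implementation: a scipy RBF interpolant stands in for the polyharmonic-spline surrogate p(x), the piecewise-quadratic uncertainty function e(x) is assembled from the circumspheres of the simplices of a Delaunay triangulation, and the next sample point minimizes s(x) = p(x) - K e(x) over a candidate grid, restricted to points passing a relaxed surrogate feasibility test c_hat(x) - K e(x) <= 0 (an assumed stand-in for the paper's treatment of Ω). All names and parameter values are illustrative.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import RBFInterpolator

def circumsphere(verts):
    # Circumcenter z and squared circumradius r2 of a d-simplex,
    # given as a (d+1) x d array of vertices: solve 2(v_i - v_0) . z
    # = |v_i|^2 - |v_0|^2 for i = 1..d.
    A = 2.0 * (verts[1:] - verts[0])
    b = np.sum(verts[1:] ** 2, axis=1) - np.sum(verts[0] ** 2)
    z = np.linalg.solve(A, b)
    return z, np.sum((verts[0] - z) ** 2)

def uncertainty(tri, X):
    # Piecewise-quadratic uncertainty e(x) = r_k^2 - |x - z_k|^2 on the
    # simplex containing x; it vanishes at every existing datapoint.
    e = np.zeros(len(X))
    for i, s in enumerate(tri.find_simplex(X)):
        if s >= 0:  # skip points outside the hull of the datapoints
            z, r2 = circumsphere(tri.points[tri.simplices[s]])
            e[i] = r2 - np.sum((X[i] - z) ** 2)
    return e

def search_step(xs, fs, cs, K, cand):
    # One step: interpolate f and the constraint c over the datapoints,
    # then pick the candidate minimizing p(x) - K e(x) among points deemed
    # feasible by the relaxed surrogate constraint c_hat(x) - K e(x) <= 0.
    p = RBFInterpolator(xs, fs)      # stand-in surrogate of f
    c_hat = RBFInterpolator(xs, cs)  # stand-in surrogate of the constraint
    e = uncertainty(Delaunay(xs), cand)
    s = np.where(c_hat(cand) - K * e <= 0.0, p(cand) - K * e, np.inf)
    return cand[np.argmin(s)]

# Toy usage: minimize |x - (0.3, 0.3)|^2 over [0,1]^2 subject to
# c(x) = 0.2 - |x - (0.3, 0.3)| <= 0, so the feasible set excludes a
# disc around the unconstrained optimum (a nonconvex feasible domain).
f = lambda X: np.sum((X - 0.3) ** 2, axis=1)
c = lambda X: 0.2 - np.sqrt(np.sum((X - 0.3) ** 2, axis=1))
xs = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.], [0.5, 0.5]])
fs, cs = f(xs), c(xs)
g = np.linspace(0.0, 1.0, 41)
cand = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
# Drop candidates coinciding with initial datapoints to avoid resampling.
cand = cand[~np.any(np.all(cand[:, None, :] == xs[None, :, :], axis=2), axis=1)]
for _ in range(15):
    x_new = search_step(xs, fs, cs, K=3.0, cand=cand)
    cand = cand[~np.all(cand == x_new, axis=1)]
    xs = np.vstack([xs, x_new])
    fs, cs = np.append(fs, f(x_new[None])), np.append(cs, c(x_new[None]))
feas = cs <= 0.0
print("best feasible point:", xs[feas][np.argmin(fs[feas])])

With K large the search favors candidates far from existing datapoints (where e(x) is large); as K decreases it concentrates near the incumbent minimum of p(x), which is the exploration/refinement dial described in item (3) of the abstract.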

Keywords: Response surface methods; Nonconvex constraints
Date: 2020

Downloads (external link): http://link.springer.com/10.1007/s10898-019-00854-2 (abstract, text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:jglopt:v:77:y:2020:i:4:d:10.1007_s10898-019-00854-2

Ordering information: This journal article can be ordered from
http://www.springer. ... search/journal/10898

DOI: 10.1007/s10898-019-00854-2

Journal of Global Optimization is currently edited by Sergiy Butenko

More articles in Journal of Global Optimization from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:jglopt:v:77:y:2020:i:4:d:10.1007_s10898-019-00854-2