Finite Difference Gradient Approximation: To Randomize or Not?

Katya Scheinberg
Additional contact information
Katya Scheinberg: School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853

INFORMS Journal on Computing, 2022, vol. 34, issue 5, 2384-2388

Abstract: We discuss two classes of methods for approximating gradients of noisy black box functions: the classical finite difference method and the recently popular randomized finite difference methods. Despite the popularity of the latter, we argue that it is unclear whether the randomized schemes have an advantage over the traditional method when employed inside an optimization algorithm. We point to theoretical and practical evidence showing that the opposite is true, at least in a general optimization setting. We then pose the question of whether there is a particular setting in which the advantage of the new methods can be clearly shown, at least numerically. The larger underlying challenge is the development of black box optimization methods that scale well with the problem dimension.
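
To make the comparison concrete, here is a minimal sketch (not taken from the paper; the noisy quadratic test function, step size h, and sample count are illustrative assumptions): the classical forward-difference estimator spends one function evaluation per coordinate, whereas a randomized estimator probes a single random direction per sample.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    # Classical forward finite differences: n extra evaluations of f
    # for an n-dimensional point x, one per coordinate direction.
    n = x.size
    fx = f(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        g[i] = (f(x + h * e) - fx) / h
    return g

def randomized_fd_gradient(f, x, h=1e-6, num_samples=1):
    # Randomized finite differences (Gaussian-smoothing style): each sample
    # uses two evaluations of f along one random direction u; averaging over
    # samples estimates the gradient of a smoothed version of f.
    n = x.size
    fx = f(x)
    g = np.zeros(n)
    for _ in range(num_samples):
        u = np.random.randn(n)
        g += (f(x + h * u) - fx) / h * u
    return g / num_samples

# Noisy black box: a quadratic plus small additive noise (assumed for this demo).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x) + 1e-8 * rng.standard_normal()

x0 = np.ones(10)
print(fd_gradient(f, x0))                             # close to the true gradient x0
print(randomized_fd_gradient(f, x0, num_samples=50))  # noisier estimate from random directions
```

The tradeoff the abstract discusses is visible here: the classical estimator costs n + 1 evaluations per gradient but is accurate up to the noise level, while each randomized sample costs only two evaluations but introduces additional directional variance.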

Keywords: finite difference approximation; gradient descent; randomized
Date: 2022
Citations: 1 (tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/ijoc.2022.1218 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:34:y:2022:i:5:p:2384-2388



Handle: RePEc:inm:orijoc:v:34:y:2022:i:5:p:2384-2388