Technical Note—On the Convergence Rate of Stochastic Approximation for Gradient-Based Stochastic Optimization
Jiaqiao Hu and Michael C. Fu
Additional contact information
Jiaqiao Hu: Department of Applied Mathematics and Statistics, State University of New York at Stony Brook, Stony Brook, New York 11794
Michael C. Fu: Robert H. Smith School of Business & Institute for Systems Research, University of Maryland, College Park, Maryland 20742
Operations Research, 2025, vol. 73, issue 2, 1143-1150
Abstract:
We consider stochastic optimization via gradient-based search. Under a stochastic approximation framework, we apply a recently developed convergence rate analysis to provide a new finite-time error bound for a class of problems with convex differentiable structures. For noisy black-box functions, our main result allows us to derive finite-time bounds in the setting where the gradients are estimated via finite-difference estimators, including those based on randomized directions such as the simultaneous perturbation stochastic approximation algorithm. In particular, the convergence rate analysis sheds light on when, in terms of problem dimension and noise level, it may be advantageous to use such randomized gradient estimates.
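As an informal illustration of the setting described in the abstract (not taken from the paper itself), the following Python sketch contrasts the two gradient estimators mentioned there inside a basic stochastic approximation recursion x_{k+1} = x_k - a_k * g_hat(x_k): a central finite-difference estimator, which requires 2d noisy function evaluations per iteration in dimension d, and an SPSA-style estimator based on a random Rademacher direction, which requires only 2 evaluations regardless of d. The objective, gain sequence, and perturbation sizes below are arbitrary illustrative choices, not those analyzed in the article.

```python
import numpy as np

def noisy_f(x, rng, noise=0.1):
    """Hypothetical noisy black-box objective: convex quadratic + Gaussian noise."""
    return float(x @ x) + noise * rng.standard_normal()

def fd_gradient(x, c, rng):
    """Central finite-difference estimate: 2*d evaluations of the noisy objective."""
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = c
        g[i] = (noisy_f(x + e, rng) - noisy_f(x - e, rng)) / (2.0 * c)
    return g

def spsa_gradient(x, c, rng):
    """SPSA-style estimate: 2 evaluations total, one random Rademacher direction."""
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    df = noisy_f(x + c * delta, rng) - noisy_f(x - c * delta, rng)
    return df / (2.0 * c * delta)

def sa_run(grad_est, x0, n_iter=2000, seed=0):
    """Stochastic approximation recursion: x_{k+1} = x_k - a_k * g_hat(x_k)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        a_k = 1.0 / k           # decaying gain sequence (illustrative choice)
        c_k = 1.0 / k ** (1/6)  # decaying perturbation size (illustrative choice)
        x = x - a_k * grad_est(x, c_k, rng)
    return x

x0 = np.ones(10)
print("finite differences:", np.linalg.norm(sa_run(fd_gradient, x0)))
print("SPSA:              ", np.linalg.norm(sa_run(spsa_gradient, x0)))
```

The dimension-free per-iteration cost of the SPSA estimator is what creates the trade-off the abstract alludes to: whether the cheaper but noisier randomized estimate is preferable depends on the problem dimension and the noise level.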
Keywords: Simulation; stochastic approximation; convergence rate; finite-time analysis; finite differences; random directions; simultaneous perturbation
Date: 2025
Downloads: http://dx.doi.org/10.1287/opre.2023.0055 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:2:p:1143-1150