A Flocking-Based Approach for Distributed Stochastic Optimization
Shi Pu and
Alfredo Garcia
Additional contact information
Shi Pu: School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85281
Alfredo Garcia: Department of Industrial and Systems Engineering, Texas A&M University, College Station, Texas 77843
Operations Research, 2018, vol. 66, issue 1, 267-281
Abstract:
In recent years, the paradigm of cloud computing has emerged as an architecture for computing that makes use of distributed (networked) computing resources. In this paper, we consider a distributed algorithmic scheme for stochastic optimization that relies on modest communication requirements among processors and, most importantly, does not require synchronization. Specifically, we analyze a scheme with N > 1 independent threads, each implementing a stochastic gradient algorithm. The threads are coupled via a perturbation of the gradient (with attractive and repulsive forces), in a manner similar to mathematical models of flocking, swarming, and other group formations found in nature; the coupling imposes only mild communication requirements. When the objective function is convex, we show that this flocking-like approach to distributed stochastic optimization provides a noise reduction effect similar to that of a centralized stochastic gradient algorithm based upon the average of N gradient samples at each step. The distributed nature of flocking makes it an appealing computational alternative: when the overhead related to gathering N samples and synchronizing is not negligible, we show that the flocking implementation outperforms the centralized algorithm. When the objective function is not convex, the flocking-based approach seems better suited to escaping locally optimal solutions, because the repulsive force enforces a certain level of diversity in the set of candidate solutions. Here again, we show that the noise reduction effect is similar to that associated with a centralized stochastic gradient algorithm averaging N gradient samples at each step.
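The coupling described in the abstract is straightforward to prototype. Below is a minimal Python sketch: N threads (simulated sequentially here; a real implementation would run them asynchronously) each take noisy gradient steps on a toy quadratic objective, perturbed by an attraction-repulsion force toward the other threads. The kernel a - b*exp(-||d||^2/c), the toy objective, and all parameter values are illustrative assumptions for this sketch, not necessarily the exact perturbation analyzed in the paper.

import numpy as np

def noisy_grad(x, x_star, rng, sigma=1.0):
    # Stochastic gradient of the toy objective f(x) = 0.5*||x - x_star||^2,
    # corrupted by additive Gaussian noise.
    return (x - x_star) + sigma * rng.standard_normal(x.shape)

def flocking_sgd(N=10, dim=2, steps=2000, gamma=0.05, a=0.1, b=1.0, c=0.5, seed=0):
    # N coupled stochastic-gradient threads. Each update adds an
    # attraction-repulsion ("flocking") force: attractive at long range,
    # repulsive at short range, which keeps the candidate set diverse.
    rng = np.random.default_rng(seed)
    x_star = np.ones(dim)                    # optimum of the toy objective
    X = rng.standard_normal((N, dim)) * 5.0  # initial candidate solutions
    for _ in range(steps):
        X_new = np.empty_like(X)
        for i in range(N):
            d = X[i] - X                                  # displacements to all threads
            dist2 = np.sum(d * d, axis=1, keepdims=True)
            # Net force on thread i; the i == i term is zero since d = 0 there.
            force = -np.sum(d * (a - b * np.exp(-dist2 / c)), axis=0)
            X_new[i] = X[i] - gamma * (noisy_grad(X[i], x_star, rng) - force)
        X = X_new
    return X.mean(axis=0)  # swarm average as the final estimate

if __name__ == "__main__":
    print("estimate:", flocking_sgd(), "(true optimum: all ones)")

Run over many seeds, the variance of the swarm-average estimate shrinks roughly like that of a centralized method averaging N gradient samples per step, which is the noise reduction effect described above; note that each thread only ever needs the current positions of the other threads, never their gradient samples.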
Keywords: stochastic optimization; distributed optimization; flocking
Date: 2018
Downloads: https://doi.org/10.1287/opre.2017.1666 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:66:y:2018:i:1:p:267-281
More articles in Operations Research from INFORMS.