A Distributed Conjugate Gradient Online Learning Method over Networks
Cuixia Xu,
Junlong Zhu,
Youlin Shang and
Qingtao Wu
Complexity, 2020, vol. 2020, 1-13
Abstract:
In a distributed online optimization problem with a convex constraint set over an undirected multiagent network, the local objective functions are convex and vary over time. Most existing methods for this problem are based on the (steepest) gradient descent method, whose convergence slows as the number of iterations increases. To accelerate convergence, we present a distributed online conjugate gradient algorithm in which, unlike in a gradient method, the search directions are a set of mutually conjugate vectors and the step sizes are obtained through an exact line search. We analyze the convergence of the algorithm theoretically and obtain a regret bound of $O(\sqrt{T})$, where T is the number of iterations. Finally, numerical experiments conducted on a sensor network demonstrate the performance of the proposed algorithm.
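The following is a minimal sketch of the kind of update the abstract describes, not the authors' exact pseudocode. It assumes time-varying quadratic local losses f_{i,t}(x) = 0.5*||A_i x - b_{i,t}||^2, a ring network with doubly stochastic mixing weights, Fletcher-Reeves conjugacy, an exact line search (closed form for quadratics), and projection onto a Euclidean ball as the convex constraint; all of these specific choices (including the helper project_ball) are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch of a distributed online conjugate gradient step.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T, radius = 8, 5, 200, 10.0

# Doubly stochastic mixing matrix W for a ring graph: each agent averages
# with its two neighbours and itself (an illustrative network choice).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_agents] = 1 / 3
    W[i, (i + 1) % n_agents] = 1 / 3

A = rng.standard_normal((n_agents, dim, dim))   # fixed local sensing matrices
x = np.zeros((n_agents, dim))                   # local iterates
d = np.zeros((n_agents, dim))                   # conjugate search directions
g_prev = np.ones((n_agents, dim))               # previous local gradients

def project_ball(v, r=radius):
    """Project v onto the Euclidean ball of radius r (the convex constraint)."""
    nrm = np.linalg.norm(v)
    return v if nrm <= r else v * (r / nrm)

for t in range(T):
    # Drifting targets make the local losses time-varying (online setting).
    b = A @ rng.standard_normal(dim) + 0.1 * rng.standard_normal((n_agents, dim))
    v = W @ x                                   # consensus (mixing) step with neighbours
    for i in range(n_agents):
        g = A[i].T @ (A[i] @ v[i] - b[i])       # gradient of f_{i,t} at the mixed point
        beta = (g @ g) / max(g_prev[i] @ g_prev[i], 1e-12)  # Fletcher-Reeves coefficient
        d[i] = -g + beta * d[i]                 # conjugate direction instead of -g alone
        Ad = A[i] @ d[i]
        alpha = -(g @ d[i]) / max(Ad @ Ad, 1e-12)  # exact line search for a quadratic loss
        x[i] = project_ball(v[i] + alpha * d[i])   # constrained local update
        g_prev[i] = g
```

For non-quadratic losses the closed-form step size above would be replaced by a line search (e.g., backtracking); the consensus step, conjugate direction, and projection would stay the same.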
Date: 2020
Downloads:
http://downloads.hindawi.com/journals/8503/2020/1390963.pdf (application/pdf)
http://downloads.hindawi.com/journals/8503/2020/1390963.xml (text/xml)
Persistent link: https://EconPapers.repec.org/RePEc:hin:complx:1390963
DOI: 10.1155/2020/1390963