EconPapers

Robust Power Management via Learning and Game Design

Zhengyuan Zhou, Panayotis Mertikopoulos, Aris L. Moustakas, Nicholas Bambos and Peter Glynn
Additional contact information
Zhengyuan Zhou: Department of Technology, Operations, and Statistics, Stern School of Business, New York University, New York, New York 10012
Panayotis Mertikopoulos: University Grenoble Alpes, CNRS, Grenoble INP, Inria, LIG, F-38000 Grenoble, France
Aris L. Moustakas: Department of Physics, University of Athens, Athens, Greece
Nicholas Bambos: Department of Management Science and Engineering, Stanford University, Stanford, California 94305
Peter Glynn: Department of Management Science and Engineering, Stanford University, Stanford, California 94305

Operations Research, 2021, vol. 69, issue 1, 331-345

Abstract: We consider the target-rate power management problem for wireless networks and propose two simple, distributed power management schemes that regulate power in a provably robust manner by efficiently leveraging past information. Both schemes are obtained via a combined approach of learning and “game design,” where we (1) design a game with suitable payoff functions such that the optimal joint power profile in the original power management problem is the unique Nash equilibrium of the designed game; and (2) derive distributed power management algorithms by directing the network’s users to employ a no-regret learning algorithm to maximize their individual utility over time. To establish convergence, we focus on the well-known online eager gradient descent learning algorithm in the class of weighted strongly monotone games. In this class of games, we show that when players only have access to imperfect stochastic feedback, multiagent online eager gradient descent converges to the unique Nash equilibrium in mean square at an O(1/T) rate. In the context of power management in static networks, we show that the designed games are weighted strongly monotone if the network is feasible (i.e., when all users can concurrently attain their target rates). This allows us to derive a geometric convergence rate to the joint optimal transmission power. More importantly, in stochastic networks where channel quality fluctuates over time, the designed games are also weighted strongly monotone, and the proposed algorithms converge in mean square to the joint optimal transmission power at an O(1/T) rate, even when the network is only feasible on average (i.e., users may be unable to meet their requirements with positive probability). This comes in stark contrast to existing algorithms (such as the seminal Foschini–Miljanic algorithm and its variants), which may fail to converge altogether.
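The abstract's core mechanism, players running online gradient descent on a strongly monotone game with noisy feedback, can be sketched in a few lines. The quadratic game, payoff parameters, noise level, and 1/t step size below are all illustrative assumptions chosen so the unique Nash equilibrium has a closed form; they are not the paper's actual designed power-management game.

```python
import numpy as np

# Hypothetical 2-player quadratic game (illustration only, not the paper's game):
# u_i(x) = -(x_i - a_i)^2 - b * x_i * x_{-i}.
# Its pseudo-gradient operator is strongly monotone whenever |b| < 2,
# so the game has a unique Nash equilibrium.
a = np.array([1.0, 2.0])
b = 0.5

def noisy_gradient(x, rng, noise=0.1):
    """Imperfect stochastic feedback: each player's payoff gradient plus noise."""
    grad = -2.0 * (x - a) - b * x[::-1]   # d u_i / d x_i for i = 1, 2
    return grad + noise * rng.standard_normal(2)

# Closed-form Nash equilibrium: solve 2 x_i + b x_{-i} = 2 a_i for both players.
A = np.array([[2.0, b], [b, 2.0]])
x_star = np.linalg.solve(A, 2.0 * a)

rng = np.random.default_rng(0)
x = np.zeros(2)
for t in range(1, 20001):
    # Each player independently ascends its own noisy payoff gradient with a
    # 1/t step size -- the regime in which strongly monotone games yield
    # O(1/T) mean-square convergence to the unique equilibrium.
    x = x + (1.0 / t) * noisy_gradient(x, rng)

print(np.linalg.norm(x - x_star))
```

The update is fully distributed: each player uses only its own (noisy) gradient, never the other player's payoff, which mirrors the distributed power-management setting described in the abstract.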

Keywords: power management; wireless network; online learning; Nash equilibrium (search for similar items in EconPapers)
Date: 2021

Downloads: https://doi.org/10.1287/opre.2020.1996 (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:69:y:2021:i:1:p:331-345


More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

Page updated 2025-03-19
Handle: RePEc:inm:oropre:v:69:y:2021:i:1:p:331-345