EconPapers    

Integrated Online Learning and Adaptive Control in Queueing Systems with Uncertain Payoffs

Wei-Kang Hsu, Jiaming Xu, Xiaojun Lin and Mark R. Bell
Additional contact information
Wei-Kang Hsu: School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907
Jiaming Xu: The Fuqua School of Business, Duke University, Durham, North Carolina 27708
Xiaojun Lin: School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907
Mark R. Bell: School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907

Operations Research, 2022, vol. 70, issue 2, 1166-1181

Abstract: We study task assignment in online service platforms, where unlabeled clients arrive according to a stochastic process and each client brings a random number of tasks. As tasks are assigned to servers, they produce client/server-dependent random payoffs. The goal of the system operator is to maximize the expected payoff per unit time subject to the servers’ capacity constraints. However, both the statistics of the dynamic client population and the client-specific payoff vectors are unknown to the operator. Thus, the operator must design task-assignment policies that integrate adaptive control (of the queueing system) with online learning (of the clients’ payoff vectors). A key challenge in such integration is how to account for the nontrivial closed-loop interactions between the queueing process and the learning process, which may significantly degrade system performance. We propose a new utility-guided online learning and task-assignment algorithm that seamlessly integrates learning with control to address this difficulty. Our analysis shows that, compared with an oracle that knows all client dynamics and payoff vectors beforehand, the gap in the expected payoff per unit time of our proposed algorithm can be analytically bounded by three terms, which separately capture the impact of client-dynamic uncertainty, the impact of client-server payoff uncertainty, and the loss incurred by backlogged clients in the system. Further, our bound holds for any finite time horizon. Through simulations, we show that our proposed algorithm significantly outperforms a myopic matching policy and a standard queue-length-based policy that does not explicitly address the closed-loop interactions between queueing and learning.
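
The abstract describes coupling optimistic (bandit-style) payoff learning with congestion-aware task assignment under server capacity constraints. As a rough, illustrative sketch only (not the paper's algorithm), the following Python snippet pairs per-client UCB payoff estimates with server prices that rise when assigned load exceeds capacity; the arrival model, Bernoulli payoffs, capacities, step size, and all names are assumptions made purely for this example.

    # Hypothetical sketch: UCB payoff learning + price-guided task assignment.
    # All constants and the payoff/arrival model are illustrative assumptions.
    import math
    import random

    NUM_SERVERS = 3
    SERVER_CAPACITY = [1.0, 1.0, 1.0]   # assumed mean service capacities
    STEP_SIZE = 0.05                    # price (virtual-queue) update step

    class Client:
        def __init__(self, true_payoffs, num_tasks):
            self.true_payoffs = true_payoffs         # unknown to the operator
            self.tasks_left = num_tasks
            self.sum_payoff = [0.0] * NUM_SERVERS    # empirical payoff sums
            self.pulls = [0] * NUM_SERVERS           # assignments per server

        def ucb(self, j, t):
            """Optimistic estimate of this client's payoff on server j."""
            if self.pulls[j] == 0:
                return 1.0                           # force initial exploration
            mean = self.sum_payoff[j] / self.pulls[j]
            bonus = math.sqrt(2.0 * math.log(t + 1) / self.pulls[j])
            return min(1.0, mean + bonus)

    def run(horizon=5000, arrival_prob=0.3, seed=0):
        rng = random.Random(seed)
        backlog = []                                 # clients with tasks left
        price = [0.0] * NUM_SERVERS                  # congestion prices
        total_payoff = 0.0

        for t in range(horizon):
            # A client may arrive with a random payoff vector and task count.
            if rng.random() < arrival_prob:
                payoffs = [rng.random() for _ in range(NUM_SERVERS)]
                backlog.append(Client(payoffs, rng.randint(1, 5)))

            # Utility-guided assignment: each waiting client sends one task to
            # the server maximizing optimistic payoff minus congestion price.
            load = [0.0] * NUM_SERVERS
            for c in list(backlog):
                j = max(range(NUM_SERVERS),
                        key=lambda s: c.ucb(s, t) - price[s])
                reward = 1.0 if rng.random() < c.true_payoffs[j] else 0.0
                c.sum_payoff[j] += reward
                c.pulls[j] += 1
                c.tasks_left -= 1
                total_payoff += reward
                load[j] += 1.0
                if c.tasks_left == 0:
                    backlog.remove(c)

            # Prices increase when a server's load exceeds its capacity.
            for j in range(NUM_SERVERS):
                price[j] = max(0.0,
                               price[j] + STEP_SIZE * (load[j] - SERVER_CAPACITY[j]))

        return total_payoff / horizon

    if __name__ == "__main__":
        print("average payoff per unit time:", run())

In this toy setup the price term discourages assignments to overloaded servers while the UCB bonus keeps under-sampled client/server pairs attractive, which is the kind of learning/control interaction the abstract highlights.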

Keywords: stochastic models; online learning; online service platforms; convex optimization; decentralized algorithms
Date: 2022

Downloads: http://dx.doi.org/10.1287/opre.2021.2100 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:70:y:2022:i:2:p:1166-1181


More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

Page updated 2025-03-19
Handle: RePEc:inm:oropre:v:70:y:2022:i:2:p:1166-1181