
Learning optimal admission control in partially observable queueing networks

Jonatha Anselmi, Bruno Gaujal and Louis-Sébastien Rebuffi
Additional contact information
Jonatha Anselmi: Univ. Grenoble Alpes
Bruno Gaujal: Univ. Grenoble Alpes
Louis-Sébastien Rebuffi: Univ. Grenoble Alpes

Queueing Systems: Theory and Applications, 2024, vol. 108, issue 1, No 2, 79 pages

Abstract: We develop an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, optimality refers to the average holding/rejection cost in infinite horizon, and efficiency is with respect to regret performance. While reinforcement learning in partially observable Markov decision processes (MDPs) is prohibitively expensive in general, we show that the regret at time $T$ induced by our algorithm is $\tilde{O}\left(\sqrt{T \log(1/\rho)}\right)$, where $\rho \in (0,1)$ is connected to the mixing time of the underlying MDP. In contrast with existing regret bounds, ours does not depend on the diameter $D$ of the underlying MDP, which in most queueing systems is at least exponential in $S$, the maximal number of jobs in the network. Instead, the role of the diameter is played by the $\log(1/\rho)$ term, which may depend on $S$, but we find that such dependence is “minimal”. For acyclic or hyperstable queueing networks, we prove that $\log(1/\rho) = O(S)$, which overall yields a regret bound of order $\tilde{O}\left(\sqrt{TS}\right)$. In the general case, numerical simulations support the claim that the term $\log(1/\rho)$ remains extremely small compared to the diameter. The novelty of our approach is to leverage Norton’s theorem for queueing networks and an efficient reinforcement learning algorithm for MDPs with the structure of birth-and-death processes.
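Note (illustrative, not from the article): loosely, the regret at time $T$ compares the learner’s cumulative cost with $T$ times the optimal average cost $g^*$, i.e., $R(T) = \sum_{t=1}^{T} c_t - T g^*$. The sketch below does not implement the paper’s learning algorithm; it only solves, by relative value iteration with *known* parameters, the kind of birth-and-death admission-control MDP that Norton’s theorem reduces a product-form network to. All numbers (arrival rate `lam`, service rate `mu`, buffer size `S`, costs) are hypothetical.

```python
import numpy as np

lam, mu = 0.7, 1.0                 # arrival / service rates (hypothetical)
S = 20                             # maximal number of jobs in the network
hold_cost, rej_cost = 1.0, 10.0    # holding cost rate, per-rejection cost

unif = lam + mu                    # uniformization constant

def q_values(h):
    """One-step Bellman backup Q(s, a), a in {0: reject, 1: admit}."""
    q = np.empty((S + 1, 2))
    for s in range(S + 1):
        for a in (0, 1):
            admit = (a == 1) and (s < S)
            s_up = s + 1 if admit else s       # state after an arrival
            s_dn = max(s - 1, 0)               # state after a departure
            # expected cost per uniformized step: holding cost plus, if
            # rejecting, the rejection cost paid at arrival epochs
            cost = (hold_cost * s + (0.0 if admit else rej_cost * lam)) / unif
            q[s, a] = cost + (lam * h[s_up] + mu * h[s_dn]) / unif
    return q

h = np.zeros(S + 1)
for _ in range(100_000):
    q = q_values(h)
    h_new = q.min(axis=1)
    gain = h_new[0] - h[0]         # estimate of the optimal average cost
    h_new -= h_new[0]              # relative values, pinned at state 0
    if np.max(np.abs(h_new - h)) < 1e-12:
        h = h_new
        break
    h = h_new

policy = q_values(h).argmin(axis=1)   # optimal policy here is a threshold
print(f"optimal average cost ≈ {gain:.4f}")
print(f"admit while the queue length is below {int(policy.sum())}")
```

With these hypothetical numbers the sketch prints a small admission threshold; the paper’s actual contribution is to *learn* such a policy from arrival and departure observations alone, with the stated $\tilde{O}\left(\sqrt{T \log(1/\rho)}\right)$ regret guarantee.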

Keywords: Product-form queueing networks; Norton’s theorem; Admission control; Reinforcement learning; Regret. MSC codes: 60K25; 60J10; 68M20
Date: 2024

Downloads: (external link)
http://link.springer.com/10.1007/s11134-024-09917-y Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:spr:queues:v:108:y:2024:i:1:d:10.1007_s11134-024-09917-y

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/11134/

DOI: 10.1007/s11134-024-09917-y

Queueing Systems: Theory and Applications is currently edited by Sergey Foss

More articles in Queueing Systems: Theory and Applications from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-03-20
Handle: RePEc:spr:queues:v:108:y:2024:i:1:d:10.1007_s11134-024-09917-y