Asymptotic Optimality and Rates of Convergence of Quantized Stationary Policies in Continuous-Time Markov Decision Processes

Xiao Wu, Yanqiu Tang and Luca Pancioni

Discrete Dynamics in Nature and Society, 2022, vol. 2022, 1-11

Abstract: This paper is concerned with the asymptotic optimality of quantized stationary policies for continuous-time Markov decision processes (CTMDPs) in Polish spaces with state-dependent discount factors, where the transition rates and reward rates are allowed to be unbounded. Using the dynamic programming approach, we first establish the discounted optimality equation and the existence of its solutions. We then prove the existence of optimal deterministic stationary policies under suitable conditions, with more concise proofs. Furthermore, we discretize the action space and construct a sequence of quantized policies that approximate the optimal stationary policies of the CTMDPs, obtaining both an approximation result and rates of convergence for the expected discounted rewards of the quantized stationary policies. We also give an iteration algorithm for the approximate optimal policies. Finally, an example illustrates the asymptotic optimality.
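The quantization idea in the abstract can be illustrated with a small sketch: restrict a continuous action space to a finite uniform grid, solve the resulting finite-action problem by value iteration, and observe the value improve and stabilize as the grid is refined. The model below (two states, actions in [0, 1], the specific rate and reward functions, and the uniformization step converting the CTMDP to a discrete-time MDP) is entirely a hypothetical toy example, not the construction used in the paper.

```python
import math

# Toy CTMDP: two states {0, 1}, continuous action space A = [0, 1].
# All rates, rewards, and constants here are illustrative assumptions.
ALPHA = 1.0   # discount factor
LAMBDA = 2.0  # uniformization constant >= sup of total transition rates

def rate(s, a):
    """Transition rate out of state s under action a (hypothetical model)."""
    return 1.0 + a if s == 0 else 2.0 - a

def reward(s, a):
    """Reward rate; concave in a, so the optimum lies inside [0, 1]."""
    return s + a - a * a

def quantizer(n):
    """Uniform n-level quantization of the action interval [0, 1]."""
    return [i / (n - 1) for i in range(n)]

def value_iteration(actions, tol=1e-10, max_iter=10_000):
    """Value iteration for the uniformized discrete-time MDP,
    with actions restricted to a finite (quantized) set."""
    beta = LAMBDA / (ALPHA + LAMBDA)  # effective discount per jump
    v = [0.0, 0.0]
    for _ in range(max_iter):
        v_new = []
        for s in (0, 1):
            best = -math.inf
            for a in actions:
                p_move = rate(s, a) / LAMBDA  # prob. of jumping to state 1 - s
                ev = p_move * v[1 - s] + (1.0 - p_move) * v[s]
                best = max(best, reward(s, a) / (ALPHA + LAMBDA) + beta * ev)
            v_new.append(best)
        if max(abs(x - y) for x, y in zip(v, v_new)) < tol:
            return v_new
        v = v_new
    return v

# Finer quantizers can only improve the maximized value (coarser grids are
# subsets here), and the improvement shrinks as the grid is refined.
for n in (2, 5, 20, 200):
    v = value_iteration(quantizer(n))
    print(n, [round(x, 6) for x in v])
```

In this sketch the 2-point grid {0, 1} is contained in the 200-point grid, so the value is monotone in the refinement; the gap between successive refinements shrinks, mirroring the convergence-rate result the abstract describes for quantized stationary policies.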

Date: 2022

Downloads: (external link)
http://downloads.hindawi.com/journals/ddns/2022/1080946.pdf (application/pdf)
http://downloads.hindawi.com/journals/ddns/2022/1080946.xml (application/xml)



Persistent link: https://EconPapers.repec.org/RePEc:hin:jnddns:1080946

DOI: 10.1155/2022/1080946


More articles in Discrete Dynamics in Nature and Society from Hindawi
