Technical Note: On Ordinal Comparison of Policies in Markov Reward Processes
H. S. Chang (Sogang University)
Journal of Optimization Theory and Applications, 2004, vol. 122, issue 1, No 9, 207-217
Abstract:
From large deviations theory, an asymptotic exponential convergence rate is well known for ordinal comparison, that is, for selecting the true best solution from the candidate solutions' sample means. This note supplements the theory developed by Dai, within the framework of ergodic Markov reward processes, by establishing an asymptotic exponential convergence rate for ε-ordinal comparison of policies under the infinite-horizon average reward criterion.
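A minimal illustrative sketch, not taken from the paper: in Python, the comparison of policies by sample means can be mimicked by simulating finite-horizon average rewards of a small ergodic Markov reward process under each candidate policy and selecting the policy with the largest sample mean (the ε-tolerance refinement studied in the note is omitted for brevity). The transition matrices, rewards, horizons, and policy names below are hypothetical placeholders chosen only for illustration.

# Illustrative sketch (assumptions, not the paper's model): ordinal comparison
# of two hypothetical policies in a small ergodic Markov reward process via
# finite-horizon sample means of the average reward.
import numpy as np

rng = np.random.default_rng(0)

# Each candidate policy induces a Markov chain on states {0, 1} with a
# state-dependent reward; true average rewards are 2/3 for pi_1 and 0.6 for pi_2.
policies = {
    "pi_1": {"P": np.array([[0.9, 0.1], [0.2, 0.8]]), "r": np.array([1.0, 0.0])},
    "pi_2": {"P": np.array([[0.5, 0.5], [0.5, 0.5]]), "r": np.array([0.6, 0.6])},
}

def sample_mean_reward(P, r, horizon, rng):
    # Simulate one trajectory of length `horizon` and return its sample-mean reward.
    state, total = 0, 0.0
    for _ in range(horizon):
        total += r[state]
        state = rng.choice(len(r), p=P[state])
    return total / horizon

def ordinal_comparison(policies, horizon, rng):
    # Select the policy whose finite-horizon sample-mean reward is largest.
    means = {name: sample_mean_reward(p["P"], p["r"], horizon, rng)
             for name, p in policies.items()}
    return max(means, key=means.get), means

# Under ergodicity, the probability that the sample-mean ordering disagrees with
# the true ordering of long-run average rewards decays exponentially in the
# horizon; this is the type of convergence rate the note establishes for the
# ε-ordinal comparison of policies.
for T in (10, 100, 1000):
    errors = sum(ordinal_comparison(policies, T, rng)[0] != "pi_1"
                 for _ in range(200))
    print(f"T={T:5d}  empirical misselection frequency: {errors / 200:.3f}")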
Keywords: Ordinal comparisons; large deviations; stochastic simulations; Markov reward processes
Date: 2004
Downloads: http://link.springer.com/10.1023/B:JOTA.0000041736.82051.f1 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joptap:v:122:y:2004:i:1:d:10.1023_b:jota.0000041736.82051.f1
DOI: 10.1023/B:JOTA.0000041736.82051.f1
Journal of Optimization Theory and Applications is currently edited by Franco Giannessi and David G. Hull