Total Reward Variance in Discrete and Continuous Time Markov Chains
Karel Sladký (Academy of Sciences of the Czech Republic) and
Nico M. Dijk (University of Amsterdam)
A chapter in Operations Research Proceedings 2004 (Springer, 2005), pp. 319–326
Abstract:
This note studies the variance of total cumulative rewards for Markov reward chains in both discrete and continuous time. It is shown that parallel results can be obtained for both cases. First, explicit formulae are presented for the variance within finite time. Next, the infinite time horizon is considered. Most notably, it is concluded that the variance has a linear growth rate. Explicit expressions, related to the standard average reward case, are provided to compute this growth rate.
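The linear growth rate of the total-reward variance can be illustrated numerically. The sketch below uses a hypothetical two-state discrete-time Markov reward chain (the transition matrix and rewards are illustrative assumptions, not taken from the chapter) and estimates the variance of the cumulative reward by Monte Carlo simulation; dividing by the horizon shows the variance-per-step settling toward a constant rate:

```python
import numpy as np

# Hypothetical two-state Markov reward chain (illustrative values only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition probabilities
r = np.array([1.0, 3.0])     # one-step reward earned in each state

def total_reward_variance(n_steps, n_paths=20000, start=0, seed=0):
    """Monte Carlo estimate of the variance of the total cumulative
    reward accrued over n_steps transitions."""
    rng = np.random.default_rng(seed)
    states = np.full(n_paths, start)
    totals = np.zeros(n_paths)
    for _ in range(n_steps):
        totals += r[states]
        # Two states only: jump to state 1 whenever the uniform draw
        # exceeds the current row's probability of moving to state 0.
        states = (rng.random(n_paths) >= P[states, 0]).astype(int)
    return totals.var()

# Variance per step approaches a constant growth rate as the horizon grows.
for n in (50, 100, 200):
    print(n, total_reward_variance(n) / n)
```

For this particular chain the asymptotic rate can also be computed in closed form from the stationary distribution and the autocovariances of the centered reward, which is the kind of average-reward-style expression the chapter derives in general.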
Date: 2005
Citations: 2 (in EconPapers)
Persistent link: https://EconPapers.repec.org/RePEc:spr:oprchp:978-3-540-27679-1_40
Ordering information: This item can be ordered from
http://www.springer.com/9783540276791
DOI: 10.1007/3-540-27679-3_40
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.