Technical Note—A Markov Chain Partitioning Algorithm for Computing Steady State Probabilities
Theodore J. Sheskin (Cleveland State University, Cleveland, Ohio)
Operations Research, 1985, vol. 33, issue 1, 228-235
Abstract:
This paper presents a partitioning algorithm for recursively computing the steady state probabilities for a finite, irreducible Markov chain or a Markov process. The algorithm contains a matrix reduction routine, followed by a vector enlargement routine. The matrix reduction routine repeatedly partitions the transition matrix for the Markov chain, creating a sequence of smaller, reduced transition matrices. The vector enlargement routine computes the components of the steady state probability vector by starting with the smallest reduced matrix and working sequentially toward the original transition matrix. This procedure produces an exact solution for the steady state probabilities. No special structure is required for the Markov chain. In theory, the procedure imposes no limit on the size of the largest Markov chain to which the partitioning algorithm can be applied. In practice, roundoff errors may require modifications to the method.
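The abstract outlines a two-routine state-reduction scheme. The following sketch, in Python with NumPy, illustrates that structure under explicit assumptions: the function name, the choice to censor the last state at each reduction step, and the NumPy formulation are illustrative and are not taken from the paper itself.

```python
import numpy as np

def steady_state_by_partitioning(P):
    """Sketch of a partition-and-reduce computation of steady state
    probabilities for a finite, irreducible Markov chain.

    Matrix reduction: repeatedly partition the transition matrix so the
    last state is censored out, producing a sequence of smaller stochastic
    matrices.  Vector enlargement: recover the probability of each censored
    state by back substitution, then normalize.
    """
    P = np.array(P, dtype=float)
    n = P.shape[0]
    reduced = [P]                      # keep each reduced matrix for the enlargement pass

    # --- matrix reduction routine ---
    for m in range(n, 1, -1):
        Q = reduced[-1]
        B = Q[:m-1, :m-1]              # transitions among the retained states
        c = Q[:m-1, m-1]               # retained states -> censored state
        r = Q[m-1, :m-1]               # censored state -> retained states
        d = Q[m-1, m-1]                # self-transition of the censored state
        # Censor the last state: redistribute its probability flow.
        reduced.append(B + np.outer(c, r) / (1.0 - d))

    # --- vector enlargement routine ---
    x = np.array([1.0])                # unnormalized solution of the 1-state chain
    for m in range(2, n + 1):
        Q = reduced[n - m]             # the m-state reduced matrix
        c = Q[:m-1, m-1]
        d = Q[m-1, m-1]
        x = np.append(x, x @ c / (1.0 - d))

    return x / x.sum()                 # normalize to obtain the probability vector
```

As a quick check, for the two-state chain P = [[0.9, 0.1], [0.5, 0.5]] this sketch returns approximately [0.8333, 0.1667], matching the closed-form stationary distribution (5/6, 1/6). Regarding the abstract's remark on roundoff: a common modification in this family of state-reduction methods is to compute the divisor 1 - d as the sum of the censored row's off-diagonal entries, which avoids subtractive cancellation; whether the paper proposes that particular change is not stated here.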
Keywords: partitioning algorithm for steady state probabilities
Date: 1985
Downloads: http://dx.doi.org/10.1287/opre.33.1.228 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:33:y:1985:i:1:p:228-235