A Method to Calculate Steady-State Distributions of Large Markov Chains by Aggregating States
Brion N. Feinberg and
Samuel S. Chiu
Brion N. Feinberg: Stanford University, Stanford, California
Samuel S. Chiu: Stanford University, Stanford, California
Operations Research, 1987, vol. 35, issue 2, 282-290
Abstract:
This paper develops an efficient iterative algorithm for calculating the steady-state distribution of nearly all irreducible discrete-time Markov chains. Computational experience suggests that, for large Markovian systems (more than 130 states), the proposed algorithm can be ten times faster than standard Gaussian elimination in finding solutions to an accuracy of 0.1%. The algorithm is developed in three stages. First, we develop a very efficient algorithm for determining the steady-state distributions of a restricted class of Markovian systems. Second, we establish a relationship between a general irreducible Markovian system and a system in the restricted class. Finally, we combine these two results to produce an efficient iterative algorithm for solving general Markov systems. The paper concludes with a discussion of the observed performance of the algorithm.
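For context, the problem the paper addresses is solving pi = pi P with the entries of pi summing to 1, for a transition matrix P. The sketch below approximates this by plain power iteration — an illustration of the problem only, not the paper's aggregation-based algorithm (which is given in the full text), and it assumes the chain is aperiodic so that the iteration converges:

```python
import numpy as np

def steady_state(P, tol=1e-10, max_iter=100_000):
    """Approximate the stationary distribution pi of an irreducible,
    aperiodic discrete-time Markov chain with transition matrix P,
    i.e. solve pi = pi @ P with sum(pi) == 1, by power iteration.
    (Illustration only -- not the aggregation algorithm of the paper.)"""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)      # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P              # advance the distribution one step
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# Example: a 2-state chain whose exact stationary distribution is (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(steady_state(P))
```

Because P is row-stochastic, `pi @ P` stays a probability vector, so no renormalization step is needed inside the loop.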
Keywords: 481 network/graphs: aggregation of states in a Markov chain; 567 steady-state distribution of Markov chain
Date: 1987
Citations: 3 (listed in EconPapers)
Downloads: http://dx.doi.org/10.1287/opre.35.2.282 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:35:y:1987:i:2:p:282-290
More articles in Operations Research from INFORMS.