An Iterative Aggregation Procedure for Markov Decision Processes

Roy Mendelssohn
National Marine Fisheries Service, NOAA, Honolulu, Hawaii

Operations Research, 1982, vol. 30, issue 1, 62-73

Abstract: An iterative aggregation procedure is described for solving large-scale, finite-state, finite-action Markov decision processes (MDPs). At each iteration, an aggregate master problem and a sequence of smaller subproblems are solved. The weights used to form the aggregate master problem are based on the estimates from the previous iteration. Each subproblem is a finite-state, finite-action MDP with a reduced state space and unequal row sums. Global convergence of the algorithm is proven under very weak assumptions. The proof relates this technique to other iterative methods that have been suggested for general linear programs.
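
The abstract describes the shape of the method (an aggregate master problem whose weights come from the previous iterate, plus smaller reduced-state subproblems) but not its details. As a rough illustration only, the Python sketch below implements a generic aggregation-disaggregation value-iteration scheme for a small discounted MDP; the fixed state partition, the uniform within-group weights, and the plain value-iteration sweeps standing in for the subproblems are simplifying assumptions, not Mendelssohn's exact master/subproblem decomposition.

# Illustrative sketch only: a generic aggregation-disaggregation
# value-iteration scheme, not the paper's exact algorithm.
import numpy as np

def aggregate_vi(P, R, beta, groups, iters=50, sweeps=3):
    """P: (A, S, S) transition tensor; R: (S, A) rewards; beta: discount
    factor in (0, 1); groups: length-S integer array assigning each state
    to an aggregate group."""
    A, S, _ = P.shape
    G = int(groups.max()) + 1
    v = np.zeros(S)
    for _ in range(iters):
        # Greedy policy with respect to the current value estimate.
        q = R + beta * np.einsum('ast,t->sa', P, v)
        pi = q.argmax(axis=1)
        # Aggregation weights: uniform within each group (the paper
        # instead derives its weights from the previous iterate).
        W = np.zeros((G, S))
        for g in range(G):
            members = np.flatnonzero(groups == g)
            W[g, members] = 1.0 / len(members)
        # Aggregate "master problem": a G-state Markov reward process
        # induced by the greedy policy, solved exactly.
        Ppi = P[pi, np.arange(S), :]          # (S, S) policy transitions
        Rpi = R[np.arange(S), pi]             # (S,) policy rewards
        D = np.zeros((S, G))                  # disaggregation matrix
        D[np.arange(S), groups] = 1.0
        Pg = W @ Ppi @ D
        Rg = W @ Rpi
        Vg = np.linalg.solve(np.eye(G) - beta * Pg, Rg)
        # Disaggregate, then smooth with a few ordinary value-iteration
        # sweeps in place of the paper's reduced-state subproblems.
        v = Vg[groups]
        for _ in range(sweeps):
            v = (R + beta * np.einsum('ast,t->sa', P, v)).max(axis=1)
    return v, pi

In the large-scale setting the paper targets, each group would itself be treated as a reduced MDP and solved separately, rather than being handled by the global value-iteration sweeps used in this toy sketch.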

Keywords: 116 finite state Markov decision processes; 637 linear programming: algorithms
Date: 1982
Citations: 5 (in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.30.1.62 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:30:y:1982:i:1:p:62-73


More articles in Operations Research from INFORMS.

Handle: RePEc:inm:oropre:v:30:y:1982:i:1:p:62-73