EconPapers
Markov: A methodology for the solution of infinite time horizon Markov decision processes

Byron K. Williams

Applied Stochastic Models and Data Analysis, 1988, vol. 4, issue 4, 253-271

Abstract: Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
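The paper's own MARKOV procedures are not reproduced on this page; as a minimal sketch of the policy-improvement technique the abstract mentions, here is a textbook policy-iteration routine (Howard's method) for a finite-state, finite-action, discounted MDP. The data layout (`P[a][s][t]` for transition probabilities, `R[a][s]` for expected immediate rewards) and the successive-approximation evaluation step are illustrative assumptions, not the paper's implementation.

```python
def policy_iteration(P, R, gamma=0.95, tol=1e-9):
    """Policy iteration for a finite, discounted MDP (illustrative sketch).

    P[a][s][t] -- probability of moving from state s to state t under action a
    R[a][s]    -- expected immediate reward for taking action a in state s
    gamma      -- discount factor in (0, 1)
    Returns (policy, v): an optimal action per state and its value function.
    """
    n_actions = len(R)
    n_states = len(R[0])
    policy = [0] * n_states
    v = [0.0] * n_states
    while True:
        # Policy evaluation: value-improvement sweeps until values stabilize.
        while True:
            delta = 0.0
            for s in range(n_states):
                a = policy[s]
                new_v = R[a][s] + gamma * sum(
                    P[a][s][t] * v[t] for t in range(n_states)
                )
                delta = max(delta, abs(new_v - v[s]))
                v[s] = new_v
            if delta < tol:
                break
        # Policy improvement: act greedily with respect to the current values.
        stable = True
        for s in range(n_states):
            best_a = max(
                range(n_actions),
                key=lambda a: R[a][s] + gamma * sum(
                    P[a][s][t] * v[t] for t in range(n_states)
                ),
            )
            if best_a != policy[s]:
                policy[s] = best_a
                stable = False
        if stable:
            return policy, v
```

For undiscounted or infinite-state variants the paper's bound-improvement and process-reduction steps would replace this simple evaluation loop; the sketch covers only the discounted finite case.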

Date: 1988
References: View complete reference list from CitEc

Downloads: (external link)
https://doi.org/10.1002/asm.3150040405

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:wly:apsmda:v:4:y:1988:i:4:p:253-271

Access Statistics for this article

More articles in Applied Stochastic Models and Data Analysis from John Wiley & Sons
Bibliographic data for series maintained by Wiley Content Delivery.

Page updated 2025-03-20
Handle: RePEc:wly:apsmda:v:4:y:1988:i:4:p:253-271