Optimum Policy Regions for Markov Processes with Discounting

Richard D. Smallwood (Stanford University, Stanford, California)

Operations Research, 1966, vol. 14, issue 4, 658-669

Abstract: In many practical situations the discount factor for future rewards and costs is not known precisely. In modeling such situations, this uncertainty is often reflected in a dependence of the optimum policy on the discount factor. We discuss this dependence for the class of finite-state, time-invariant Markov models. A procedure is developed for finding the value of the discount factor at which we are indifferent between two policies. This is then extended to show how a complete description of the optimum-policy regions can be found over any range of the discount factor. Two examples are presented.
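The indifference computation described in the abstract can be sketched numerically. For a fixed policy with transition matrix P and reward vector r, the expected discounted value satisfies v = r + beta * P * v, i.e. v = (I - beta*P)^{-1} r; the indifference discount factor between two policies is a root of the value difference in a given state. The model data and the bisection approach below are illustrative assumptions, not taken from Smallwood's paper (which develops its own procedure):

```python
import numpy as np

# Hypothetical 2-state model; all numbers are illustrative, not from the paper.
# Each policy induces a transition matrix P and a one-step reward vector r.
P_a = np.array([[0.9, 0.1],
                [0.4, 0.6]])
r_a = np.array([1.0, 2.0])

P_b = np.array([[0.5, 0.5],
                [0.2, 0.8]])
r_b = np.array([1.5, 1.0])

def value(P, r, beta):
    """Expected discounted value v = (I - beta*P)^{-1} r."""
    n = len(r)
    return np.linalg.solve(np.eye(n) - beta * P, r)

def indifference_beta(P_a, r_a, P_b, r_b, state=0, lo=0.0, hi=0.999, tol=1e-10):
    """Bisect for a discount factor at which the two policies have equal
    value in the given state (assumes one sign change of the difference)."""
    f = lambda b: value(P_a, r_a, b)[state] - value(P_b, r_b, b)[state]
    if f(lo) * f(hi) > 0:
        return None  # no indifference point bracketed in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With these numbers, policy B is better at small beta (higher immediate reward in state 0) while policy A has the higher long-run average reward, so an indifference point exists in (0, 1); below and above it the two policies define distinct optimum-policy regions of the kind the article characterizes.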

Date: 1966
Citations: 1 (in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.14.4.658 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:14:y:1966:i:4:p:658-669


More articles in Operations Research from INFORMS.

Handle: RePEc:inm:oropre:v:14:y:1966:i:4:p:658-669