A Generalized Discrete Dynamic Programming Model

Richard C. Grinold
Additional contact information
Richard C. Grinold: University of California, Berkeley

Management Science, 1974, vol. 20, issue 7, 1092-1103

Abstract: This paper considers a stationary discrete dynamic programming model that is a generalization of the finite state and finite action Markov programming problem. We specify conditions under which an optimal stationary linear decision rule exists and show how this optimal policy can be calculated using linear programming, policy iteration, or value iteration. In addition, we allow the parameters of the problem to be random variables and indicate when the expected values of these random variables are certainty equivalents.
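To fix ideas, the classical special case the abstract refers to — a finite state, finite action Markov decision problem solved by value iteration — can be sketched as follows. This is a minimal illustration of the standard technique, not the paper's generalized model; the transition probabilities, rewards, and discount factor below are hypothetical.

```python
import numpy as np

def value_iteration(P, R, beta=0.9, tol=1e-8, max_iter=10_000):
    """Classical value iteration for a finite MDP.

    P[a, s, t] = probability of moving from state s to state t under action a
    R[s, a]    = one-period reward for taking action a in state s
    beta       = discount factor in (0, 1)
    """
    n_actions, n_states, _ = P.shape
    v = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[s, a] = immediate reward plus discounted expected continuation value
        q = R + beta * np.einsum("ast,t->sa", P, v)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    # The maximizing action in each state gives an optimal stationary policy
    policy = q.argmax(axis=1)
    return v, policy

# Hypothetical two-state, two-action example
P = np.array([[[0.8, 0.2], [0.3, 0.7]],    # transitions under action 0
              [[0.5, 0.5], [0.1, 0.9]]])   # transitions under action 1
R = np.array([[1.0, 0.0],    # rewards in state 0 for actions 0, 1
              [0.0, 2.0]])   # rewards in state 1 for actions 0, 1
v, policy = value_iteration(P, R)
```

The paper's contribution is to extend this setting so that an optimal *stationary linear* decision rule exists, computable by the same three methods (linear programming, policy iteration, value iteration).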

Date: 1974

Downloads: (external link)
http://dx.doi.org/10.1287/mnsc.20.7.1092 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:ormnsc:v:20:y:1974:i:7:p:1092-1103


More articles in Management Science from INFORMS.
