Myopic Solutions of Markov Decision Processes and Stochastic Games

Matthew J. Sobel (Georgia Institute of Technology, Atlanta, Georgia)

Operations Research, 1981, vol. 29, issue 5, 995-1009

Abstract: Sufficient conditions are presented for a Markov decision process to have a myopic optimum and for a stochastic game to possess a myopic equilibrium point. An optimum (or an equilibrium point) is said to be “myopic” if it can be deduced from an optimum (or an equilibrium point) of a static optimization problem (or a static [Nash] game). The principal conditions are (a) each single period reward is the sum of terms due to the current state and action, (b) each transition probability depends on the action taken but not on the state from which the transition occurs, and (c) an appropriate static optimum (or equilibrium point) is ad infinitum repeatable. These conditions are satisfied by several dynamic oligopoly models and numerous Markov decision processes.
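The separability conditions in the abstract can be made concrete in the discounted case. The sketch below is illustrative only and is not taken from the paper: the finite state and action sets, the discount factor beta, and all variable names are assumptions for the example. It builds a random MDP satisfying conditions (a) and (b) and checks that the action solving the static problem max_a [L(a) + beta * sum_{s'} q(s'|a) K(s')] coincides with the greedy policy obtained from ordinary value iteration.

```python
import numpy as np

# Illustrative sketch (not from the paper): a finite, discounted MDP in which
# the single-period reward separates as r(s, a) = K(s) + L(a)   (condition a)
# and the transition law q(s' | a) does not depend on the current state s
# (condition b).  Under these assumptions the discounted problem reduces to a
# static one: a myopic action maximizes L(a) + beta * sum_{s'} q(s'|a) K(s').

rng = np.random.default_rng(0)
n_states, n_actions, beta = 5, 4, 0.9

K = rng.normal(size=n_states)            # state-dependent reward term K(s)
L = rng.normal(size=n_actions)           # action-dependent reward term L(a)
q = rng.random((n_actions, n_states))    # q[a, s'] = P(next state = s' | a)
q /= q.sum(axis=1, keepdims=True)

# Myopic (static) solution: one action, optimal in every state.
myopic_action = int(np.argmax(L + beta * q @ K))

# Check against generic value iteration on the full MDP,
# where r[s, a] = K[s] + L[a] and P[s, a, s'] = q[a, s'].
r = K[:, None] + L[None, :]
P = np.repeat(q[None, :, :], n_states, axis=0)
V = np.zeros(n_states)
for _ in range(2000):
    Q = r + beta * np.einsum('sap,p->sa', P, V)   # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        V = V_new
        break
    V = V_new
greedy = Q.argmax(axis=1)

print("myopic action:", myopic_action)
print("value-iteration greedy policy:", greedy)   # constant and equal to myopic_action
```

The check works because, under (a) and (b), the optimal value function decomposes as V(s) = K(s) + c for a constant c, so the maximizing action is the same in every state and is found by the static problem above. Condition (c), ad infinitum repeatability, concerns criteria and constraints that this toy example does not model.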

Keywords: 116 myopic optima in Markov decision processes; 236 myopic equilibrium points in stochastic games
Date: 1981
Citations: 26 (tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.29.5.995 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:29:y:1981:i:5:p:995-1009



Handle: RePEc:inm:oropre:v:29:y:1981:i:5:p:995-1009