EconPapers    
Technical Note—Markov Decision Processes with State-Information Lag

D. M. Brooks and C. T. Leondes
Additional contact information
D. M. Brooks: Sperry Rand Corporation, Bay St. Louis, Mississippi
C. T. Leondes: University of California, Los Angeles, California

Operations Research, 1972, vol. 20, issue 4, 904-907

Abstract: The Markov-decision-process formulation provides a method for selecting the optimal policy in a process whose state changes are Markovian, but it assumes perfect information about the process state at each stage. When the available observations of the actual process state provide only imperfect state information, the Markov-decision-process approach applies only if the observed state changes in a Markovian fashion. Although this does not hold in general, it does hold in the important special case where information about the physical state becomes available after a delay of one transition, or stage. This information-lag process can be analyzed as a Markov decision process. The degradation in gain, or expected return per unit time, relative to the perfect-information process provides a measure of the potential value of improving the information system.
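The construction described in the abstract can be sketched numerically: with a one-stage lag, the decision maker knows only the previous physical state and the action then taken, so augmenting the state to the pair (previous state, previous action) makes the observed process Markovian again, and the gain of the best lagged policy can be compared with the perfect-information gain. The two-state, two-action numbers below are illustrative inventions, not data from the paper, and the brute-force policy enumeration is just a small-scale stand-in for the paper's analysis.

```python
from itertools import product

# Hypothetical two-state, two-action example (numbers are illustrative).
S, A = [0, 1], [0, 1]
# P[a][s][s']: transition probabilities; r[a][s]: reward for action a in state s.
P = {0: [[0.9, 0.1], [0.4, 0.6]],
     1: [[0.2, 0.8], [0.5, 0.5]]}
r = {0: [5.0, -1.0], 1: [1.0, 3.0]}

def stationary(states, Q, iters=2000):
    """Stationary distribution of a finite chain by damped power iteration
    (the lazy half-step avoids oscillation on periodic chains)."""
    mu = {x: 1.0 / len(states) for x in states}
    for _ in range(iters):
        nxt = {y: 0.5 * mu[y] for y in states}
        for x in states:
            for y in states:
                nxt[y] += 0.5 * mu[x] * Q[x][y]
        mu = nxt
    return mu

def perfect_info_gain():
    """Best gain when the current state is observed (ordinary MDP)."""
    best = float("-inf")
    for pi in product(A, repeat=len(S)):       # pi[s] = action taken in state s
        Q = {s: {t: P[pi[s]][s][t] for t in S} for s in S}
        mu = stationary(S, Q)
        best = max(best, sum(mu[s] * r[pi[s]][s] for s in S))
    return best

def lagged_info_gain():
    """Best gain when only the previous state and previous action are known."""
    X = [(s, a) for s in S for a in A]         # augmented state: (s_prev, a_prev)
    best = float("-inf")
    for choice in product(A, repeat=len(X)):   # policy on augmented states
        pi = dict(zip(X, choice))
        Q = {x: {y: 0.0 for y in X} for x in X}
        rho = {}
        for (s_prev, a_prev) in X:
            a_now = pi[(s_prev, a_prev)]
            # The current physical state is distributed as P[a_prev][s_prev].
            rho[(s_prev, a_prev)] = sum(P[a_prev][s_prev][s] * r[a_now][s]
                                        for s in S)
            for s in S:
                Q[(s_prev, a_prev)][(s, a_now)] = P[a_prev][s_prev][s]
        mu = stationary(X, Q)
        best = max(best, sum(mu[x] * rho[x] for x in X))
    return best

g_perfect = perfect_info_gain()
g_lagged = lagged_info_gain()
print(f"perfect-information gain: {g_perfect:.4f}")
print(f"one-stage-lag gain:       {g_lagged:.4f}")
print(f"degradation in gain:      {g_perfect - g_lagged:.4f}")
```

The difference printed on the last line is the "degradation in gain" the abstract proposes as a measure of what a faster information system would be worth.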

Date: 1972

Downloads: http://dx.doi.org/10.1287/opre.20.4.904 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:20:y:1972:i:4:p:904-907


More articles in Operations Research from INFORMS.

Handle: RePEc:inm:oropre:v:20:y:1972:i:4:p:904-907