A further remark on dynamic programming for partially observed Markov processes

Vivek S. Borkar and Amarjit Budhiraja

Stochastic Processes and their Applications, 2004, vol. 112, issue 1, 79-93

Abstract: In (Stochastic Process. Appl. 103 (2003) 293), a pair of dynamic programming inequalities was derived for the 'separated' ergodic control problem for partially observed Markov processes, using the 'vanishing discount' argument. In this note, we strengthen these results to a single dynamic programming equation for the same problem.
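
For orientation, the single dynamic programming equation referred to in the abstract is of the ergodic (average-cost) type. The following is a generic discrete-time sketch in standard notation, not the paper's exact statement; in the partially observed setting the state x stands for the filter, i.e. the conditional law of the hidden state given the observations, and c, p, V, \rho below are generic placeholders:

\[
V(x) + \rho \;=\; \min_{u}\Big[\, c(x,u) + \int V(y)\, p(dy \mid x,u) \Big],
\]

where c is the running cost, p the controlled transition kernel, \rho the optimal long-run average cost, and V a relative value function. The 'vanishing discount' argument obtains such an equation from the \alpha-discounted value functions V_\alpha by passing to the limit of the normalized differences V_\alpha(\cdot) - V_\alpha(x_0) as \alpha \uparrow 1; the earlier paper cited above obtained only a pair of inequalities (with \le and \ge in place of the equality), which this note strengthens to the equality.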

Keywords: Controlled Markov processes; Dynamic programming; Partial observations; Ergodic cost; Vanishing discount; Pseudo-atom
Date: 2004

Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0304-4149(04)00021-3
Full text for ScienceDirect subscribers only



Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:112:y:2004:i:1:p:79-93

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
https://shop.elsevie ... _01_ooc_1&version=01


Stochastic Processes and their Applications is currently edited by T. Mikosch

More articles in Stochastic Processes and their Applications from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Handle: RePEc:eee:spapps:v:112:y:2004:i:1:p:79-93