Technical Note—Improved Conditions for Convergence in Undiscounted Markov Renewal Programming

Loren Platzman (Massachusetts Institute of Technology, Cambridge, Massachusetts)

Operations Research, 1977, vol. 25, issue 3, 529-533

Abstract: In a simply connected Markov renewal problem, each state is either transient under all policies or an element of a single chain under some policy. This property is easily verified; it implies invariance of the maximal long-term average return (gain) with respect to the initial state, which in turn assures convergence of Odoni's bounds in the damped value-iteration algorithm due to Schweitzer, even when the maximal-gain process is multiple-chained and/or periodic.
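To make the setting concrete, here is a minimal Python sketch, not taken from the paper itself, of undiscounted value iteration with Schweitzer-style damping and Odoni's gain bounds, assuming a finite average-reward MDP with unit holding times (a special case of a Markov renewal program). The function name, the damping factor tau, and the two-state example data are illustrative assumptions only.

import numpy as np

# Sketch: damped value iteration with Odoni's bounds on the gain for an
# undiscounted average-reward MDP (unit holding times assumed).  The damping
# step is Schweitzer's aperiodicity transformation applied to the iterates;
# the example data below are hypothetical.

def damped_value_iteration(P, r, tau=0.5, tol=1e-8, max_iter=10000):
    """P[a, s, s'] = transition probabilities, r[a, s] = one-step rewards."""
    n_actions, n_states, _ = P.shape
    v = np.zeros(n_states)
    lower, upper = -np.inf, np.inf
    for _ in range(max_iter):
        # Value-iteration operator: (T v)(s) = max_a [ r(s,a) + sum_j p(j|s,a) v(j) ].
        Tv = np.max(r + np.einsum("aij,j->ai", P, v), axis=0)
        # Schweitzer-style damping: convex combination of v and T v.
        v_new = (1.0 - tau) * v + tau * Tv
        diff = v_new - v
        # Odoni's bounds: min(diff)/tau <= gain <= max(diff)/tau when the gain
        # is independent of the initial state (the property the note exploits).
        lower, upper = diff.min() / tau, diff.max() / tau
        v = v_new
        if upper - lower < tol:
            break
    return 0.5 * (lower + upper), v  # gain estimate and relative values

# Hypothetical 2-state, 2-action example.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.7, 0.3]]])
r = np.array([[1.0, 2.0],
              [1.5, 0.5]])
gain, relative_values = damped_value_iteration(P, r)
print(f"estimated gain: {gain:.4f}")

The bounds shrink to the gain only when the gain is the same from every initial state, which is exactly the invariance that the note's simply-connected condition guarantees.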

Date: 1977
Citations: 3 (tracked in EconPapers)

Downloads: http://dx.doi.org/10.1287/opre.25.3.529 (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:25:y:1977:i:3:p:529-533



Handle: RePEc:inm:oropre:v:25:y:1977:i:3:p:529-533