Multi-policy iteration with a distributed voting

Hyeong Soo Chang

Mathematical Methods of Operations Research, 2004, vol. 60, issue 2, 299-310

Abstract: We present a novel simulation-based algorithm that extends the well-known policy iteration algorithm by combining multi-policy improvement with distributed, simulation-based voting for policy evaluation, in order to approximately solve Markov Decision Processes (MDPs) under the infinite-horizon discounted reward criterion, and we analyze the algorithm's performance relative to the optimal value. Copyright Springer-Verlag 2004
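
As background for the abstract, the sketch below shows textbook policy iteration (exact policy evaluation followed by greedy improvement) on a small, randomly generated MDP with the infinite-horizon discounted reward criterion. It does not reproduce the paper's multi-policy improvement step or its distributed voting evaluation; the state and action counts, the discount factor, and all variable names are illustrative assumptions.

    # Minimal sketch of standard policy iteration on a small finite MDP.
    # This is the baseline the abstract extends; the paper's multi-policy
    # improvement and distributed voting evaluation are NOT reproduced here.
    # All sizes, parameters, and names below are illustrative assumptions.
    import numpy as np

    n_states, n_actions, gamma = 4, 2, 0.95
    rng = np.random.default_rng(0)

    # Random transition kernel P[s, a, s'] and reward table R[s, a].
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

    def evaluate(policy):
        """Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi."""
        P_pi = P[np.arange(n_states), policy]   # (n_states, n_states)
        R_pi = R[np.arange(n_states), policy]   # (n_states,)
        return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

    def improve(V):
        """Greedy policy improvement against the current value estimate."""
        Q = R + gamma * P @ V                   # Q[s, a]
        return Q.argmax(axis=1)

    policy = np.zeros(n_states, dtype=int)
    while True:
        V = evaluate(policy)
        new_policy = improve(V)
        if np.array_equal(new_policy, policy):  # policy is stable: stop
            break
        policy = new_policy

    print("greedy policy:", policy)
    print("value function:", np.round(V, 3))

In the paper's setting, the exact evaluation step would instead be approximated by simulation, with distributed simulators voting on the evaluation, and the greedy improvement would be replaced by a multi-policy improvement step.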

Keywords: Policy iteration; Distributed algorithm; Voting; Markov decision processes
Date: 2004

Downloads: http://hdl.handle.net/10.1007/s001860400362 (text/html)
Access to full text is restricted to subscribers.


Persistent link: https://EconPapers.repec.org/RePEc:spr:mathme:v:60:y:2004:i:2:p:299-310

Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/00186

DOI: 10.1007/s001860400362


Mathematical Methods of Operations Research is currently edited by Oliver Stein

More articles in Mathematical Methods of Operations Research from Springer, Gesellschaft für Operations Research (GOR), Nederlands Genootschap voor Besliskunde (NGB)
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Handle: RePEc:spr:mathme:v:60:y:2004:i:2:p:299-310