Polynomial-Time Computation of Strong and n-Present-Value Optimal Policies in Markov Decision Chains
Michael O’Sullivan and
Arthur F. Veinott, Jr.
Additional contact information
Michael O’Sullivan: Department of Engineering Science, University of Auckland, Auckland 1142, New Zealand
Arthur F. Veinott, Jr.: Formerly at the Department of Management Science and Engineering, Stanford University, Stanford, California
Mathematics of Operations Research, 2017, vol. 42, issue 3, 577-598
Abstract:
This paper studies the problem of finding a stationary strong present-value optimal and, more generally, an n-present-value optimal, policy for an infinite-horizon stationary finite-state-action substochastic Markov decision chain. A policy is strong present-value optimal if it has maximum present value for all small positive interest rates ρ. A policy is n-present-value optimal if its present value falls below the maximum present value for all small positive ρ by O(ρ^{n+1}). The importance of stationary n-present-value optimal policies arises from the following known facts. The set of such policies diminishes with n and, for n ≥ S (where S is the number of states), is precisely the set of stationary strong present-value optimal policies. For 0 ≤ n < S, stationary n-present-value optimal policies are nearly strong present-value optimal and are of independent interest. The best algorithms for finding stationary strong present-value optimal policies find stationary n-present-value optimal policies for n = −1, …, S in that order. This paper decomposes the problem of finding a stationary n-present-value optimal policy, given a stationary (n − 1)-present-value optimal policy, into a sequence of three subproblems, each entailing maximizing either transient value or reward rate. It is known that both subproblem types can be represented as linear programs and solved in polynomial time, e.g., by interior-point and ellipsoid methods. This paper also establishes the following results. The size of the input to each subproblem is polynomial in the size of the original problem data. A stationary strong present-value (respectively, n-present-value) optimal policy can be found in polynomial time. For unique-transition systems, i.e., systems in which each action in a state sends the system to at most one state, a stationary strong present-value (respectively, n-present-value) optimal policy can be found in strongly polynomial time using combinatorial methods on the subproblems. The last case includes standard deterministic dynamic programs.
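To make the optimality criterion concrete, here is a minimal numerical sketch in Python (assuming NumPy), not taken from the paper: it compares the present values of two illustrative stationary policies on a tiny two-state chain as the interest rate ρ shrinks. The chain data and the present_value helper are hypothetical; the point is that the policy that wins for all sufficiently small ρ is the candidate for strong present-value optimality.

```python
import numpy as np

# Hypothetical 2-state chain with two stationary policies.
# All data here are illustrative, not from the paper.

def present_value(P, r, rho):
    """Present value of a fixed stationary policy with transition matrix P
    and reward vector r at interest rate rho > 0:
    v = sum_{t>=0} beta^{t+1} P^t r = beta (I - beta P)^{-1} r,
    where beta = 1/(1 + rho) is the discount factor."""
    beta = 1.0 / (1.0 + rho)
    n = P.shape[0]
    return beta * np.linalg.solve(np.eye(n) - beta * P, r)

# Policy a: earn 10 once in state 0, then absorb in rewardless state 1
# (pure transient value).
P_a = np.array([[0.0, 1.0], [0.0, 1.0]]); r_a = np.array([10.0, 0.0])
# Policy b: stay in state 0 forever earning 1 per period (reward rate 1).
P_b = np.array([[1.0, 0.0], [0.0, 1.0]]); r_b = np.array([1.0, 0.0])

# Policy a wins for large rho, but policy b wins for all small rho
# (crossover near rho = 1/9), so only b can be strong present-value
# optimal at state 0.
for rho in (0.5, 0.2, 0.1, 0.05, 0.01):
    va = present_value(P_a, r_a, rho)[0]
    vb = present_value(P_b, r_b, rho)[0]
    print(f"rho={rho:5.2f}  v_a={va:8.3f}  v_b={vb:8.3f}")
```

The abstract also notes that each subproblem, maximizing transient value or reward rate, can be represented as a linear program. As one hedged illustration of the reward-rate case (assuming SciPy, a unichain structure, and the standard state-action-frequency LP rather than the paper's exact formulation; all problem data are invented):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action unichain MDP; P[s, a, s'] and r[s, a]
# are illustrative data only.
S, A = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [1.0, 0.0]]])
r = np.array([[1.0, 0.0],
              [2.0, 0.5]])

# Maximize sum_{s,a} r[s,a] x[s,a] over state-action frequencies x >= 0
# subject to balance (outflow = inflow at each state) and normalization.
A_eq = np.zeros((S + 1, S * A))
for sp in range(S):
    for a in range(A):
        A_eq[sp, sp * A + a] += 1.0             # outflow from state sp
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] -= P[s, a, sp]  # inflow into state sp
A_eq[S, :] = 1.0                                # frequencies sum to 1
b_eq = np.zeros(S + 1); b_eq[S] = 1.0

# linprog minimizes, so negate the rewards to maximize the reward rate.
res = linprog(c=-r.reshape(-1), A_eq=A_eq, b_eq=b_eq, method="highs")
print("maximum reward rate:", -res.fun)
print("optimal state-action frequencies:\n", res.x.reshape(S, A))
```

LP representations of this kind are what make the paper's polynomial-time results possible: interior-point or ellipsoid methods solve each subproblem in time polynomial in its input size.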
Keywords: dynamic programming; computational complexity; infinite horizon
Date: 2017
Downloads: https://doi.org/10.1287/moor.2016.0812 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ormoor:v:42:y:2017:i:3:p:577-598