LSPI-CAS: Least-Squares Policy Iteration for Compact Action Set MDPs
Shahid Abdulla Mohammed and
Bhatnagar Shalabh
Additional contact information
Shahid Abdulla Mohammed: Indian Institute of Management Kozhikode
Bhatnagar Shalabh: Indian Institute of Management Kozhikode
No 293, Working papers from Indian Institute of Management Kozhikode
Abstract:
Simulation-based, model-free solutions to Markov Decision Processes (MDPs) using the Least-Squares Policy Iteration (LSPI) algorithm have been applied in multiple practical settings and in several variants. An optimal policy in an MDP is a policy, i.e. a description of which action to take in each state of the MDP, that performs best according to a given metric such as infinite-horizon discounted cost. A simulation-based algorithm obtains the optimal policy for an MDP in a model-free manner, i.e. without requiring the transition probabilities of the MDP under any policy to be known a priori. This work proposes LSPI-CAS, a version of LSPI for compact action sets, thus avoiding discretization of the action set available in a state and thereby improving control over the system. Regular LSPI works by repeatedly picking the current best action in a state x from a finite feasible set of actions Ax, which requires finding a minimum over |Ax| values. Our variant uses two kinds of parametrization: a feature vector f(x) for the state, called the actor, and φ(x, a) for the state-action pair, which is the critic. LSPI-CAS employs a stochastic gradient algorithm called Simultaneous Perturbation Stochastic Approximation (SPSA) to update the actor in each iteration. Regular LSPI has a module named Least-Squares Q-Value (LSQ), which we employ as the critic to evaluate perturbed policy iterates and then update the policy iterate in the direction of improving performance. Our algorithm is for infinite-horizon discounted-cost/reward MDPs; the case of finite-horizon compact-action-set MDPs was solved in earlier work. Numerical results on three settings, (a) control of an inverted pendulum, (b) exercise-policy calculation for an American put option, and (c) an M/M/1/K queue control problem, are provided for the algorithm, alongside a comparison with LSPI. Improvements in both performance and run-time to find an optimal policy are demonstrated.
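To make the actor-critic loop described in the abstract concrete, the sketch below pairs an SPSA-perturbed actor over a compact action interval with an LSQ-style least-squares critic that evaluates each perturbed policy. This is a minimal illustration only: the toy 1-D dynamics, feature maps f and φ, stage cost, and step-size schedules are assumptions for the example and are not the paper's experimental setup.

```python
# Sketch: SPSA actor update with an LSQ-style critic (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.95                      # discount factor
A_MIN, A_MAX = -1.0, 1.0          # compact action interval

def f(x):
    """State features for the actor (assumed polynomial basis)."""
    return np.array([1.0, x, x * x])

def phi(x, a):
    """State-action features for the critic (assumed low-order polynomial basis)."""
    return np.array([1.0, x, a, x * a, x * x, a * a])

def policy(theta, x):
    """Deterministic actor: linear in f(x), clipped to the compact action set."""
    return float(np.clip(theta @ f(x), A_MIN, A_MAX))

def simulate(theta, n=500):
    """Collect (x, a, cost, x') samples from toy 1-D dynamics x' = 0.9x + a + noise."""
    x, samples = 0.5, []
    for _ in range(n):
        a = policy(theta, x)
        x_next = 0.9 * x + a + 0.1 * rng.standard_normal()
        cost = x * x + 0.1 * a * a          # quadratic stage cost (assumed)
        samples.append((x, a, cost, x_next))
        x = x_next
    return samples

def lsq_critic(samples, theta):
    """LSQ-style policy evaluation: solve A w = b for the Q-value weights."""
    k = phi(0.0, 0.0).size
    A, b = 1e-3 * np.eye(k), np.zeros(k)   # small ridge term keeps A invertible
    for x, a, cost, x_next in samples:
        p, p_next = phi(x, a), phi(x_next, policy(theta, x_next))
        A += np.outer(p, p - GAMMA * p_next)
        b += cost * p
    return np.linalg.solve(A, b)

def policy_cost(theta, samples):
    """Scalar performance proxy: average Q(x, pi_theta(x)) under the LSQ critic."""
    w = lsq_critic(samples, theta)
    return float(np.mean([phi(x, policy(theta, x)) @ w for x, _, _, _ in samples]))

# SPSA actor update: two perturbed critic evaluations per iteration.
theta = np.zeros(3)
for k in range(1, 51):
    a_k, c_k = 0.05 / k, 0.1 / k ** 0.25              # assumed step-size schedules
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
    samples = simulate(theta)
    j_plus = policy_cost(theta + c_k * delta, samples)
    j_minus = policy_cost(theta - c_k * delta, samples)
    grad_est = (j_plus - j_minus) / (2.0 * c_k * delta)
    theta -= a_k * grad_est                           # descend: costs are minimized
print("final actor parameters:", theta)
```

The design point the sketch tries to reflect is that, unlike regular LSPI, no minimization over a finite action set appears anywhere: the actor parameters are adjusted directly by SPSA using only two critic evaluations per iteration, regardless of the dimension of the action space.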
Keywords: Markov Decision Processes
Pages: 26 pages
Date: 2018-06
Downloads:
https://iimk.ac.in/websiteadmin/FacultyPublication ... June%20Fullpaper.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:iik:wpaper:293
More papers in Working papers from Indian Institute of Management Kozhikode, IIMK Campus PO, Kunnamangalam, Kozhikode, Kerala, India - 673570.
Bibliographic data for series maintained by Sudheesh Kumar.