Unbounded dynamic programming via the Q-transform

Qingyin Ma, John Stachurski and Alexis Akira Toda

Journal of Mathematical Economics, 2022, vol. 100, issue C

Abstract: We propose a new approach to solving dynamic decision problems with unbounded rewards based on the transformations used in Q-learning. In our case, however, the objective of the transform is not learning. Rather, it is to convert an unbounded dynamic program into a bounded one. The approach is general enough to handle problems for which existing methods struggle, and yet simple relative to other techniques and accessible for applied work. We show by example that a variety of common decision problems satisfy our conditions.
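
To illustrate the idea described in the abstract (iterating on a bounded transformed object rather than an unbounded value function), here is a minimal sketch based on the standard McCall job-search model, a textbook decision problem with unbounded rewards. The value of accepting a wage w is w/(1-β), which is unbounded in w, yet the continuation value solves a bounded scalar fixed-point problem. All parameter values and the wage grid below are hypothetical; this is an illustration in the spirit of such transforms, not the paper's exact construction.

```python
import numpy as np

# McCall job search: accepting wage w pays w/(1 - beta) forever, so the
# value function v(w) = max(w / (1 - beta), h) is unbounded in w.
# The continuation value h, however, satisfies the bounded fixed point
#     h = c + beta * E[max(w' / (1 - beta), h)],
# so we can iterate on the scalar h instead of the unbounded v.
beta, c = 0.95, 1.0                          # discount factor, unemployment income (hypothetical)
wages = np.linspace(0.5, 10.0, 50)           # hypothetical discrete wage offers
probs = np.full(wages.size, 1 / wages.size)  # uniform offer distribution

stop = wages / (1 - beta)                    # value of accepting each wage
h = 0.0
for _ in range(10_000):
    h_new = c + beta * probs @ np.maximum(stop, h)
    if abs(h_new - h) < 1e-10:
        break
    h = h_new

reservation_wage = (1 - beta) * h            # accept any offer above this threshold
```

Iterating on h is a contraction with modulus β on a bounded set, even though v itself is unbounded; this is the flavor of benefit the Q-transform is designed to deliver in far more general settings.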

Keywords: Dynamic programming; Optimality; Reinforcement learning
Date: 2022

Related works:
Working Paper: Unbounded Dynamic Programming via the Q-Transform (2021)

DOI: 10.1016/j.jmateco.2022.102652

Journal of Mathematical Economics is currently edited by Atsushi (A.) Kajii

Handle: RePEc:eee:mateco:v:100:y:2022:i:c:s0304406822000143