Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees

Siliang Zeng, Mingyi Hong and Alfredo Garcia
Additional contact information
Siliang Zeng: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota 55455
Mingyi Hong: Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota 55455
Alfredo Garcia: Department of Industrial and Systems Engineering, Texas A&M University College of Engineering, College Station, Texas 77843

Operations Research, 2025, vol. 73, issue 2, 720-737

Abstract: We consider the task of estimating a structural model of dynamic decisions by a human agent, based on the observable history of implemented actions and visited states. This problem has an inherent nested structure: in the inner problem, an optimal policy for a given reward function is identified, whereas in the outer problem, a measure of fit is maximized. Several approaches have been proposed to alleviate the computational burden of this nested-loop structure, but these methods still suffer from high complexity when the state space is either discrete with large cardinality or continuous in high dimensions. Other approaches in the inverse reinforcement learning literature emphasize policy estimation at the expense of reduced reward estimation accuracy. In this paper, we propose a single-loop estimation algorithm with finite-time guarantees that is equipped to deal with high-dimensional state spaces without compromising reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm converges to a stationary solution with a finite-time guarantee. Further, if the reward is parameterized linearly, the algorithm converges to the maximum likelihood estimator at a sublinear rate.
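To make the single-loop structure concrete, the sketch below illustrates the pattern described in the abstract on a small synthetic problem. This is not the authors' algorithm or code: the tabular MDP, the soft Bellman update, the maximum-entropy (logit) choice model, and every name and constant in the script (soft_policy_step, rollout, the step size, the problem sizes) are illustrative assumptions. The point is only the loop itself: each iteration takes one policy-improvement step and then one stochastic gradient step for likelihood maximization, rather than solving the inner problem to convergence; for a linearly parameterized reward, that gradient reduces to the gap between expert and model feature expectations.

import numpy as np

rng = np.random.default_rng(0)

# ---- Small synthetic MDP (all sizes here are illustrative) ----
n_states, n_actions, n_features = 8, 3, 4
gamma, alpha = 0.9, 0.5                  # discount factor, softmax temperature

# Random transition kernel: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
# Linearly parameterized reward: r(s, a) = phi[s, a] @ theta.
phi = rng.normal(size=(n_states, n_actions, n_features))
theta_true = rng.normal(size=n_features)

def soft_policy_step(Q, theta):
    """One soft Bellman backup and the induced logit policy.
    A single improvement step, not a full inner-loop solve."""
    m = Q.max(axis=1)
    V = m + alpha * np.log(np.exp((Q - m[:, None]) / alpha).sum(axis=1))
    Q_new = phi @ theta + gamma * (P @ V)            # shape (S, A)
    z = np.exp((Q_new - Q_new.max(axis=1, keepdims=True)) / alpha)
    return Q_new, z / z.sum(axis=1, keepdims=True)

def rollout(pi, T=40):
    """Sample one trajectory of (state, action) pairs from policy pi."""
    s, traj = int(rng.integers(n_states)), []
    for _ in range(T):
        a = rng.choice(n_actions, p=pi[s])
        traj.append((s, a))
        s = rng.choice(n_states, p=P[s, a])
    return traj

def mean_features(trajs):
    """Average feature vector over all visited state-action pairs."""
    return np.mean([phi[s, a] for traj in trajs for s, a in traj], axis=0)

# "Expert" demonstrations: solve the inner problem well under theta_true.
Q = np.zeros((n_states, n_actions))
for _ in range(200):
    Q, pi_expert = soft_policy_step(Q, theta_true)
f_expert = mean_features([rollout(pi_expert) for _ in range(50)])

# ---- Single-loop estimation: one policy-improvement step, then one ----
# ---- stochastic gradient step for likelihood maximization.         ----
theta = np.zeros(n_features)
Q = np.zeros((n_states, n_actions))
for k in range(500):
    Q, pi = soft_policy_step(Q, theta)        # policy improvement (one step)
    f_model = mean_features([rollout(pi)])    # noisy feature expectation
    # For linear rewards, the log-likelihood gradient is the gap between
    # expert and model feature expectations.
    theta += 0.1 * (f_expert - f_model)

# Rewards are identified only up to transformations that preserve the
# optimal policy, so the estimate need not match theta_true componentwise.
print("estimated theta:", np.round(theta, 2))
print("true theta:     ", np.round(theta_true, 2))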

Keywords: Machine Learning and Data Science; inverse reinforcement learning; dynamic discrete choice model
Date: 2025

Downloads: http://dx.doi.org/10.1287/opre.2022.0511 (application/pdf)

Related works:
This item may be available elsewhere in EconPapers.


Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:73:y:2025:i:2:p:720-737

Access Statistics for this article

More articles in Operations Research from INFORMS. Contact information at EDIRC.
Bibliographic data for series maintained by Chris Asher.

 
Handle: RePEc:inm:oropre:v:73:y:2025:i:2:p:720-737