Improving Policy Functions in High-Dimensional Dynamic Games
Carlos A. Manzanares, Ying Jiang and Patrick Bajari
No 21124, NBER Working Papers from National Bureau of Economic Research, Inc
Abstract:
In this paper, we propose a method for finding policy function improvements for a single agent in high-dimensional Markov dynamic optimization problems, focusing in particular on dynamic games. Our approach combines ideas from the Machine Learning literature and the literature on the econometric analysis of games to derive a one-step improvement policy over any given benchmark policy. To reduce the dimensionality of the game, our method selects a parsimonious subset of state variables in a data-driven manner using a Machine Learning estimator. This one-step improvement policy can in turn be improved upon until a suitable stopping rule is met, as in the classical policy function iteration approach. We illustrate our algorithm in a high-dimensional entry game similar to that studied by Holmes (2011) and show that it results in a nearly 300 percent improvement in expected profits as compared with a benchmark policy.
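The one-step improvement idea in the abstract can be illustrated on a toy problem: estimate the value of each action under a benchmark policy, then act greedily with respect to those estimates. The sketch below uses a deliberately simple deterministic chain environment invented for illustration (the paper's actual application is a high-dimensional entry game, which is not reproduced here), and all function names are hypothetical:

```python
# Minimal sketch of one-step policy improvement, assuming a toy
# deterministic MDP: states 0..4 on a line, actions -1/+1 (clamped),
# reward 1.0 on reaching state 4. This is an illustrative stand-in,
# not the paper's environment or estimator.

N_STATES = 5
GOAL = 4
HORIZON = 6
GAMMA = 0.95

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

def benchmark_policy(state):
    # Deliberately poor benchmark: always move left, away from the goal.
    return -1

def q_under_policy(state, action, policy, n_rollouts=10):
    # Monte Carlo estimate of Q^pi(s, a): take `action` once, then follow
    # `policy`. The toy environment is deterministic, so one rollout would
    # suffice; the loop mirrors the general stochastic case.
    total = 0.0
    for _ in range(n_rollouts):
        s, ret, disc, a = state, 0.0, 1.0, action
        for _ in range(HORIZON):
            s, r = step(s, a)
            ret += disc * r
            disc *= GAMMA
            if s == GOAL:
                break
            a = policy(s)
        total += ret
    return total / n_rollouts

def one_step_improvement(policy):
    # Classic policy-improvement step: act greedily with respect to the
    # benchmark policy's Q-function, computed state by state.
    table = {s: max((-1, +1), key=lambda a: q_under_policy(s, a, policy))
             for s in range(N_STATES)}
    return lambda s: table[s]

improved = one_step_improvement(benchmark_policy)
print(improved(3))  # adjacent to the goal, the improved policy moves right
```

In the paper's setting the Q-function is not tabulated but estimated from data, with a Machine Learning estimator selecting which state variables enter; the greedy step above is the piece that iterates until the stopping rule is met.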
JEL-codes: C44 C55 C57 C73 L1
Date: 2015-04
New Economics Papers: this item is included in nep-gth
Note: IO TWP
Downloads: http://www.nber.org/papers/w21124.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:nbr:nberwo:21124
Ordering information: This working paper can be ordered from
http://www.nber.org/papers/w21124
More papers in NBER Working Papers from National Bureau of Economic Research, Inc, 1050 Massachusetts Avenue, Cambridge, MA 02138, U.S.A. Contact information at EDIRC.