Machine learning for dynamic incentive problems
Philipp Renner and Simon Scheidegger
No 203620397, Working Papers from Lancaster University Management School, Economics Department
Abstract:
We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. First, we combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets and recursively solve the dynamic incentive problem with a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity previously considered intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.
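The two-step pipeline described in the abstract can be illustrated with off-the-shelf tools. The sketch below is not the authors' code: it uses scikit-learn's BayesianGaussianMixture as a stand-in for the feasible-set approximation and GaussianProcessRegressor as the value-function surrogate, on a toy annulus-shaped "feasible set" with a toy Bellman update. Every model detail (the set, the flow payoff, the transition, the density cutoff, and the helper in_feasible_set) is an illustrative assumption, not the paper's model.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# --- Step 1: approximate an irregularly shaped feasible set ---
# Toy feasible points: an annulus standing in for an equilibrium value set.
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)
radius = rng.uniform(0.6, 1.0, 2000)
feasible_pts = np.c_[radius * np.cos(theta), radius * np.sin(theta)]

gmm = BayesianGaussianMixture(
    n_components=20, covariance_type="full", random_state=0
).fit(feasible_pts)
# Log-density cutoff below which a point is treated as infeasible
# (the 1% quantile is an arbitrary tuning choice).
log_dens_cut = np.quantile(gmm.score_samples(feasible_pts), 0.01)

def in_feasible_set(x):
    """Approximate membership test based on the fitted mixture density."""
    return gmm.score_samples(np.atleast_2d(x)) >= log_dens_cut

# --- Step 2: Gaussian process regression inside value function iteration ---
# Training states are drawn from the learned feasible set by rejection sampling.
candidates = rng.uniform(-1.2, 1.2, size=(5000, 2))
X = candidates[in_feasible_set(candidates)][:200]

beta = 0.9                              # discount factor
flow = -np.linalg.norm(X, axis=1)       # toy flow payoff
v = flow.copy()                         # initial value guess

for it in range(50):
    # Fit a GP surrogate to the current value guess; the paper reports running
    # this step massively in parallel, whereas here it runs serially.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, v)
    # Toy Bellman update with a deterministic, purely illustrative transition
    # that contracts states toward the origin so they stay near the set.
    v_new = flow + beta * gp.predict(0.95 * X)
    err = np.max(np.abs(v_new - v))
    v = v_new
    if err < 1e-6:
        break

print(f"stopped after {it + 1} iterations, sup-norm change {err:.2e}")

The design point the sketch tries to convey: thresholding the mixture's density turns a generative model into an approximate membership test for an irregular set, and the GP surrogate replaces grid-based interpolation, which is what allows training points to be scattered over the feasible set rather than a rectangular grid.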
Keywords: Dynamic Contracts; Principal-Agent Model; Dynamic Programming; Machine Learning; Gaussian Processes; High-Performance Computing
JEL-codes: C61 C73 D82 D86 E61
Date: 2017
New Economics Papers: this item is included in nep-big, nep-cmp, nep-cta, nep-mac, nep-mic and nep-ore
Citations: 2
Downloads: http://www.lancaster.ac.uk/media/lancaster-univers ... casterWP2017_027.pdf (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:lan:wpaper:203620397
More papers in Working Papers from Lancaster University Management School, Economics Department.
Bibliographic data for series maintained by Giorgio Motta.