Solving an Infinite Horizon Adverse Selection Model Through Finite Policy Graphs
Hao Zhang
Additional contact information
Hao Zhang: Marshall School of Business, University of Southern California, Los Angeles, California 90089
Operations Research, 2012, vol. 60, issue 4, 850-864
Abstract:
This paper studies an infinite horizon adverse selection model with an underlying Markov information process. It introduces a graphical representation of continuation contracts and continuation payoff frontiers, called a finite policy graph, and provides an algorithm that approximates the optimal policy graph through iterations. After each value iteration, the algorithm performs an additional step: it replaces dominated points on the previous continuation payoff frontier with points on the new frontier and reevaluates the new frontier. This dominance-free reevaluation step accelerates the convergence of the continuation payoff frontiers. Numerical examples demonstrate the effectiveness of the algorithm and properties of the optimal contracts.
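The abstract's dominance-free reevaluation step hinges on a Pareto-dominance test between payoff points. The sketch below is an illustrative Python rendering of that test, not the paper's actual implementation; the point coordinates, function names, and the simple pairwise comparison are all assumptions made for illustration.

```python
# Hypothetical sketch of the dominance check behind the paper's
# reevaluation step: a payoff point is "dominated" if another point
# is at least as good in every coordinate and strictly better in one.

def dominates(p, q):
    """True if point p weakly dominates q and is strictly better somewhere."""
    return all(pi >= qi for pi, qi in zip(p, q)) and \
           any(pi > qi for pi, qi in zip(p, q))

def dominance_free(old_frontier, new_frontier):
    """Drop old-frontier points dominated by any new-frontier point,
    then merge in the new frontier (illustrative, not the paper's code)."""
    kept = [p for p in old_frontier
            if not any(dominates(q, p) for q in new_frontier)]
    return kept + list(new_frontier)

# Toy example with made-up (principal payoff, agent payoff) pairs:
old = [(1.0, 2.0), (2.0, 1.0)]
new = [(1.5, 2.5), (0.5, 0.5)]
print(dominance_free(old, new))  # → [(2.0, 1.0), (1.5, 2.5), (0.5, 0.5)]
```

In the toy example, (1.0, 2.0) is dropped because (1.5, 2.5) weakly dominates it in both coordinates, while (2.0, 1.0) survives.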
Keywords: stochastic games; dynamic principal-agent model; adverse selection; dynamic programming; graphs
Date: 2012
Downloads:
http://dx.doi.org/10.1287/opre.1120.1056 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:oropre:v:60:y:2012:i:4:p:850-864
More articles in Operations Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.