
On the iterated estimation of dynamic discrete choice games

Federico Bugni and Jackson Bunting

No CWP13/18, CeMMAP working papers from Centre for Microdata Methods and Practice, Institute for Fiscal Studies

Abstract: We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature. By considering a "maximum likelihood" criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira (2002, 2007). By considering a "minimum distance" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007). First, we establish that the K-ML estimator is consistent and asymptotically normal for any K. This complements findings in Aguirregabiria and Mira (2007), who focus on K = 1 and K large enough to induce convergence of the estimator. Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for any K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This new result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators. Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K. In other words, additional policy iterations provide no asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is at least as asymptotically efficient as any K-ML estimator for all K.
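
To make the iteration concrete, the following is a minimal Python sketch of a K-stage PI estimator under a "maximum likelihood" criterion (the K-ML case). The policy-iteration mapping psi, the data layout, and all names are hypothetical placeholders chosen for illustration; this is not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def k_ml_estimate(actions, states, psi, p0, theta0, K):
    """Hypothetical sketch of a K-stage policy-iteration (K-ML) estimator.

    actions, states : integer arrays of observed choices and states
    psi(theta, p)   : assumed policy-iteration mapping returning model-implied
                      conditional choice probabilities, indexed [action, state]
    p0              : first-stage nonparametric CCP estimate
    theta0          : starting value for the structural parameters
    K               : number of policy iterations
    """
    p_hat = p0
    theta_hat = np.asarray(theta0, dtype=float)
    for _ in range(K):
        # Pseudo maximum likelihood step: maximize the log-likelihood implied
        # by the current CCP estimate p_hat (minimize its negative).
        def neg_loglik(theta):
            probs = psi(theta, p_hat)
            return -np.sum(np.log(probs[actions, states]))
        theta_hat = minimize(neg_loglik, theta_hat, method="Nelder-Mead").x
        # Policy-iteration step: update the CCPs with the new estimate.
        p_hat = psi(theta_hat, p_hat)
    return theta_hat, p_hat
```

With K = 1 this reduces to a two-step pseudo-likelihood estimator; iterating until p_hat stops changing corresponds to the convergent case studied in Aguirregabiria and Mira (2007). A K-MD variant would replace neg_loglik with a quadratic distance between the estimated and model-implied CCPs under a weight matrix, whose optimal choice is the subject of the paper's main result.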

Keywords: dynamic discrete choice problems; dynamic games; pseudo maximum likelihood estimator; minimum distance estimator; estimation; asymptotic efficiency
JEL-codes: C13 C61 C73
Date: 2018-02-16
New Economics Papers: this item is included in nep-gth and nep-ore
References: View references in EconPapers. View complete reference list from CitEc.
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.ifs.org.uk/uploads/CWP131818.pdf (application/pdf)
Our link check indicates that this URL is broken; the error code is 404 Not Found (https://www.ifs.org.uk/uploads/CWP131818.pdf [302 Found]--> https://ifs.org.uk/uploads/CWP131818.pdf)

Related works:
Journal Article: On the Iterated Estimation of Dynamic Discrete Choice Games (2021)
Working Paper: On the iterated estimation of dynamic discrete choice games (2020)

Persistent link: https://EconPapers.repec.org/RePEc:ifs:cemmap:13/18

Ordering information: This working paper can be ordered from
The Institute for Fiscal Studies, 7 Ridgmount Street, London WC1E 7AE

More papers in CeMMAP working papers from the Centre for Microdata Methods and Practice, Institute for Fiscal Studies, 7 Ridgmount Street, London WC1E 7AE. Contact information at EDIRC.
Bibliographic data for series maintained by Emma Hyman.

 
Handle: RePEc:ifs:cemmap:13/18