Simulation Based Inference for Dynamic Multinomial Choice Models
Daniel Houser and Michael Keane
MPRA Paper from University Library of Munich, Germany
Our goal in this chapter is to explain concretely how to implement simulation methods in a very general class of models that are extremely useful in applied work: dynamic discrete choice models where one has available a panel of multinomial choice histories and partially observed payoffs. Moreover, the techniques we describe are directly applicable to a general class of models that includes static discrete choice models, the Heckman (1976) selection model, and all of the Heckman (1981) models (such as static and dynamic Bernoulli models, Markov models, and renewal processes). The particular procedure that we describe derives from a suggestion by Geweke and Keane (1999a), and has the advantages that it does not require the econometrician to solve the agents' dynamic optimization problem or to make strong assumptions about the way individuals form expectations.
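To make the setting concrete, the following sketch simulates one agent's choice history in a stylized two-alternative dynamic model. Reflecting the Geweke and Keane idea that the abstract describes, the expected future component of each alternative's value is replaced by a flexible polynomial in the state variable (here, accumulated experience) rather than obtained by solving the dynamic program. All parameter values, functional forms, and names below are hypothetical illustrations, not the chapter's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def future_component(experience, pi):
    # Flexible polynomial approximation to the expected future value of
    # arriving at a given state; pi would be estimated jointly with the
    # payoff parameters rather than derived from a dynamic program.
    return pi[0] + pi[1] * experience + pi[2] * experience ** 2

def simulate_choices(T, beta, pi, sigma=1.0):
    """Simulate one agent's multinomial (here binary) choice history.

    Alternative 1 accumulates experience, which raises its payoff;
    alternative 0 is a fixed outside option. Only the payoff of the
    chosen alternative is recorded, mimicking partially observed payoffs.
    """
    experience = 0
    choices, payoffs = [], []
    for t in range(T):
        eps = rng.normal(0.0, sigma, size=2)
        # Current-period payoffs of the two alternatives
        u0 = eps[0]
        u1 = beta[0] + beta[1] * experience + eps[1]
        # Choice-specific values = current payoff + approximated future component
        v0 = u0 + future_component(experience, pi)
        v1 = u1 + future_component(experience + 1, pi)
        d = int(v1 > v0)
        choices.append(d)
        payoffs.append(u1 if d else u0)
        experience += d
    return choices, payoffs

choices, payoffs = simulate_choices(T=20, beta=(0.2, 0.1), pi=(0.0, 0.5, -0.01))
```

In an estimation routine, histories like these would be simulated many times at candidate parameter values and compared with the observed panel of choices and partially observed payoffs.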
Keywords: Dynamic Discrete Choice Models; Dynamic Programming; Discrete Choice; Simulation
JEL-codes: C11 C15 C23 C25 C33 C35 C61 C63
Citations: 9
Published in Companion to Theoretical Econometrics, Blackwell (2001), pp. 466-493
Downloads: https://mpra.ub.uni-muenchen.de/54279/1/MPRA_paper_54279.pdf (original version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:pra:mprapa:54279
More papers in MPRA Paper from University Library of Munich, Germany, Ludwigstraße 33, D-80539 Munich, Germany. Contact information at EDIRC.
Bibliographic data for series maintained by Joachim Winter.