Neural nets for indirect inference
Econometrics and Statistics, 2017, vol. 2, issue C, 36-49
For simulable models, neural networks are used to approximate the limited information posterior mean, which conditions on a vector of statistics rather than on the full sample. Because the model is simulable, training and testing samples can be generated at sizes large enough to train, to good accuracy, a net with enough hidden layers and neurons to learn the limited information posterior mean. Targeting the limited information posterior mean with neural nets is simpler, faster, and more successful than targeting the full information posterior mean, which conditions on the observed sample. The output of the trained net can be used directly as an estimator of the model's parameters, or as an input to subsequent classical or Bayesian indirect inference estimation. The methods are illustrated with applications to a small dynamic stochastic general equilibrium model and a continuous time jump-diffusion model for stock index returns.
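The approach described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's method or models: it uses a hypothetical toy model (i.i.d. Gaussian data with unknown mean and standard deviation) in place of the DSGE and jump-diffusion applications, a hand-rolled one-hidden-layer net, and illustrative summary statistics. The key idea it demonstrates is that minimizing mean squared error on simulated (parameter, statistics) pairs yields an approximation to the limited information posterior mean E[θ | Z(y)].

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Simulate a sample from the toy model y ~ N(mu, sigma^2)."""
    mu, sigma = theta
    return rng.normal(mu, sigma, n)

def statistics(y):
    """Summary statistics the limited information posterior conditions on."""
    return np.array([y.mean(), y.std(ddof=1)])

# 1. Draw parameters from a (hypothetical) uniform prior, simulate data,
#    and reduce each sample to its statistics.
S = 5000
thetas = np.column_stack([rng.uniform(-2.0, 2.0, S),   # mu  ~ U(-2, 2)
                          rng.uniform(0.5, 2.0, S)])   # sig ~ U(0.5, 2)
Z = np.array([statistics(simulate(t)) for t in thetas])

# Standardize inputs for stable training.
Zm, Zs = Z.mean(0), Z.std(0)
X = (Z - Zm) / Zs

# 2. One-hidden-layer net trained by full-batch gradient descent on MSE;
#    the MSE minimizer approximates E[theta | Z], the posterior mean.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    A = np.tanh(X @ W1 + b1)          # hidden activations
    P = A @ W2 + b2                   # predicted parameters
    G = 2.0 * (P - thetas) / S        # dLoss/dP for mean squared error
    gW2, gb2 = A.T @ G, G.sum(0)
    GA = (G @ W2.T) * (1.0 - A**2)    # backprop through tanh
    gW1, gb1 = X.T @ GA, GA.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def net_estimate(y):
    """Plug-in estimator: feed the sample's statistics through the net."""
    x = (statistics(y) - Zm) / Zs
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Estimate parameters of a fresh sample drawn at theta = (1.0, 1.0).
theta_hat = net_estimate(simulate((1.0, 1.0)))
print(theta_hat)
```

In the full procedure, `theta_hat` would either be reported directly or used as the auxiliary statistic in a subsequent classical or Bayesian indirect inference step; this sketch stops at the plug-in estimate.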
Keywords: Neural networks; Indirect inference; Approximate Bayesian computing; Machine learning; DSGE; Jump-diffusion
Persistent link: https://EconPapers.repec.org/RePEc:eee:ecosta:v:2:y:2017:i:c:p:36-49
Econometrics and Statistics is currently edited by E.J. Kontoghiorghes, H. Van Dijk and A.M. Colubi