EconPapers    

Conjectures on the policy function in the presence of optimal experimentation

Hans Amman and David Kendrick

No 12-09, Working Papers from Utrecht School of Economics

Abstract: In the economics literature there are two dominant approaches to solving models with optimal experimentation (also called active learning). The first is based on the value function; the second on an approximation method. In principle, the value function approach is the preferred method, but it suffers from the curse of dimensionality and is applicable only to small problems with few policy variables. The approximation method handles a computationally larger class of models but may produce results that deviate from the optimal solution. Our simulations indicate that when the effects of learning are limited, the differences may be small. When there is sufficient scope for learning, however, the value function solution is more aggressive in its use of the policy variable.
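The contrast the abstract draws can be illustrated with a toy active-learning problem that is not from the paper: a finite-horizon one-armed Bernoulli bandit. A value-function (dynamic programming) policy over the Bayesian belief state will experiment with the uncertain option even when a myopic, certainty-equivalent rule would not. A minimal sketch, with all names and parameter values illustrative:

```python
# Hedged sketch: finite-horizon Bayes-optimal ("value function") policy for a
# one-armed Bernoulli bandit vs. a myopic (no-experimentation) rule.
# Setup and numbers are illustrative, not taken from Amman and Kendrick.
from functools import lru_cache

HORIZON = 20
KNOWN_PAYOFF = 0.55   # safe option with known mean reward
# Unknown option: Bernoulli reward with Beta(a, b) posterior, prior Beta(1, 1)

@lru_cache(maxsize=None)
def value(a, b, t):
    """Expected total reward over the remaining t periods under belief Beta(a, b)."""
    if t == 0:
        return 0.0
    p = a / (a + b)  # posterior mean of the unknown option's success probability
    # Choose the known option: reward now, belief unchanged
    v_known = KNOWN_PAYOFF + value(a, b, t - 1)
    # Choose the unknown option: expected reward p, and the belief updates either way
    v_unknown = p * (1 + value(a + 1, b, t - 1)) + (1 - p) * value(a, b + 1, t - 1)
    return max(v_known, v_unknown)

def optimal_tries_unknown(a, b, t):
    """Does the dynamic-programming policy experiment with the unknown option?"""
    p = a / (a + b)
    v_known = KNOWN_PAYOFF + value(a, b, t - 1)
    v_unknown = p * (1 + value(a + 1, b, t - 1)) + (1 - p) * value(a, b + 1, t - 1)
    return v_unknown >= v_known

# The myopic rule uses only the posterior mean, ignoring the value of learning.
myopic = (1 / 2) > KNOWN_PAYOFF                 # prior mean 0.5 < 0.55: no experiment
dp = optimal_tries_unknown(1, 1, HORIZON)       # DP experiments: learning has option value
print(myopic, dp)  # prints: False True
```

The point of the sketch mirrors the abstract's finding: because experimentation has option value, the value-function solution uses the uncertain action more aggressively than a rule that ignores learning, and the gap widens as the scope for learning (here, the horizon) grows.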

Keywords: design of fiscal policy; optimal experimentation; stochastic optimization; time-varying parameters; numerical experiments
Date: 2012
New Economics Papers: this item is included in nep-cmp

Downloads (external link): https://dspace.library.uu.nl/bitstream/handle/1874/272433/12-09.pdf (application/pdf)

Related works:
This item may be available elsewhere in EconPapers.


Persistent link: https://EconPapers.repec.org/RePEc:use:tkiwps:1209



More papers in Working Papers from Utrecht School of Economics. Contact information at EDIRC.
Bibliographic data for series maintained by Marina Muilwijk.

 
Page updated 2025-04-01
Handle: RePEc:use:tkiwps:1209