Calibration and Expert Testing
Wojciech Olszewski
Chapter 18 in Handbook of Game Theory with Economic Applications, 2015, vol. 4, pp 949-984 from Elsevier
Abstract:
I survey and discuss the recent literature on testing experts or probabilistic forecasts, which I would describe as a literature on “strategic hypothesis testing.” The starting point of this literature is a set of surprising results of the following type: suppose that a criterion for judging probabilistic forecasts (which I will call a test) has the property that if the data are generated by a probabilistic model, then forecasts generated by that model pass the test. It then turns out that an agent who knows only the test by which she is going to be judged, but knows nothing about the data-generating process, is able to pass the test by generating forecasts strategically.
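As an illustration of the kind of test the abstract refers to, the sketch below implements a simple calibration check: among the periods in which the forecast falls in a given bin, the empirical frequency of the event should be close to the average forecast in that bin. This is a minimal sketch, not taken from the chapter; the binning scheme, the 0.7 rain probability, and the pass threshold are illustrative assumptions.

```python
import random

def calibration_score(forecasts, outcomes, n_bins=10):
    """Weighted average gap, over forecast bins, between the mean forecast
    in a bin and the empirical frequency of the event in that bin.
    A forecaster is approximately calibrated when this score is small."""
    bins = {}
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)          # bin the forecast
        count, hits, psum = bins.get(b, (0, 0, 0.0))
        bins[b] = (count + 1, hits + y, psum + p)
    n = len(forecasts)
    return sum((count / n) * abs(hits / count - psum / count)
               for count, hits, psum in bins.values())

# A forecaster who knows the true model (say, rain with probability 0.7
# in every period) passes: her forecasts match the empirical frequencies.
random.seed(0)
outcomes = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
informed = [0.7] * 10_000
print(calibration_score(informed, outcomes))   # close to 0
```

The point of the literature surveyed here is that such tests can also be passed, with high probability on every data sequence, by a strategic forecaster who randomizes her forecasts without knowing the data-generating process at all.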
Keywords: Probabilistic models; Calibration and other tests; Strategic forecasters. JEL codes: C18; C70
Date: 2015
Citations: 9 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/B9780444537669000185 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:gamchp:v:4:y:2015:i:c:p:949-984
DOI: 10.1016/B978-0-444-53766-9.00018-5
More chapters in Handbook of Game Theory with Economic Applications from Elsevier