Testing Multiple Forecasters
Yossi Feinberg and Colin Stewart
Additional contact information: Yossi Feinberg, Stanford University
Research Papers from Stanford University, Graduate School of Business
Abstract:
We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment. This test checks whether each expert is calibrated conditional on the predictions made by other experts. We show that this test is good in the sense that a true expert--one informed of the true distribution of the process--is guaranteed to pass the test no matter what the other potential experts do, and false experts will fail the test on all but a small (category one) set of true distributions. Furthermore, even when there is no true expert present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, but at the cost of failing some true experts. In contrast, tests that allow false experts to make precise predictions can be jointly manipulated.
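As an informal illustration of the idea behind cross-calibration (a sketch, not the paper's formal test), one can group periods by the joint profile of forecasts and compare each expert's forecast with the empirical frequency of the outcome within each cell: an informed expert's forecasts should match the conditional frequencies, while a false expert's typically will not. All names, thresholds, and the simulated environment below are hypothetical.

```python
import random
from collections import defaultdict

def cross_calibration_score(predictions_a, predictions_b, outcomes, min_count=30):
    """Group periods by the joint forecast cell (p_a, p_b) and return the
    worst absolute gap between expert A's forecast p_a and the empirical
    frequency of the outcome in that cell (cells with few visits ignored).
    A small score suggests A is calibrated conditional on B's forecasts."""
    cells = defaultdict(list)
    for pa, pb, y in zip(predictions_a, predictions_b, outcomes):
        cells[(pa, pb)].append(y)
    worst = 0.0
    for (pa, _pb), ys in cells.items():
        if len(ys) >= min_count:
            freq = sum(ys) / len(ys)
            worst = max(worst, abs(freq - pa))
    return worst

# Toy i.i.d. binary environment with success probability 0.7.
random.seed(0)
n = 20000
true_p = 0.7
outcomes = [1 if random.random() < true_p else 0 for _ in range(n)]

informed = [true_p] * n                                      # knows the true distribution
uninformed = [random.choice([0.3, 0.5]) for _ in range(n)]   # guesses blindly

informed_gap = cross_calibration_score(informed, uninformed, outcomes)
uninformed_gap = cross_calibration_score(uninformed, informed, outcomes)
# informed_gap is near zero; uninformed_gap is bounded away from zero.
```

This captures only the empirical-frequency comparison; the paper's test is defined for general stochastic processes and uses a formal passing criterion rather than a fixed numerical threshold.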
Date: 2007-01
Citations: 5 (in EconPapers)
Downloads: http://gsbapps.stanford.edu/researchpapers/library/RP1957.pdf
Related works:
Journal Article: Testing Multiple Forecasters (2008) 
Persistent link: https://EconPapers.repec.org/RePEc:ecl:stabus:1957
More papers in Research Papers from Stanford University, Graduate School of Business.