"Calibeating": beating forecasters at their own game
Dean Foster (Amazon) and
Sergiu Hart
Theoretical Economics, 2023, vol. 18, issue 4
Abstract:
In order to identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to “expertise.” This raises the question of whether one can gain calibration without losing expertise, which we refer to as “calibeating.” We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
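The decomposition stated in the abstract can be illustrated with a short sketch (not from the paper's code; the function name and example data are hypothetical). For binary outcomes, grouping observations into bins of identical forecasts makes the Brier score split exactly into a calibration term (squared distance of each forecast from its bin's empirical frequency) plus a refinement term (the dispersion within each bin):

```python
# Illustrative sketch, not the authors' procedure: the Brier score of a
# binary-outcome forecast decomposes exactly into calibration + refinement
# when observations are binned by forecast value.
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Return (brier, calibration, refinement) for outcomes in {0, 1}."""
    n = len(forecasts)
    brier = sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / n

    # Sort outcomes into bins of identical forecasts.
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[p].append(y)

    calibration = refinement = 0.0
    for p, ys in bins.items():
        freq = sum(ys) / len(ys)                    # empirical frequency in the bin
        calibration += len(ys) * (p - freq) ** 2    # forecast's distance from frequency
        refinement += len(ys) * freq * (1 - freq)   # outcome dispersion within the bin
    return brier, calibration / n, refinement / n

b, c, r = brier_decomposition([0.8, 0.8, 0.8, 0.3, 0.3], [1, 1, 0, 0, 1])
assert abs(b - (c + r)) < 1e-12  # decomposition is exact
```

A perfectly "calibeated" forecast would drive the calibration term toward zero while keeping the refinement term (the "expertise" component) no worse than the original forecaster's.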
Keywords: Forecasting; calibration; experts; Brier score; refinement score
JEL-codes: C7 D8
Date: 2023-11-09
Downloads: http://econtheory.org/ojs/index.php/te/article/viewFile/20231441/37901/1140 (application/pdf)
Related works:
Working Paper: "Calibeating": Beating Forecasters at Their Own Game (2022) 
Persistent link: https://EconPapers.repec.org/RePEc:the:publsh:5330
Theoretical Economics is currently edited by Simon Board, Todd D. Sarver, Juuso Toikka, Rakesh Vohra, Pierre-Olivier Weill
More articles in Theoretical Economics from Econometric Society
Bibliographic data for series maintained by Martin J. Osborne.