
Are Performance Weights Beneficial? Investigating the Random Expert Hypothesis

Deniz Marti, Thomas A. Mazzuchi and Roger Cooke
Additional contact information
Deniz Marti: The George Washington University
Thomas A. Mazzuchi: The George Washington University

Chapter 3 in Expert Judgement in Risk and Decision Analysis, 2021, pp. 53-82, from Springer

Abstract: Expert elicitation plays a prominent role in fields where data are scarce. Because consulting multiple experts is central to elicitation practice, combining various expert opinions is an important topic. In the Classical Model, uncertainty distributions for the variables of interest are based on an aggregation of elicited expert percentiles. Aggregation of these expert distributions is accomplished by linear opinion pooling with performance-based weights assigned to each expert. Under the Classical Model, each expert receives a weight that combines the expert's statistical accuracy and informativeness on a set of seed questions whose values were unknown at the time the elicitation was conducted. The former measures "correspondence with reality": the discrepancy between the observed relative frequencies of seed variables' values falling within the elicited percentile ranges and the probabilities implied by the elicited percentiles. The latter gauges an expert's ability to concentrate high probability mass in small interquartile intervals. Some critics argue that this performance-based model fails to outperform models that assign experts equal weights. Their argument implies that any observed difference in expert performance is due to random fluctuation rather than a persistent property of an expert, and that experts should therefore be treated equally and equally weighted. However, if differences in experts' performance were due to random fluctuations, then hypothetical experts created by randomly recombining the actual experts' assessments should perform statistically as well as the actual experts. This is called the random expert hypothesis, and it is investigated here using 44 post-2006 professional expert elicitation studies obtained from the TU Delft database.
For each study, 1000 hypothetical expert panels are simulated whose elicitations are a random mix of all expert elicitations within that study. Results indicate that the actual experts' statistical accuracy is significantly better than that of the randomly created experts. The study does not consider experts' informativeness, but it still provides strong support for performance-based weighting as in the Classical Model.
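The simulation described in the abstract can be sketched in a few lines. The sketch below is an illustration, not code from the chapter: it assumes the standard Classical Model setup with elicited 5th/50th/95th percentiles, the chi-square likelihood-ratio form of the statistical accuracy (calibration) score, and hypothetical names (`calibration_score`, `random_expert_panel`) for the two steps.

```python
import numpy as np
from scipy.stats import chi2

# Expected probabilities of a realization falling in each of the four
# inter-quantile bins implied by elicited 5th, 50th, 95th percentiles.
P_EXPECTED = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """Statistical accuracy of one expert (Classical Model convention).

    quantiles:    (n_seeds, 3) array of elicited 5/50/95 percentiles.
    realizations: (n_seeds,) observed values of the seed variables.
    """
    # Count how many realizations fall in each inter-quantile bin.
    counts = np.zeros(4)
    for q, x in zip(quantiles, realizations):
        counts[np.searchsorted(q, x)] += 1
    s = counts / len(realizations)
    # Likelihood-ratio statistic 2N * KL(s || p) is asymptotically
    # chi-square with 3 degrees of freedom; the score is its p-value.
    kl = np.sum(np.where(s > 0, s * np.log(s / P_EXPECTED), 0.0))
    return chi2.sf(2 * len(realizations) * kl, df=3)

def random_expert_panel(panel, rng):
    """Recombine assessments: for each seed question, each hypothetical
    expert borrows a randomly chosen actual expert's percentiles.

    panel: (n_experts, n_seeds, 3) array of elicited percentiles.
    """
    n_experts, n_seeds, _ = panel.shape
    picks = rng.integers(0, n_experts, size=(n_experts, n_seeds))
    return panel[picks, np.arange(n_seeds), :]
```

Repeating `random_expert_panel` 1000 times per study and comparing the resulting calibration scores against the actual experts' scores is the essence of the test: under the random expert hypothesis, the two score distributions should be statistically indistinguishable.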

Date: 2021
Citations: View citations in EconPapers (2)




Persistent link: https://EconPapers.repec.org/RePEc:spr:isochp:978-3-030-46474-5_3

Ordering information: This item can be ordered from
http://www.springer.com/9783030464745

DOI: 10.1007/978-3-030-46474-5_3


More chapters in International Series in Operations Research & Management Science from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-04-01
Handle: RePEc:spr:isochp:978-3-030-46474-5_3