Assessing the potential of a Bayesian ranking as an alternative to consensus meetings for decision making in research funding: A Case Study of Marie Skłodowska-Curie Actions

Rachel Heyard, David Pina, Ivan Buljan and Ana Marusic

No c4ujg_v1, MetaArXiv from Center for Open Science

Abstract: Funding agencies rely on panel or consensus meetings to summarise individual evaluations of grant proposals into a final ranking. However, previous research has shown that consensus meetings can be inconsistent in their decisions and inefficient. Using data from the Marie Skłodowska-Curie Actions, we investigated how an algorithmic approach that summarises the information from individual grant proposal evaluations compares to the decisions reached after consensus meetings, and we present an exploratory comparative analysis. The algorithmic approach was a Bayesian hierarchical model that produces a Bayesian ranking of the proposals from the individual evaluation reports submitted before the consensus meeting. Parameters from the Bayesian hierarchical model and the resulting ranking were compared to the scores, rankings and decisions established in the consensus meeting reports. The evaluations of 1,006 proposals submitted to three panels (Life Science, Mathematics, and Social Sciences and Humanities) in two call years (2015 and 2019) were examined in detail. Overall, we found large discrepancies between the consensus report scores and the scores the Bayesian hierarchical model would have predicted. The discrepancies were less pronounced when the scores were aggregated into funding rankings or decisions. The best agreement between the Bayesian ranking and the final funding ranking was observed for funding schemes with very low success rates. While we set out to understand whether algorithmic approaches that summarise individual evaluation scores could replace consensus meetings, we conclude that the individual scores assigned before the consensus meetings are currently not useful for predicting the final funding outcomes of the proposals. Based on our results, we suggest using the individual evaluations for triage, so that the weakest proposals are not discussed in panel or consensus meetings. This would allow a more nuanced evaluation of a smaller set of proposals and help minimise the uncertainty and biases involved in allocating funding.
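For illustration only, below is a minimal sketch of how a Bayesian hierarchical model of this kind could be specified in Python with PyMC: a latent quality per proposal plus a reviewer effect, estimated from individual scores, with proposals then ranked by posterior mean quality. This is not the authors' code; the simulated data, variable names and priors are hypothetical assumptions.

import numpy as np
import pymc as pm

# Simulated stand-in for individual evaluation reports (hypothetical data)
rng = np.random.default_rng(1)
n_proposals, n_reviewers, n_reports = 50, 20, 150
proposal_idx = rng.integers(0, n_proposals, size=n_reports)  # proposal scored by each report
reviewer_idx = rng.integers(0, n_reviewers, size=n_reports)  # reviewer writing each report
true_quality = rng.normal(0.0, 5.0, size=n_proposals)
scores = 80 + true_quality[proposal_idx] + rng.normal(0.0, 3.0, size=n_reports)

with pm.Model() as model:
    intercept = pm.Normal("intercept", mu=80, sigma=10)                  # overall score level
    quality = pm.Normal("quality", mu=0, sigma=10, shape=n_proposals)    # latent proposal quality
    reviewer = pm.Normal("reviewer", mu=0, sigma=5, shape=n_reviewers)   # reviewer leniency/severity
    sigma = pm.HalfNormal("sigma", sigma=10)                             # residual score noise
    pm.Normal(
        "score",
        mu=intercept + quality[proposal_idx] + reviewer[reviewer_idx],
        sigma=sigma,
        observed=scores,
    )
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Bayesian ranking: order proposals by posterior mean latent quality (best first)
post_quality = trace.posterior["quality"].mean(dim=("chain", "draw")).values
ranking = np.argsort(-post_quality)
print("Top 10 proposals:", ranking[:10])

In the paper's setting, a ranking of this kind (or the posterior probability of clearing a funding threshold) would then be compared against the scores, rankings and decisions recorded in the consensus meeting reports.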

Date: 2024-06-18
Downloads: https://osf.io/download/6643846be8eec564406bebee/


Persistent link: https://EconPapers.repec.org/RePEc:osf:metaar:c4ujg_v1

DOI: 10.31219/osf.io/c4ujg_v1



 
Handle: RePEc:osf:metaar:c4ujg_v1