When AI turns grant evaluation into a lottery
Martin Bulla (Max Planck Institute For Ornithology) and Peter Mikula
No d8gcu_v1, MetaArXiv from Center for Open Science
Abstract:
Research funding schemes are increasingly struggling to reliably distinguish scientific merit through traditional scoring. Using the most recent evaluations of the EU Marie Skłodowska-Curie Actions postdoctoral fellowships as a case study, we show how the rapid institutional adoption of Large Language Models coincides with unprecedented score compression: only ~5% of proposals now fall below the 70% quality threshold, down from ~20% in previous years. We argue that this “excellence saturation” has reached a tipping point that exposes the structural limits of fine-grained peer review and alters reviewer decision-making dynamics, leaving funding decisions to resemble a lottery. The shift to AI-assisted grant writing effectively decouples a proposal’s form from its scientific substance, necessitating a transition from fine-grained ranking toward managing an abundance of excellence through alternative allocation mechanisms, such as funding lotteries.
Date: 2026-02-19
Downloads: https://osf.io/download/6995d43acfedca64a27af9bb/
Persistent link: https://EconPapers.repec.org/RePEc:osf:metaar:d8gcu_v1
DOI: 10.31219/osf.io/d8gcu_v1