EconPapers
Comparing Human-Only, AI-Assisted, and AI-Led Teams on Assessing Research Reproducibility in Quantitative Social Science

Abel Brodeur, David Valenta, Alexandru Marcoci, Juan P. Aparicio, Derek Mikola, Bruno Barbarioli, Rohan Alexander, Lachlan Deer and Tom Stafford
Additional contact information
David Valenta: University of Ottawa
Alexandru Marcoci: University of Nottingham
Juan P. Aparicio: University of Ottawa
Derek Mikola: University of Ottawa
Bruno Barbarioli: University of Ottawa
Rohan Alexander: University of Toronto
Tom Stafford: University of Sheffield

No 17645, IZA Discussion Papers from IZA Network @ LISER

Abstract: This study evaluates the effectiveness of varying levels of human and artificial intelligence (AI) integration in reproducibility assessments. We computationally reproduced quantitative results from published articles in the social sciences with 288 researchers, randomly assigned to 103 teams across three groups: human-only teams, AI-assisted teams, and teams whose task was to minimally guide an AI to conduct reproducibility checks (the "AI-led" approach). Findings reveal that when working independently, human teams matched the reproducibility success rates of teams using AI assistance, while both groups substantially outperformed AI-led approaches, with human teams achieving success rates 57 percentage points higher than AI-led teams. Human teams found significantly more major errors than both AI-assisted teams and AI-led teams. AI-assisted teams demonstrated an advantage over more automated approaches, detecting 0.4 more major errors per team than AI-led teams, though still significantly fewer than human-only teams. Finally, both human and AI-assisted teams significantly outperformed AI-led approaches in both proposing and implementing comprehensive robustness checks.

Keywords: artificial intelligence; reproducibility; coding error; robustness
JEL-codes: A14 C18
Pages: 52
Date: 2025-01
Citations: 1 (in EconPapers)

Download: https://docs.iza.org/dp17645.pdf (application/pdf)

Related works:
Working Paper: Comparing Human-Only, AI-Assisted, and AI-Led Teams on Assessing Research Reproducibility in Quantitative Social Science (2025)
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:iza:izadps:dp17645


More papers in IZA Discussion Papers from IZA Network @ LISER. Contact information at EDIRC.
Bibliographic data for series maintained by Mark Fallak.

Page updated 2026-03-06
Handle: RePEc:iza:izadps:dp17645