EconPapers    
 

Publication bias impacts on effect size, statistical power, and magnitude (Type M) and sign (Type S) errors in ecology and evolutionary biology

Yefeng Yang, Alfredo Sánchez-Tójar, Rose E O'Dea, Daniel W.A. Noble, Julia Koricheva, Michael D Jennions, Timothy H. Parker, Malgorzata Lagisz and Shinichi Nakagawa
Additional contact information
Yefeng Yang: City University of Hong Kong
Alfredo Sánchez-Tójar: Bielefeld University
Daniel W.A. Noble: University of New South Wales
Timothy H. Parker: Whitman College
Malgorzata Lagisz: University of New South Wales
Shinichi Nakagawa: University of New South Wales

No 97nv6, EcoEvoRxiv from Center for Open Science

Abstract: Collaborative assessments of the direct replicability of empirical studies in the medical and social sciences have exposed alarmingly low rates of replicability, a phenomenon dubbed the ‘replication crisis’. Poor replicability has spurred cultural changes aimed at improving reliability in these disciplines. Given the absence of equivalent replication projects in ecology and evolutionary biology, two inter-related indicators offer the possibility to retrospectively assess replicability: publication bias and statistical power. This registered report assesses the prevalence and severity of small-study effects (i.e., smaller studies reporting larger effect sizes) and decline effects (i.e., effect sizes decreasing over time) across ecology and evolutionary biology, using 87 meta-analyses comprising 4,250 primary studies and 17,638 effect sizes. Further, we estimate how publication bias might distort the estimation of effect sizes, statistical power, and errors in magnitude (Type M, or exaggeration ratio) and sign (Type S). We show strong evidence for the pervasiveness of both small-study and decline effects in ecology and evolution. Publication bias was widespread, resulting in meta-analytic means being over-estimated by (at least) 0.12 standard deviations. Publication bias also distorted confidence in meta-analytic results, with 66% of initially statistically significant meta-analytic means becoming non-significant after correcting for it. Ecological and evolutionary studies consistently had low statistical power (15%) with a 4-fold exaggeration of effects on average (Type M error rate = 4.4). Notably, publication bias aggravated low power (from 23% to 15%) and Type M error rates (from 2.7 to 4.4) because it creates a non-random sample of effect size evidence. Sign error rates for effect sizes (Type S) increased from 5% to 8% because of publication bias.
Our research provides clear evidence that many published ecological and evolutionary findings are inflated. Our results highlight the importance of designing high-power empirical studies (e.g., via collaborative team science), promoting and encouraging replication studies, testing and correcting for publication bias in meta-analyses, and embracing open and transparent research practices, such as (pre)registration, data- and code-sharing, and transparent reporting.
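The power, Type M, and Type S quantities discussed in the abstract follow the "design analysis" framework of Gelman and Carlin. As an illustration only (this is not the authors' code, and the effect size and standard error below are hypothetical), a minimal Python sketch of that calculation:

```python
import numpy as np
from scipy import stats

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Estimate power, Type S (sign) error, and Type M (exaggeration ratio)
    for a study that measures `true_effect` with standard error `se`,
    in the spirit of Gelman & Carlin's design analysis."""
    z = stats.norm.ppf(1 - alpha / 2)      # two-sided critical value
    lam = true_effect / se                 # true effect in SE units
    p_hi = 1 - stats.norm.cdf(z - lam)     # P(significant, correct sign)
    p_lo = stats.norm.cdf(-z - lam)        # P(significant, wrong sign)
    power = p_hi + p_lo
    type_s = p_lo / power                  # sign errors among significant results
    # Type M by simulation: mean |estimate| among significant results,
    # relative to the true effect.
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se, n_sims)
    sig = np.abs(est / se) > z
    type_m = np.abs(est[sig]).mean() / abs(true_effect)
    return power, type_s, type_m

# Hypothetical design: true effect of 0.5 SD, standard error of 1
power, type_s, type_m = retrodesign(0.5, 1.0)
```

With these illustrative inputs the sketch reproduces the qualitative pattern the paper reports: low power, a Type S error of several percent, and a several-fold exaggeration of significant effects.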

Date: 2022-09-12
New Economics Papers: this item is included in nep-evo

Downloads: (external link)
https://osf.io/download/631ee2c3bd60280344bd6339/



Persistent link: https://EconPapers.repec.org/RePEc:osf:ecoevo:97nv6

DOI: 10.31219/osf.io/97nv6


More papers in EcoEvoRxiv from Center for Open Science
Bibliographic data for this series maintained by OSF.

 
Page updated 2025-03-19
Handle: RePEc:osf:ecoevo:97nv6