Assessing scale reliability in citizen science motivational research: lessons learned from two case studies in Uganda
Mercy Gloria Ashepet,
Liesbet Vranken,
Caroline Michellier,
Olivier Dewitte,
Rodgers Mutyebere,
Clovis Kabaseke,
Ronald Twongyirwe,
Violet Kanyiginya,
Grace Kagoro-Rugunda,
Tine Huyse and
Liesbet Jacobs
Affiliations:
Mercy Gloria Ashepet: Royal Museum for Central Africa
Liesbet Vranken: Department of Earth and Environmental Sciences
Caroline Michellier: Royal Museum for Central Africa
Olivier Dewitte: Royal Museum for Central Africa
Rodgers Mutyebere: Department of Earth and Environmental Sciences
Clovis Kabaseke: Mountains of the Moon University
Ronald Twongyirwe: Mbarara University of Science and Technology
Violet Kanyiginya: Royal Museum for Central Africa
Grace Kagoro-Rugunda: Mbarara University of Science and Technology
Tine Huyse: Royal Museum for Central Africa
Liesbet Jacobs: University of Amsterdam
Palgrave Communications, 2024, vol. 11, issue 1, 1-18
Abstract:
Citizen science (CS) is gaining global recognition for its potential to democratize and boost scientific research. As such, understanding why people contribute their time, energy, and skills to CS, and why they (dis)continue their involvement, is crucial. While several CS studies draw on existing theoretical frameworks from the psychology and volunteering fields to understand motivations, adapting these frameworks to CS research still lags, and applications in the Global South remain limited. Here we investigated the reliability of two commonly applied psychometric tests, the Volunteer Functions Inventory (VFI) and the Theory of Planned Behaviour (TPB), for understanding participant motivations and behaviour in two CS networks in southwest Uganda: one addressing snail-borne diseases and another focused on natural hazards. Data were collected using a semi-structured questionnaire administered to the CS participants and to a control group of candidate citizen scientists, in both group and individual interview settings. Cronbach’s alpha, as an a priori measure of reliability, indicated moderate to low reliability for the VFI and TPB factors per CS network per interview setting. With evidence of highly skewed distributions, non-unidimensional data, correlated errors, and lack of tau-equivalence, alpha’s underlying assumptions were often violated. More robust measures, McDonald’s omega and the greatest lower bound (GLB), generally showed higher reliability but confirmed the overall patterns, with VFI factors systematically scoring higher and some TPB factors (perceived behavioural control, intention, self-identity, and moral obligation) scoring lower. Metadata analysis revealed that the most problematic items often had weak item–total correlations. We propose that alpha should not be reported blindly, without heed to the nature of the test, its underlying assumptions, and the items comprising it. Additionally, we recommend caution when adapting existing theoretical frameworks to CS research, and we propose the development and validation of context-specific psychometric tests tailored to the unique CS landscape, especially in the Global South.
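The reliability statistics named in the abstract can be made concrete with a short sketch. Below is a minimal, self-contained Python illustration of Cronbach's alpha and the corrected item–total correlation on synthetic 5-point Likert data; the function names and simulated responses are hypothetical, and this is not the authors' analysis code. McDonald's omega and the GLB require an estimated factor or covariance model and are omitted here.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Pearson r between each item and the sum of the remaining items."""
    k = items.shape[1]
    r = np.empty(k)
    for i in range(k):
        rest = np.delete(items, i, axis=1).sum(axis=1)  # scale without item i
        r[i] = np.corrcoef(items[:, i], rest)[0, 1]
    return r

# Hypothetical data: 200 respondents answering a 4-item factor on a 1-5 scale
rng = np.random.default_rng(seed=42)
latent = rng.normal(size=(200, 1))             # shared trait driving all items
noise = rng.normal(size=(200, 4))              # item-specific error
responses = np.clip(np.round(3 + latent + noise), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.2f}")
print("corrected item-total r:", np.round(corrected_item_total(responses), 2))

A weak corrected item–total correlation (conventionally below about 0.3) flags an item that does not hang together with the rest of its factor, which is the pattern the abstract reports for the problematic VFI and TPB items.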
Date: 2024
Downloads: http://link.springer.com/10.1057/s41599-024-02873-1 (abstract, text/html; access to full text is restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:pal:palcom:v:11:y:2024:i:1:d:10.1057_s41599-024-02873-1
Ordering information: This journal article can be ordered from
https://www.nature.com/palcomms/about
DOI: 10.1057/s41599-024-02873-1