Epistemic Injustice in Generative AI: A Pipeline Taxonomy, Empirical Hypotheses, and Stage-Matched Governance
Joffrey Baeyaert
EthAIca: Journal of Ethics, AI and Critical Analysis, 2025, vol. 4, 417
Abstract:
Introduction: Generative AI systems increasingly influence whose knowledge is represented, how meaning is framed, and who benefits from information. Yet these systems frequently perpetuate epistemic injustices: structural harms that compromise the credibility, intelligibility, and visibility of marginalized communities.
Objective: This study systematically analyzes how epistemic injustices emerge across the generative AI pipeline and proposes a framework for diagnosing, testing, and mitigating these harms through targeted design and governance strategies.
Method: A mutually exclusive and collectively exhaustive (MECE) taxonomy maps testimonial, hermeneutical, and distributive injustices onto four development stages: data collection, model training, inference, and dissemination. Building on this framework, four theory-driven hypotheses (H1–H4) connect design decisions to measurable epistemic harms. Two of these, on role-calibrated explanations (H3) and opacity-induced deference (H4), are tested empirically through a PRISMA-style meta-synthesis of 21 behavioral studies.
Results: AI opacity significantly increases deference to system outputs (effect size d ≈ 0.46–0.58), reinforcing authority biases. In contrast, explanations aligned with stakeholder roles enhance perceived trustworthiness and fairness (d ≈ 0.40–0.84). These effects demonstrate the material impact of design choices on epistemic outcomes.
Conclusions: Epistemic justice should be treated not as a post hoc ethical concern but as a designable, auditable property of AI systems. We propose stage-specific governance interventions, such as participatory data audits, semantic drift monitoring, and role-sensitive explanation regimes, to embed justice across the pipeline. This framework supports the development of more accountable, inclusive generative AI.
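For readers interpreting the reported magnitudes: the abstract's d values are standardized mean differences, and in behavioral meta-syntheses d conventionally denotes Cohen's d (an assumption here, since the record itself does not define the statistic):

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}} $$

Under Cohen's customary benchmarks (0.2 small, 0.5 medium, 0.8 large), the reported ranges of 0.46–0.58 and 0.40–0.84 therefore correspond to roughly medium through large effects.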
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:dbk:ethaic:v:4:y:2025:i::p:417:id:417
DOI: 10.56294/ai2025417