Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study

Tara Templin, Shuyi Song, Sophia Fort and Nasa Sinnott-Armstrong

PLOS Digital Health, 2026, vol. 5, issue 3, 1-18

Abstract: Artificial intelligence (AI) has transformative potential in public health, but its impact is limited by models that implicitly prioritize a single stakeholder perspective and do not make explicit, tunable trade-offs between community and clinician endorsement. To address this gap, we introduce Participatory-informed Preference Optimization (PiPrO), a large language model embedding-based calibration framework that generates a single clinical outcome prediction while explicitly accounting for differences between community and physician interpretations of the same scenario. PiPrO takes as input two embeddings derived from a large language model, representing a community-facing context and a physician-facing context. It then applies a shared lightweight feedforward predictor to produce per-stakeholder scores, which are mixed using a single global mixing weight (alpha). Alpha controls how strongly the final prediction reflects the community versus physician responses and is learned using a policy-gradient update driven by abundant but noisy community text and sparse but biased physician text. PiPrO reliably learned stable alpha values and produced a consistent reward signal. Alpha shifts systematically toward physician weighting as community feedback becomes noisier, and toward community weighting as physician feedback becomes more biased. Our results suggest PiPrO's potential to produce more transparent and context-sensitive AI-driven healthcare recommendations. Future research should validate this approach using real-world community inputs to ensure generalizability and practical impact.

Author summary: Artificial intelligence tools are increasingly adopted in medicine and public health, but they are often trained to reflect only one viewpoint. In practice, community members and physicians can interpret the same clinical situation differently, and those differences can matter for recommendations that affect care.
In this study, we developed a method called Participatory-informed Preference Optimization to help a prediction model account for both perspectives while still producing one final prediction. We tested the method in a simulation study using community-facing and physician-facing versions of the same scenario, and we varied how reliable each source of feedback was. We found that the model learned a stable balance between the two perspectives. It shifted toward physician input when community feedback became less reliable, and toward community input when physician feedback became more biased. These results suggest that health-related artificial intelligence can be designed to make trade-offs between stakeholder perspectives more transparent.
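The mechanism described above — two stakeholder-facing embeddings scored by one shared feedforward predictor, blended by a single global weight alpha that is tuned with a policy-gradient update — can be sketched as follows. This is a minimal illustrative reconstruction from the abstract, not the authors' implementation: all names, dimensions, the reward definition, and the REINFORCE-style perturbation scheme are assumptions.

```python
# Illustrative sketch of the PiPrO mechanism from the abstract.
# Assumed throughout: embedding size, predictor architecture, reward shape,
# and the Gaussian-perturbation policy gradient are NOT from the paper.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding dimension (assumed)

# Shared lightweight feedforward predictor (one hidden layer, frozen here
# to keep the sketch focused on learning alpha).
W1 = rng.normal(scale=0.1, size=(DIM, 8))
w2 = rng.normal(scale=0.1, size=8)

def predict(embedding: np.ndarray) -> float:
    """Score one stakeholder-facing embedding with the shared predictor."""
    return float(np.tanh(embedding @ W1) @ w2)

def mixed_prediction(community_emb, physician_emb, alpha: float) -> float:
    """Blend the two per-stakeholder scores with one global weight alpha."""
    return alpha * predict(community_emb) + (1.0 - alpha) * predict(physician_emb)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

# Policy-gradient update: parameterize alpha = sigmoid(theta), sample a
# Gaussian-perturbed alpha, and reinforce perturbations that raise reward
# (a score-function / REINFORCE estimate, since E[reward * eps] tracks
# the covariance between perturbation and reward).
theta, lr, sigma = 0.0, 0.5, 0.1
for step in range(200):
    community_emb = rng.normal(size=DIM)   # stands in for noisy community text
    physician_emb = rng.normal(size=DIM)   # stands in for sparse physician text
    eps = rng.normal(scale=sigma)
    alpha = float(np.clip(sigmoid(theta) + eps, 0.0, 1.0))
    pred = mixed_prediction(community_emb, physician_emb, alpha)
    # Illustrative reward: agreement with a noisy reference signal (assumed).
    target = predict(physician_emb) + rng.normal(scale=0.05)
    reward = -(pred - target) ** 2
    theta += lr * reward * eps / sigma**2

alpha_final = sigmoid(theta)  # learned mixing weight, in (0, 1)
```

Under this toy reward, which favors the physician-side score, theta drifts so that alpha shrinks — mirroring the paper's qualitative finding that the mixing weight shifts toward the more reliable feedback source.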

Date: 2026
Downloads: (external link)
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001294 (text/html)
https://journals.plos.org/digitalhealth/article/fi ... 01294&type=printable (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:plo:pdig00:0001294

DOI: 10.1371/journal.pdig.0001294

More articles in PLOS Digital Health from Public Library of Science
