Implicit Human Feedback for Large Language Models: A Passive Brain-Computer Interfaces Study Proposal
Diana E. Gherman and
Thorsten O. Zander
Additional contact information
Diana E. Gherman: Brandenburg University of Technology Cottbus
Thorsten O. Zander: Brandenburg University of Technology Cottbus
A chapter in Information Systems and Neuroscience, 2025, pp 279-286 from Springer
Abstract:
Large language models (LLMs) are transforming the way we work, learn, and access information. As our dependence on these tools grows, it becomes crucial to enhance their accuracy and ensure they align with our ethical standards. The highest-performing language models are currently trained and refined with the help of explicit human feedback. Here we propose a study that investigates the feasibility of implicit human feedback through passive brain-computer interfaces (pBCIs). Two calibration paradigms for eliciting and detecting moral judgment and error perception are described. The resulting classification models will be tested in an application phase with simulated chatbot conversations. If proven successful, pBCIs could provide novel and informative implicit human feedback for the process of LLM development.
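As a rough illustration of the kind of pipeline the abstract describes (not the authors' implementation), the sketch below uses scikit-learn on synthetic data: a linear classifier is calibrated on labelled EEG feature vectors, standing in for one of the calibration paradigms (e.g., error perception), and is then applied to new epochs to produce a graded implicit-feedback score per simulated chatbot response. All variable names, feature dimensions, and the choice of classifier are illustrative assumptions.

# Minimal sketch, assuming epoch-wise EEG feature vectors are already extracted;
# synthetic data stands in for recordings from the calibration paradigms.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# --- Calibration phase: 200 trials x 64 features; class 1 = "error perceived" ---
n_trials, n_features = 200, 64
X_calib = rng.normal(size=(n_trials, n_features))
y_calib = rng.integers(0, 2, size=n_trials)
X_calib[y_calib == 1] += 0.3  # inject a weak class difference for the demo

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("Calibration CV accuracy:",
      cross_val_score(clf, X_calib, y_calib, cv=5).mean())
clf.fit(X_calib, y_calib)

# --- Application phase: score epochs recorded while reading chatbot output ---
X_app = rng.normal(size=(10, n_features))
# Probability of the "error perceived" class serves as a graded implicit signal
implicit_feedback = clf.predict_proba(X_app)[:, 1]
print("Implicit feedback per response:", np.round(implicit_feedback, 2))

In a real study the feature vectors would come from preprocessed EEG epochs time-locked to the chatbot's responses, and the resulting scores could feed into an LLM refinement loop in place of explicit ratings; the shrinkage-regularized linear discriminant used here is only one common choice for pBCI calibration.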
Keywords: Passive BCI; LLM; Error-processing; Moral judgement
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:spr:lnichp:978-3-031-71385-9_24
Ordering information: This item can be ordered from
http://www.springer.com/9783031713859
DOI: 10.1007/978-3-031-71385-9_24
Series: Lecture Notes in Information Systems and Organization (Springer)