Idea Evaluation for Solutions to Specialized Problems: Leveraging the Potential of Crowds and Large Language Models

Henner Gimpel, Robert Laubacher, Fabian Probost, Ricarda Schäfer and Manfred Schoch
Additional contact information
Henner Gimpel: FIM Research Center for Information Management
Robert Laubacher: Massachusetts Institute of Technology
Fabian Probost: FIM Research Center for Information Management
Ricarda Schäfer: FIM Research Center for Information Management
Manfred Schoch: FIM Research Center for Information Management

Group Decision and Negotiation, 2025, vol. 34, issue 4, No 9, 903-932

Abstract: Complex problems such as climate change pose severe challenges to societies worldwide. To overcome these challenges, digital innovation contests have emerged as a promising tool for idea generation. However, assessing idea quality in innovation contests is increasingly problematic in domains where specialized knowledge is needed. Traditionally, expert juries are responsible for idea evaluation in such contests, but experts are a substantial bottleneck, as they are often scarce and expensive. To assess whether expert juries could be replaced, we consider two approaches: crowdsourcing and a Large Language Model (LLM). Both aggregate collective knowledge and could therefore come close to expert knowledge. We compare expert jury evaluations from innovation contests on climate change with crowdsourced and LLM evaluations and assess performance differences. Results indicate that crowds and LLMs can evaluate ideas in this complex problem domain, while contest specialization (the degree to which a contest relates to a knowledge-intensive domain rather than a broad field of interest) inhibits crowd evaluation performance but does not influence the evaluation performance of LLMs. Our contribution lies in demonstrating that crowds and LLMs (as opposed to traditional expert juries) are suitable for idea evaluation, which allows innovation contest operators to integrate the knowledge of crowds and LLMs and reduce the resource bottleneck of expert juries.
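The abstract describes benchmarking aggregated crowd ratings and an LLM's ratings against an expert jury. A minimal sketch of what such a comparison could look like follows; the toy scores, the averaging of individual crowd votes, and the choice of Spearman rank correlation as the agreement measure are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: toy data, averaging, and Spearman rank
# correlation are assumptions, not the paper's actual measures or
# aggregation procedure.
from statistics import mean
from scipy.stats import spearmanr

# Toy quality scores for five contest ideas (scale 1-5).
jury_scores = [4.5, 2.0, 3.5, 5.0, 1.5]     # expert jury benchmark
crowd_ratings = [                           # individual crowd votes per idea
    [4, 5, 4], [2, 3, 2], [3, 4, 3], [5, 5, 4], [1, 2, 2],
]
llm_scores = [4.0, 2.5, 3.0, 4.5, 2.0]      # one LLM rating per idea

# Aggregate the crowd by averaging individual votes per idea.
crowd_scores = [mean(votes) for votes in crowd_ratings]

# Rank agreement with the expert jury as one possible performance measure.
rho_crowd, _ = spearmanr(jury_scores, crowd_scores)
rho_llm, _ = spearmanr(jury_scores, llm_scores)
print(f"crowd vs. jury: rho = {rho_crowd:.2f}")
print(f"LLM vs. jury:   rho = {rho_llm:.2f}")

Mean aggregation and rank correlation are only one plausible pairing of aggregation and agreement measures; the paper's actual operationalization of evaluation performance may differ.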

Keywords: Idea evaluation; Crowdsourcing; Large language model; Specialized knowledge
Date: 2025

Downloads: (external link)
http://link.springer.com/10.1007/s10726-025-09935-y Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:spr:grdene:v:34:y:2025:i:4:d:10.1007_s10726-025-09935-y

Ordering information: This journal article can be ordered from
http://www.springer.com/journal/10726/PS2

DOI: 10.1007/s10726-025-09935-y

Group Decision and Negotiation is currently edited by Gregory E. Kersten

More articles in Group Decision and Negotiation from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:spr:grdene:v:34:y:2025:i:4:d:10.1007_s10726-025-09935-y