Prompt-Engineering Testing ChatGPT4 and Bard for Assessing Generative-AI Efficacy to Support Decision-Making
Bruce Garvey (Strategy Foresight Limited)
Adam D. M. Svendsen (Norwegian Defence University College (NDUC/FHS))
Chapter 10 in Navigating Uncertainty Using Foresight Intelligence, 2024, pp. 167–212, Springer
Abstract:
In this chapter, we examine what the Generative-AI (Gen-AI) systems of OpenAI's ChatGPT4 and Google's Bard (renamed Gemini in 2024) can offer at each stage of the Strategic Options Analysis (SOA) process. Using a prompt-engineering approach, the work in this chapter has been conducted by running a series of parallel tests of ChatGPT4 and Bard at each stage of the SOA process, with the resulting outputs and findings presented alongside one another for ready comparison. Beginning with the rationale for and development of a 'focus question', the Gen-AI systems are tasked on that basis, following on from a version conducted manually. The chapter moves through the testing procedure before examining each stage of the SOA Process Sequence in depth. The differences between the ChatGPT4 and Bard outputs are displayed side by side for direct comparison. The two systems soon demonstrate their respective strengths and weaknesses, including how their outputs varied over time, such as across the two consecutive days in early June 2023 when the Gen-AI tests were run in parallel. In the section focused on Current Prompting Advice, preliminary conclusions and takeaways are offered in answer to the key question asked: Is Gen-AI/ChatGPT better than a manual process? Responses in this section set the scene for the presentation of overall conclusions and takeaways in the form of both specific and more general insights. Ultimately, this area continues to be one to watch closely, recalling that the clue is in the name of 'artificial intelligence': it remains a requirement to verify Gen-AI outputs against both 'human' and 'real' intelligence. In addition, users should properly assess sources, including whether they and their provenance are kept 'classified' for any of a slew of legitimate confidentiality reasons relating to security, privacy, intentions and methods-used requirements.
Keywords: ChatGPT4; Bard; Generative-AI (Gen-AI); Decision support; Decision-making; Uncertainty; Artificial intelligence (AI); Intelligence Engineering (IE); Strategic Options Analysis (SOA); Prompt engineering
Date: 2024
Persistent link: https://EconPapers.repec.org/RePEc:spr:mgmchp:978-3-031-66115-0_10
Ordering information: This item can be ordered from
http://www.springer.com/9783031661150
DOI: 10.1007/978-3-031-66115-0_10
Series: Management for Professionals (Springer)
Bibliographic data for this series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.