
Signs of consciousness in AI: Can GPT-3 tell how smart it really is?

Ljubiša Bojić, Irena Stojković and Zorana Jolić Marjanović
Additional contact information
Ljubiša Bojić: The Institute for Artificial Intelligence Research and Development of Serbia
Irena Stojković: Faculty of Special Education and Rehabilitation
Zorana Jolić Marjanović: Faculty of Philosophy

Palgrave Communications, 2024, vol. 11, issue 1, 1-15

Abstract: The emergence of artificial intelligence (AI) is transforming how humans live and interact, raising both excitement and concern, particularly about the potential for AI consciousness. For example, Google engineer Blake Lemoine suggested that the AI chatbot LaMDA might have become sentient. At the time, GPT-3 was one of the most powerful publicly available language models, capable of simulating human reasoning to a certain extent. The notion that GPT-3 has some degree of consciousness could be linked to its ability to produce human-like responses, hinting at a basic level of understanding. To explore this further, we administered both objective and self-assessment tests of cognitive intelligence (CI) and emotional intelligence (EI) to GPT-3. Results showed that GPT-3 outperformed average humans on CI tests requiring the use and demonstration of acquired knowledge, while its logical reasoning and EI capacities matched those of an average human. GPT-3’s self-assessments of CI and EI did not always align with its objective performance, with variations comparable to those found in different human subsamples (e.g., high performers, males). A further discussion considered whether these results signal emerging subjectivity and self-awareness in AI. Future research should examine various language models to identify emergent properties of AI. The goal is not to discover machine consciousness itself, but to identify signs of its development occurring independently of training and fine-tuning processes. If AI is to be further developed and widely deployed in human interactions, creating empathic AI that mimics human behavior is essential. The rapid advancement toward superintelligence requires continuous monitoring of AI’s human-like capabilities, particularly in general-purpose models, to ensure safety and alignment with human values.

Date: 2024

Downloads (external link): http://link.springer.com/10.1057/s41599-024-04154-3 (abstract, text/html)
Access to full text is restricted to subscribers.

Persistent link: https://EconPapers.repec.org/RePEc:pal:palcom:v:11:y:2024:i:1:d:10.1057_s41599-024-04154-3

Ordering information: This journal article can be ordered from
https://www.nature.com/palcomms/about

DOI: 10.1057/s41599-024-04154-3

More articles in Palgrave Communications from Palgrave Macmillan
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:pal:palcom:v:11:y:2024:i:1:d:10.1057_s41599-024-04154-3