Rapid runtime learning by curating small datasets of high-quality items obtained from memory

Joseph Scott German, Guofeng Cui, Chenliang Xu and Robert A Jacobs

PLOS Computational Biology, 2023, vol. 19, issue 10, 1-32

Abstract: We propose the “runtime learning” hypothesis, which states that people quickly learn to perform unfamiliar tasks as the tasks arise by using task-relevant instances of concepts stored in memory during mental training. To make learning rapid, the hypothesis claims that only a few class instances are used, but these instances are especially valuable for training. The paper motivates the hypothesis by describing related ideas from the cognitive science and machine learning literatures. Using computer simulation, we show that deep neural networks (DNNs) can learn effectively from small, curated training sets, and that valuable training items tend to lie toward the centers of data item clusters in an abstract feature space. In a series of three behavioral experiments, we show that people can also learn effectively from small, curated training sets. Critically, we find that participant reaction times and fitted drift rates are best accounted for by the confidences of DNNs trained on small datasets of highly valuable items. We conclude that the runtime learning hypothesis is a novel conjecture about the relationship between learning and memory with the potential for explaining a wide variety of cognitive phenomena.

Author summary: Human cognition is remarkably flexible, with the ability to reliably perform a wide variety of tasks, including idiosyncratic and unfamiliar ones, in a wide variety of contexts. However, despite impressive advances in machine learning, leading to artificial intelligences capable of outperforming humans on select individual tasks, this flexibility has yet to be replicated by computational models. Doing so is essential both to creating general artificial intelligence and to achieving a deeper understanding of the human brain and mind. To account for this vital property of human cognition, we introduce the “runtime learning hypothesis”, in which the brain rapidly constructs task-specific models in response to the needs of the moment using an internal training process based on valuable exemplars of the relevant concepts. We examine the plausibility of this hypothesis using human behavioral experiments and machine-learning-based computational modeling. The results are consistent with the human brain using runtime learning to flexibly perform tasks.
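
The abstract's claim that valuable training items tend to lie toward the centers of data item clusters in a feature space suggests a simple curation procedure. The sketch below (Python with numpy and scikit-learn) illustrates one possible reading of that idea: cluster each class's feature vectors and keep only the items nearest to the cluster centers as a small training set. The function name, feature dimensions, and the use of k-means are illustrative assumptions, not the authors' actual pipeline.

    # Hypothetical sketch of the curation heuristic described in the abstract:
    # embed items in a feature space, cluster them per class, and keep only
    # the items closest to each cluster center as a small "high-value" set.
    import numpy as np
    from sklearn.cluster import KMeans

    def curate_training_set(features, labels, items_per_class=3, seed=0):
        """Return indices of a small curated subset: for each class, the items
        nearest to the centers of k-means clusters fit on that class's features."""
        curated = []
        for cls in np.unique(labels):
            idx = np.flatnonzero(labels == cls)
            X = features[idx]
            k = min(items_per_class, len(idx))
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
            # For each cluster, pick the single item closest to its center.
            for center in km.cluster_centers_:
                nearest = idx[np.argmin(np.linalg.norm(X - center, axis=1))]
                curated.append(nearest)
        return np.unique(np.array(curated))

    # Toy usage: 200 items, 8-D features, 2 classes -> a handful of curated items
    # on which a small classifier (e.g., a DNN) could then be trained.
    if __name__ == "__main__":
        rng = np.random.RandomState(0)
        feats = rng.randn(200, 8)
        labs = rng.randint(0, 2, size=200)
        print("curated indices:", curate_training_set(feats, labs, items_per_class=3))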

Date: 2023

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011445 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 11445&type=printable (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1011445

DOI: 10.1371/journal.pcbi.1011445



 
Handle: RePEc:plo:pcbi00:1011445