Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
Joshua G A Cashaback,
Heather R McGregor,
Ayman Mohatarem and
Paul L Gribble
PLOS Computational Biology, 2017, vol. 13, issue 7, 1-28
Abstract:
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.

Author Summary: Whether serving a tennis ball on a gusty day or walking over an unpredictable surface, the human nervous system has a remarkable ability to account for uncertainty when performing goal-directed actions. Here we address how different types of feedback, error and reinforcement, are used to guide such behavior during sensorimotor learning. Using a task that dissociates the optimal predictions of error-based and reinforcement-based learning, we show that the human sensorimotor system uses two distinct loss functions when deciding where to aim the hand during a reach—one that minimizes error and another that maximizes success. Interestingly, when both of these forms of feedback are available our nervous system heavily weights error feedback over reinforcement feedback.
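The mean-versus-mode dissociation at the heart of the study can be sketched numerically: under a quadratic (error-based) loss, the optimal compensation for a skewed shift distribution equals its mean, whereas under an all-or-nothing (reinforcement-based) loss, the aim that maximizes the probability of hitting a finite-width target sits near the mode. The lognormal shift distribution, target half-width, and parameter values below are illustrative assumptions, not the values used in the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skewed lateral-shift distribution (units: cm).
# A lognormal is used for illustration; its mean and mode differ.
shifts = rng.lognormal(mean=0.5, sigma=0.6, size=100_000)

# Error-based loss: minimize expected squared endpoint error.
# The optimal aim compensates by the MEAN of the shifts.
aim_error_based = -shifts.mean()

# Reinforcement-based loss: maximize the probability that the cursor
# lands within a target of half-width w. For a narrow target, the
# optimal aim compensates near the MODE of the shifts.
w = 0.25  # hypothetical target half-width (cm)
candidates = np.linspace(-6.0, 0.0, 601)
hit_rates = [np.mean(np.abs(a + shifts) < w) for a in candidates]
aim_reinforcement = candidates[int(np.argmax(hit_rates))]

# Because the distribution is right-skewed (mean > mode), the two loss
# functions prescribe different amounts of compensation.
print(f"error-based aim:         {aim_error_based:.2f} cm")
print(f"reinforcement-based aim: {aim_reinforcement:.2f} cm")
```

For this right-skewed distribution the error-based aim compensates more strongly (by the mean, about −1.97 cm) than the reinforcement-based aim (near the mode, about −1.2 cm), which is the signature the experiment exploits to tell the two loss functions apart.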
Date: 2017
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005623 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 05623&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1005623
DOI: 10.1371/journal.pcbi.1005623