Is Model Fitting Necessary for Model-Based fMRI?
Robert C. Wilson and Yael Niv
PLOS Computational Biology, 2015, vol. 11, issue 6, 1-21
Abstract:
Model-based analysis of fMRI data is an important tool for investigating the computational role of different brain regions. With this method, theoretical models of behavior can be leveraged to find the brain structures underlying variables from specific algorithms, such as prediction errors in reinforcement learning. One potential weakness with this approach is that models often have free parameters and thus the results of the analysis may depend on how these free parameters are set. In this work we asked whether this hypothetical weakness is a problem in practice. We first developed general closed-form expressions for the relationship between results of fMRI analyses using different regressors, e.g., one corresponding to the true process underlying the measured data and one a model-derived approximation of the true generative regressor. Then, as a specific test case, we examined the sensitivity of model-based fMRI to the learning rate parameter in reinforcement learning, both in theory and in two previously published datasets. We found that even gross errors in the learning rate lead to only minute changes in the neural results. Our findings thus suggest that precise model fitting is not always necessary for model-based fMRI. They also highlight the difficulty in using fMRI data for arbitrating between different models or model parameters. While these specific results pertain only to the effect of learning rate in simple reinforcement learning models, we provide a template for testing for effects of different parameters in other models.
Author Summary:
In recent years, model-based fMRI has emerged as a powerful technique in psychology and neuroscience. With this method, computational models of behavior can be leveraged to identify where, whether and how different algorithms are implemented in the brain. Yet this approach seems to have an Achilles heel in that the models frequently have free parameters, and errors in setting these parameters could lead to errors in interpretation of the data. Here we asked whether this potential weakness, in theory, is an actual weakness in practice. In particular, we tested whether errors in estimating participants’ learning rate in a trial-and-error reinforcement learning setting would have adverse effects on identifying the neural substrates of the learning process. Amazingly, it turns out that even gross errors in the learning rate lead to only minute changes in the neural results. The good news is that precise identification of free parameters is not always necessary; the corollary bad news is that it may be harder to identify the precise computational roles of different brain areas than we had previously appreciated. Based on our analytical results, we offer suggestions for designing experiments that maximize or minimize sensitivity to model parameters, as needed.
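The paper derives its own general expressions; as a hedged illustration of the core intuition only, the display below states a standard univariate OLS identity (a textbook result, not quoted from the paper). If BOLD data follow y = beta*x + noise with true regressor x, but the GLM instead uses an approximate, model-derived regressor x-hat, the expected estimate is attenuated by how well x-hat tracks x:

```latex
% Hedged illustration (textbook OLS identity, not the paper's exact expression):
% data y = \beta x + \epsilon analyzed with an approximate regressor \hat{x}
\[
  \mathbb{E}[\hat{\beta}]
    = \beta \,\frac{\operatorname{Cov}(x,\hat{x})}{\operatorname{Var}(\hat{x})}
    = \beta \,\rho_{x\hat{x}}\,\frac{\sigma_{x}}{\sigma_{\hat{x}}}
\]
% so an approximate regressor that correlates strongly with the true one
% yields nearly the same estimated neural effect.
```

On this view, the question of parameter sensitivity reduces to how strongly regressors computed under wrong parameter settings correlate with the true regressor.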
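To make the learning-rate result concrete, here is a minimal Python sketch (not the authors' code) that simulates Rescorla-Wagner prediction errors under a "true" learning rate and a grossly wrong one, then measures the correlation between the two candidate regressors. The task structure (a Bernoulli bandit with one reversal) and the learning rates 0.3 and 0.7 are hypothetical values chosen for illustration.

```python
# Sketch: how correlated are prediction-error regressors computed
# with different learning rates? (Assumed setup, not the paper's.)
import numpy as np

rng = np.random.default_rng(0)

def rw_prediction_errors(rewards, alpha, v0=0.5):
    """Delta-rule (Rescorla-Wagner) prediction errors for one option."""
    v = v0
    deltas = np.empty(len(rewards))
    for t, r in enumerate(rewards):
        deltas[t] = r - v        # prediction error: delta_t = r_t - V_t
        v += alpha * deltas[t]   # value update: V_{t+1} = V_t + alpha * delta_t
    return deltas

# Bernoulli rewards with one abrupt reversal (hypothetical task structure).
p_reward = np.r_[np.full(100, 0.8), np.full(100, 0.2)]
rewards = (rng.random(p_reward.size) < p_reward).astype(float)

# "True" vs. grossly wrong learning rate (illustrative values only).
d_true = rw_prediction_errors(rewards, alpha=0.3)
d_wrong = rw_prediction_errors(rewards, alpha=0.7)

# Correlation between the two candidate regressors; by the attenuation
# identity above, a high value implies nearly identical GLM results.
r = np.corrcoef(d_true, d_wrong)[0, 1]
print(f"correlation between prediction-error regressors: {r:.3f}")
```

In runs like this the correlation is typically high, because the reward term is shared across both regressors and dominates their variance, which is consistent with the paper's finding that gross learning-rate errors barely change the neural results.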
Date: 2015
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004237 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 04237&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1004237
DOI: 10.1371/journal.pcbi.1004237