Look twice: A generalist computational model predicts return fixations across tasks and species
Mengmi Zhang, Marcelo Armendariz, Will Xiao, Olivia Rose, Katarina Bendtz, Margaret Livingstone, Carlos Ponce and Gabriel Kreiman
PLOS Computational Biology, 2022, vol. 18, issue 11, 1-38
Abstract:
Primates constantly explore their surroundings via saccadic eye movements that bring different parts of an image into high resolution. In addition to exploring new regions in the visual field, primates also make frequent return fixations, revisiting previously foveated locations. We systematically studied a total of 44,328 return fixations out of 217,440 fixations. Return fixations were ubiquitous across different behavioral tasks, in monkeys and humans, both when subjects viewed static images and when subjects performed natural behaviors. Return fixation locations were consistent across subjects, tended to occur within short temporal offsets, and typically followed a 180-degree turn in saccadic direction. To understand the origin of return fixations, we propose a proof-of-principle, biologically inspired, and image-computable neural network model. The model combines five key modules: an image feature extractor, bottom-up saliency cues, task-relevant visual features, finite inhibition-of-return, and saccade size constraints. Even though no free parameters are fine-tuned for each specific task, species, or condition, the model produces fixation sequences that resemble the universal properties of return fixations. These results provide initial steps toward a mechanistic understanding of the trade-off between rapid foveal recognition and the need to scrutinize previous fixation locations.

Author summary: We move our eyes several times a second, bringing the center of gaze into focus and high resolution. While we typically assume that we can rapidly recognize the contents at each fixation, it turns out that we often move our eyes back to previously visited locations. These return fixations are ubiquitous across different tasks, conditions, and species. A computational model captures these eye movements and return fixations using four key mechanisms: extraction of salient parts of an image, incorporation of task goals such as the target during visual search, a constraint against making large eye movements, and a forgetful memory of previous locations. Neither the extreme of getting stuck at a single location nor the extreme of never revisiting previous locations seems adequate for visual processing. Instead, the combination of these four mechanisms allows the visual system to achieve a happy medium during scene understanding.
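To make the combination of modules concrete, the following is a minimal sketch, in Python/NumPy, of how the five modules described in the abstract could interact in a fixation-selection loop. This is not the authors' released implementation: the map shapes, weights, decay rate, and Gaussian saccade-size prior are illustrative assumptions. The key point it illustrates is that a finite (decaying) inhibition-of-return memory is what permits return fixations.

    # A minimal sketch (not the authors' code) of a fixation-selection loop
    # combining the paper's five modules. All parameter values are assumptions.
    import numpy as np

    def simulate_scanpath(saliency, task_relevance, n_fixations=10,
                          ior_decay=0.7, ior_strength=1.0, saccade_sigma=5.0):
        """Generate a fixation sequence from a combined priority map.

        saliency, task_relevance : 2-D arrays of equal shape (bottom-up and
            top-down cues; in the paper both derive from a feature extractor).
        ior_decay : per-step decay of inhibition-of-return. Values < 1 make
            the memory forgetful, allowing return fixations; 1.0 forbids them.
        saccade_sigma : width (in pixels) of a Gaussian preference for short
            saccades, standing in for the saccade size constraint.
        """
        h, w = saliency.shape
        ior = np.zeros((h, w))          # finite inhibition-of-return memory
        ys, xs = np.mgrid[0:h, 0:w]
        fix = np.unravel_index(np.argmax(saliency + task_relevance), (h, w))
        scanpath = [fix]
        for _ in range(n_fixations - 1):
            # Gaussian penalty on large saccades, centered at current fixation.
            dist2 = (ys - fix[0]) ** 2 + (xs - fix[1]) ** 2
            size_prior = np.exp(-dist2 / (2 * saccade_sigma ** 2))
            priority = (saliency + task_relevance) * size_prior - ior
            fix = np.unravel_index(np.argmax(priority), (h, w))
            # Inhibit the new location, then let all inhibition decay so that
            # previously visited locations can win again (return fixations).
            ior[fix] += ior_strength
            ior *= ior_decay
            scanpath.append(fix)
        return scanpath

    # Toy usage: random maps stand in for model-derived saliency and task cues.
    rng = np.random.default_rng(0)
    print(simulate_scanpath(rng.random((32, 32)), rng.random((32, 32))))

In this sketch, setting ior_decay near 1 approximates strict inhibition-of-return (few or no returns, risking never rescrutinizing a location), while smaller values make the memory more forgetful and returns more frequent, mirroring the trade-off described in the author summary.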
Date: 2022
Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010654 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 10654&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1010654
DOI: 10.1371/journal.pcbi.1010654