Nonlinear fusion is optimal for a wide class of multisensory tasks

Marcus Ghosh, Gabriel Béna, Volker Bormuth and Dan F M Goodman

PLOS Computational Biology, 2024, vol. 20, issue 7, 1-20

Abstract: Animals continuously detect information via multiple sensory channels, like vision and hearing, and integrate these signals to realise faster and more accurate decisions, a fundamental neural computation known as multisensory integration. A widespread view of this process is that multimodal neurons linearly fuse information across sensory channels. However, does linear fusion generalise beyond the classical tasks used to explore multisensory integration? Here, we develop novel multisensory tasks, which focus on the underlying statistical relationships between channels, and deploy models at three levels of abstraction: from probabilistic ideal observers to artificial and spiking neural networks. Using these models, we demonstrate that when the information provided by different channels is not independent, linear fusion performs sub-optimally and even fails in extreme cases. This leads us to propose a simple nonlinear algorithm for multisensory integration which is compatible with our current knowledge of multimodal circuits, excels in naturalistic settings and is optimal for a wide class of multisensory tasks. Thus, our work emphasises the role of nonlinear fusion in multisensory integration, and provides testable hypotheses for the field to explore at multiple levels: from single neurons to behaviour.

Author summary: Rather than relying on one sensory modality at a time, animals merge information across their senses and make decisions based on these combined signals. Imagine a predator watching a patch of long grass for prey. The grass moves, indicating the presence of prey, another animal or just the wind. The predator could resolve this ambiguity by combining their visual data with any coincident sounds or the feel of the wind on their skin. However, how should these signals be combined? Prior work suggests that a sum should be best (i.e. sight + sound). However, we show that this strategy would perform poorly and even fail completely in many multisensory scenarios. Instead, we propose a simple nonlinear function f(sight, sound) which excels at these tasks, and could plausibly be implemented by networks of neurons.
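To make the abstract's central claim concrete, the following is a minimal self-contained sketch in Python (a hypothetical toy task, not the paper's actual stimuli, models or fusion function): when the label is carried by the relationship between two noisy channels rather than by either channel alone, the linear rule sign(x1 + x2) performs at chance, while a simple nonlinear rule sign(x1 * x2) recovers it.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical toy task: the label is the *agreement* between two channels,
# so neither channel is informative on its own (non-independent information).
labels = rng.choice([-1, 1], size=n)
s1 = rng.choice([-1, 1], size=n)
s2 = labels * s1                         # by construction, label = s1 * s2
x1 = s1 + 0.5 * rng.standard_normal(n)  # noisy "sight" channel
x2 = s2 + 0.5 * rng.standard_normal(n)  # noisy "sound" channel

# Linear fusion (sight + sound): no weighted sum can express s1 * s2.
linear_acc = np.mean(np.sign(x1 + x2) == labels)
# Nonlinear fusion f(sight, sound): a multiplicative interaction suffices.
nonlinear_acc = np.mean(np.sign(x1 * x2) == labels)

print(f"linear fusion accuracy:    {linear_acc:.3f}")    # ~0.5 (chance)
print(f"nonlinear fusion accuracy: {nonlinear_acc:.3f}") # well above chance

This mirrors the abstract's point that linear fusion can "fail in extreme cases" when channel information is not independent; the paper's own tasks and proposed algorithm differ in detail.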

Date: 2024

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012246 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 12246&type=printable (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1012246

DOI: 10.1371/journal.pcbi.1012246

