Connectomic reconstruction predicts visual features used for navigation

Dustin Garner, Emil Kind, Jennifer Yuet Ha Lai, Aljoscha Nern, Arthur Zhao, Lucy Houghton, Gizem Sancer, Tanya Wolff, Gerald M. Rubin, Mathias F. Wernet and Sung Soo Kim
Additional contact information
Dustin Garner: University of California Santa Barbara
Emil Kind: Freie Universität Berlin
Jennifer Yuet Ha Lai: University of California Santa Barbara
Aljoscha Nern: Howard Hughes Medical Institute
Arthur Zhao: Howard Hughes Medical Institute
Lucy Houghton: University of California Santa Barbara
Gizem Sancer: Freie Universität Berlin
Tanya Wolff: Howard Hughes Medical Institute
Gerald M. Rubin: Howard Hughes Medical Institute
Mathias F. Wernet: Freie Universität Berlin
Sung Soo Kim: University of California Santa Barbara

Nature, 2024, vol. 634, issue 8032, 181–190

Abstract: Many animals use visual information to navigate[1–4], but how such information is encoded and integrated by the navigation system remains incompletely understood. In Drosophila melanogaster, EPG neurons in the central complex compute the heading direction[5] by integrating visual input from ER neurons[6–12], which are part of the anterior visual pathway (AVP)[10,13–16]. Here we densely reconstruct all neurons in the AVP using electron-microscopy data[17]. The AVP comprises four neuropils, sequentially linked by three major classes of neurons: MeTu neurons[10,14,15], which connect the medulla in the optic lobe to the small unit of the anterior optic tubercle (AOTUsu) in the central brain; TuBu neurons[9,16], which connect the AOTUsu to the bulb neuropil; and ER neurons[6–12], which connect the bulb to the EPG neurons. On the basis of morphologies, connectivity between neural classes and the locations of synapses, we identify distinct information channels that originate from four types of MeTu neurons, and we further divide these into ten subtypes according to the presynaptic connections in the medulla and the postsynaptic connections in the AOTUsu. Using the connectivity of the entire AVP and the dendritic fields of the MeTu neurons in the optic lobes, we infer potential visual features and the visual area from which any ER neuron receives input. We confirm some of these predictions physiologically. These results provide a strong foundation for understanding how distinct sensory features can be extracted and transformed across multiple processing stages to construct higher-order cognitive representations.
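
The abstract describes the AVP as a strictly staged relay: three neuron classes sequentially link four neuropils, from the medulla in the optic lobe to the EPG heading-direction neurons. The minimal Python sketch below restates that wiring as a small data structure; only the neuropil and neuron-class names come from the abstract, while the NeuronClass container, the trace helper and the comments are illustrative scaffolding, not code or data from the paper.

from dataclasses import dataclass

@dataclass
class NeuronClass:
    name: str    # neuron class linking two neuropils, e.g. "MeTu"
    source: str  # neuropil where the class receives input
    target: str  # neuropil where the class sends output

# Three neuron classes sequentially link the four neuropils of the
# AVP (medulla -> AOTUsu -> bulb -> EPG), per the abstract.
AVP = [
    NeuronClass("MeTu", source="medulla", target="AOTUsu"),
    NeuronClass("TuBu", source="AOTUsu", target="bulb"),
    NeuronClass("ER", source="bulb", target="EPG"),
]

def trace(pathway):
    """Print each relay stage of the pathway in order."""
    for nc in pathway:
        print(f"{nc.source} --[{nc.name}]--> {nc.target}")

trace(AVP)
# medulla --[MeTu]--> AOTUsu
# AOTUsu --[TuBu]--> bulb
# bulb --[ER]--> EPG

The finer subdivision reported in the paper (four MeTu types, further split into ten subtypes) would refine the first stage of this graph into parallel channels; the sketch captures only the class-level backbone.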

Date: 2024

Downloads: https://www.nature.com/articles/s41586-024-07967-z (abstract, text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:634:y:2024:i:8032:d:10.1038_s41586-024-07967-z

Ordering information: This journal article can be ordered from https://www.nature.com/

DOI: 10.1038/s41586-024-07967-z

Nature is currently edited by Magdalena Skipper

More articles in Nature from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:nature:v:634:y:2024:i:8032:d:10.1038_s41586-024-07967-z