Deep image reconstruction from human brain activity

Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani

PLOS Computational Biology, 2019, vol. 15, issue 1, 1-23

Abstract: The mental contents of perception and imagery are thought to be encoded in hierarchical representations in the brain, but previous attempts to visualize perceptual contents have failed to capitalize on multiple levels of the hierarchy, leaving it challenging to reconstruct internal imagery. Recent work showed that visual cortical activity measured by functional magnetic resonance imaging (fMRI) can be decoded (translated) into the hierarchical features of a pre-trained deep neural network (DNN) for the same input image, providing a way to make use of the information in hierarchical visual features. Here, we present a novel image reconstruction method in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that our method reliably produced reconstructions that resembled the viewed natural images. A natural image prior introduced by a deep generator network effectively rendered semantically meaningful details in the reconstructions. Human judgment of the reconstructions supported the effectiveness of combining multiple DNN layers to enhance the visual quality of generated images. Although our model was trained solely on natural images, it successfully generalized to artificial shapes, indicating that it was not simply matching to exemplars. The same analysis applied to mental imagery demonstrated rudimentary reconstructions of the subjective content. Our results suggest that our method can effectively combine hierarchical neural representations to reconstruct perceptual and subjective images, providing a new window into the internal contents of the brain.

Author summary: Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, prior work visualizing perceptual contents from brain activity has failed to combine visual information across multiple hierarchical levels. Here, we present a method for visual image reconstruction from the brain that can reveal both seen and imagined contents by capitalizing on multiple levels of visual cortical representations. We decoded brain activity into hierarchical visual features of a deep neural network (DNN), and optimized an image to make its DNN features similar to the decoded features. Our method produced images perceptually similar to viewed natural images and artificial images (colored shapes and letters), even though the decoder was trained only on an independent set of natural images. It also generalized to the reconstruction of mental imagery of remembered images. Our approach allows subjective contents represented in hierarchical neural representations to be studied by objectifying them into images.
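The core optimization described above can be sketched in a few lines. This is a minimal illustration in PyTorch, not the authors' implementation: a tiny randomly initialized conv net stands in for the pre-trained DNN, the features of a hidden "viewed" image stand in for the fMRI-decoded features, and the deep generator network prior used in the paper is omitted. Only the loop structure reflects the method: gradient descent on pixel values to match DNN features at multiple layers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the pre-trained hierarchical DNN (the paper used a VGG-type network).
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)

def multi_layer_features(img):
    """Collect activations at several depths (the hierarchical features)."""
    feats, x = [], img
    for layer in net:
        x = layer(x)
        if isinstance(layer, nn.ReLU):
            feats.append(x)
    return feats

# Stand-in for features decoded from fMRI: here, features of a hidden "viewed" image.
viewed = torch.rand(1, 3, 16, 16)
with torch.no_grad():
    decoded = [f.clone() for f in multi_layer_features(viewed)]

# Optimize pixel values so the reconstruction's features match the decoded ones
# at every layer simultaneously.
recon = torch.rand(1, 3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([recon], lr=0.05)

def feature_loss(img):
    return sum(((f - t) ** 2).mean()
               for f, t in zip(multi_layer_features(img), decoded))

initial_loss = feature_loss(recon).item()
for _ in range(200):
    opt.zero_grad()
    loss = feature_loss(recon)
    loss.backward()
    opt.step()
final_loss = feature_loss(recon).item()
```

In the actual method the target features are decoded from brain activity rather than computed from a stimulus, and the optimization is performed in the latent space of a deep generator network, which constrains the result to the manifold of natural-looking images.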

Date: 2019

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 06633&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1006633

DOI: 10.1371/journal.pcbi.1006633


