Fusion and Allocation Network for Light Field Image Super-Resolution

Wei Zhang, Wei Ke, Zewei Wu, Zeyu Zhang, Hao Sheng and Zhang Xiong
Additional contact information
Wei Zhang: The Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
Wei Ke: The Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
Zewei Wu: The Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
Zeyu Zhang: The Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
Hao Sheng: State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China
Zhang Xiong: State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Mathematics, 2023, vol. 11, issue 5, 1-21

Abstract: Light field (LF) images captured by plenoptic cameras record both spatial and angular information from real-world scenes, and fully integrating these two kinds of information is beneficial for image super-resolution (SR). However, most existing approaches to LF image SR cannot fully fuse information at the spatial and angular levels. Moreover, SR performance is hindered by the limited ability to incorporate distinctive information from different views and to extract informative features from each view. To address these core issues, we propose a fusion and allocation network (LF-FANet) for LF image SR. Specifically, we design an angular fusion operator (AFO) to fuse distinctive features among different views, and a spatial fusion operator (SFO) to extract deep representation features for each view. Building on these two operators, we further propose a fusion and allocation strategy to incorporate and propagate the fused features. In the fusion stage, the interaction information fusion block (IIFB) supplements distinctive and informative features among all views. In the allocation stage, the fused output features are allocated to the next AFO and SFO to further distill valid information. Experimental results on both synthetic and real-world datasets demonstrate that our method achieves performance on par with state-of-the-art methods. Moreover, our method preserves the parallax structure of the LF and generates faithful details of LF images.
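
As a rough illustration of the fusion-and-allocation idea described above, the PyTorch-style sketch below shows one way an AFO/SFO pair could feed an IIFB whose fused output is then handed to the next stage. Aside from the operator names (AFO, SFO, IIFB) taken from the abstract, every layer choice, tensor shape, and module design here is an assumption made for illustration; it is not the authors' implementation.

# Hypothetical sketch of one fusion-and-allocation stage (all design choices assumed).
import torch
import torch.nn as nn

class SFO(nn.Module):
    """Spatial fusion operator: per-view spatial feature extraction (assumed as residual 2-D convs)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):                       # x: (B*V, C, H, W), one tensor per view
        return x + self.body(x)

class AFO(nn.Module):
    """Angular fusion operator: exchanges information across the V views (assumed as a 1x1 conv over stacked views)."""
    def __init__(self, channels, num_views):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(num_views * channels, num_views * channels, 1),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):                       # x: (B, V, C, H, W)
        b, v, c, h, w = x.shape
        y = self.fuse(x.reshape(b, v * c, h, w)).reshape(b, v, c, h, w)
        return x + y

class IIFB(nn.Module):
    """Interaction information fusion block: merges angular and spatial features into one fused output (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, ang, spa):                # both: (B, V, C, H, W)
        b, v, c, h, w = ang.shape
        fused = self.merge(
            torch.cat([ang, spa], dim=2).reshape(b * v, 2 * c, h, w)
        ).reshape(b, v, c, h, w)
        return fused                            # allocated to the next AFO/SFO pair

class FusionAllocationStage(nn.Module):
    """One stage: AFO and SFO in parallel, then IIFB fuses and allocates their outputs."""
    def __init__(self, channels, num_views):
        super().__init__()
        self.afo = AFO(channels, num_views)
        self.sfo = SFO(channels)
        self.iifb = IIFB(channels)

    def forward(self, x):                       # x: (B, V, C, H, W)
        b, v, c, h, w = x.shape
        ang = self.afo(x)
        spa = self.sfo(x.reshape(b * v, c, h, w)).reshape(b, v, c, h, w)
        return self.iifb(ang, spa)

if __name__ == "__main__":
    lf = torch.randn(1, 25, 32, 16, 16)         # 5x5 views, 32 feature channels
    stage = FusionAllocationStage(channels=32, num_views=25)
    print(stage(lf).shape)                      # torch.Size([1, 25, 32, 16, 16])

In practice, several such stages would be chained and followed by an upsampling head to produce the super-resolved views; those details are beyond this sketch.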

Keywords: light field; super-resolution; interaction operator; distinctive information
JEL-codes: C
Date: 2023

Downloads: (external link)
https://www.mdpi.com/2227-7390/11/5/1088/pdf (application/pdf)
https://www.mdpi.com/2227-7390/11/5/1088/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:11:y:2023:i:5:p:1088-:d:1076463

Mathematics is currently edited by Ms. Emma He

Handle: RePEc:gam:jmathe:v:11:y:2023:i:5:p:1088-:d:1076463