
AIF: Infrared and Visible Image Fusion Based on Ascending–Descending Mechanism and Illumination Perception Subnetwork

Ying Liu, Xinyue Mi, Zhaofu Liu and Yu Yao
Additional contact information
Ying Liu: The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Xinyue Mi: The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Zhaofu Liu: The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
Yu Yao: The School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China

Mathematics, 2025, vol. 13, issue 10, 1-23

Abstract: The purpose of infrared and visible image fusion is to generate a composite image that contains both the thermal radiation profile information of the infrared image and the texture details of the visible image. Such a composite image can be used to detect targets under various lighting conditions and offers high scene spatial resolution. However, existing image fusion algorithms rarely consider the illumination factor in the modeling process. This study presents a novel image fusion approach (AIF) that adaptively fuses infrared and visible images under various lighting conditions. Specifically, features of the infrared image and the visible image are extracted separately by the AdC feature extractor, and they are adaptively fused under the guidance of the illumination perception subnetwork. The image fusion model is trained in an unsupervised manner with a customized loss function. The AdC feature extractor adopts an ascending–descending feature extraction mechanism to organize its convolutional layers and combines them with cross-modal interactive differential modules to effectively extract hierarchical complementary and differential information. The illumination perception subnetwork estimates the scene lighting condition from the visible image, which determines the contribution weights of the visible and infrared images in the composite image. The customized loss function consists of illumination loss, gradient loss, and intensity loss; it is more targeted and effectively improves fusion quality under different lighting conditions. Ablation experiments demonstrate the effectiveness of the loss function. We compare our method with nine other methods on public datasets, including four traditional methods and five deep-learning-based methods. Qualitative and quantitative experiments show that our method performs better on indicators such as SD, and that the fused image has more prominent contour information and richer detail.
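The abstract does not give implementation details, so the following is a minimal PyTorch sketch of the two ideas it names: an illumination perception subnetwork that infers contribution weights from the visible image, and a customized loss combining illumination, gradient, and intensity terms. All module and function names (IlluminationPerception, fusion_loss, gradient), the layer sizes, and the exact form of each loss term are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of illumination-guided fusion; architecture and loss
# forms are assumptions based only on the abstract's description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationPerception(nn.Module):
    """Predicts day/night probabilities from the visible image; these act as
    contribution weights for the visible and infrared branches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # logits for (day, night)

    def forward(self, vis):
        logits = self.head(self.features(vis).flatten(1))
        return torch.softmax(logits, dim=1)  # weights sum to 1

def gradient(img):
    """Finite-difference gradient magnitude (a stand-in for a Sobel operator)."""
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return F.pad(gx.abs(), (0, 1, 0, 0)) + F.pad(gy.abs(), (0, 0, 0, 1))

def fusion_loss(fused, vis_y, ir, weights, alpha=1.0, beta=1.0):
    """Composite of the three loss families named in the abstract; the exact
    terms here are assumed. Inputs are (B, 1, H, W); weights is (B, 2)."""
    w_day = weights[:, 0, None, None, None]
    w_night = weights[:, 1, None, None, None]
    # Illumination loss: favour the visible image by day, the infrared by night.
    loss_illum = (w_day * (fused - vis_y).abs()
                  + w_night * (fused - ir).abs()).mean()
    # Gradient loss: keep the sharper texture of either source image.
    loss_grad = (gradient(fused)
                 - torch.max(gradient(vis_y), gradient(ir))).abs().mean()
    # Intensity loss: preserve the brighter, often thermally salient, pixels.
    loss_int = (fused - torch.max(vis_y, ir)).abs().mean()
    return loss_illum + alpha * loss_grad + beta * loss_int
```

In this sketch the softmax weights couple the perception subnetwork to the loss, so daytime scenes bias the fusion toward visible texture and nighttime scenes toward infrared radiance, which matches the adaptive behavior the abstract describes.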

Keywords: image fusion; infrared image; feature extractor; illumination perception
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/10/1544/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/10/1544/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:10:p:1544-:d:1651447

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:13:y:2025:i:10:p:1544-:d:1651447