EconPapers
DEF-Net: A dual-modal feature enhancement and fusion network for infrared and visible object detection

Xiaoming Guo, Fengbao Yang and Linna Ji

PLOS ONE, 2026, vol. 21, issue 4, 1-24

Abstract: Infrared-visible object detection in complex dynamic environments often suffers from weak feature representation and underutilized cross-modal complementarity, leading to missed and false detections. To address these issues, we propose a Dual-modal feature Enhancement and Fusion Network (DEF-Net). To strengthen the model's focus on informative features within both the infrared and visible modalities, a feature interaction enhancement module is designed to highlight and reinforce salient information. Furthermore, to better exploit the complementary characteristics of the two modalities, a transformer-based fusion architecture incorporating a cross-attention mechanism is introduced, enabling deep inter-modal feature integration. Experiments on the SYUGV and LLVIP datasets show that DEF-Net outperforms existing methods in accuracy while maintaining real-time processing speed.
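The cross-attention fusion the abstract describes can be sketched in NumPy. This is an illustrative toy, not the authors' implementation: the single-head form, the token and channel sizes, and the omission of learned query/key/value projections and the detection head are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    """Queries come from one modality; keys/values from the other."""
    d_k = kv_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d_k)   # (N_q, N_kv) similarity
    return softmax(scores, axis=-1) @ kv_feat    # weighted sum of the other modality

# Toy flattened feature maps: 16 spatial tokens x 8 channels per modality
rng = np.random.default_rng(0)
ir = rng.normal(size=(16, 8))    # infrared tokens
vis = rng.normal(size=(16, 8))   # visible tokens

ir_enh = cross_attention(ir, vis)    # infrared attends to visible
vis_enh = cross_attention(vis, ir)   # visible attends to infrared
fused = np.concatenate([ir_enh, vis_enh], axis=-1)
print(fused.shape)  # (16, 16)
```

Attending in both directions and concatenating the results is one common way to realize "deep inter-modal feature integration"; the paper's actual module may combine the streams differently.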

Date: 2026

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0345815 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 45815&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0345815

DOI: 10.1371/journal.pone.0345815


More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone.

 
Page updated 2026-04-06
Handle: RePEc:plo:pone00:0345815