EconPapers

Integrating Traditional and Deep Cues for Depth from Focus Using Unfolding Networks

Muhammad Tariq Mahmood and Khurram Ashfaq
Additional contact information
Muhammad Tariq Mahmood: Future Convergence Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, 1600 Chungjeolro, Byeongcheonmyeon, Cheonan 31253, Republic of Korea
Khurram Ashfaq: Future Convergence Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, 1600 Chungjeolro, Byeongcheonmyeon, Cheonan 31253, Republic of Korea

Mathematics, 2025, vol. 13, issue 22, 1-19

Abstract: Depth from focus (DFF) is a passive optical method that recovers a dense depth map of a real-world scene by exploiting the focus cue in a focal stack, a sequence of images captured at different focal distances. DFF methods first compute a focus volume, which represents per-pixel focus quality across the focal stack and is obtained either with a conventional focus measure or with a deep encoder. Depth is then recovered in one of two ways: traditional approaches typically apply an argmax operation over the focus volume (i.e., selecting the image index with maximum focus), whereas deep learning-based methods often employ a soft-argmax for direct feature aggregation. However, a simple argmax over the focus volume often introduces artifacts and yields an inaccurate depth map. In this work, we propose a deep framework that integrates depth estimates from both traditional and deep learning approaches to produce an enhanced depth map. First, a deep depth module (DDM) extracts an initial depth map from deep multi-scale focus volumes. This estimate is then refined by a depth unfolding module (DUM), which iteratively learns residual corrections to update the predicted depth. The DUM also incorporates structural cues from traditional methods, leveraging their strong spatial priors to further improve depth quality. Extensive experiments were conducted on both synthetic and real-world datasets. The results show that the proposed framework outperforms state-of-the-art deep learning and traditional methods in terms of root mean square error (RMS) and mean absolute error (MAE). In addition, the visual quality of the reconstructed depth maps is noticeably better than that of competing approaches.
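The two depth-selection strategies the abstract contrasts can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the array shapes, the function names, and the `temperature` parameter are assumptions made for the example. The focus volume is taken to be an array of shape (N, H, W) holding per-pixel focus quality across N focal slices.

```python
import numpy as np

def argmax_depth(focus_volume, focal_dists):
    """Traditional hard selection: at each pixel, pick the focal
    distance of the slice with the maximum focus response."""
    idx = np.argmax(focus_volume, axis=0)        # (H, W) slice indices
    return focal_dists[idx]                      # (H, W) depth map

def soft_argmax_depth(focus_volume, focal_dists, temperature=1.0):
    """Differentiable aggregation used by deep methods: softmax
    weights over the stack axis, then the per-pixel expected
    focal distance."""
    logits = focus_volume / temperature
    logits = logits - logits.max(axis=0, keepdims=True)   # stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)         # (N, H, W)
    # Expectation of focal distance under the softmax weights.
    return np.tensordot(focal_dists, weights, axes=(0, 0))  # (H, W)
```

With a low temperature the soft-argmax approaches the hard argmax; with a higher temperature it blends neighboring focal slices, which is what makes it differentiable and usable for end-to-end training, at the cost of the quantization and noise artifacts that the hard argmax exhibits motivating the refinement stage described above.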

Keywords: Depth from Focus; deep focus volume; depth map; focus measure; unfolding networks
JEL-codes: C
Date: 2025

Downloads:
https://www.mdpi.com/2227-7390/13/22/3715/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/22/3715/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:22:p:3715-:d:1798274

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI

Page updated 2025-11-25
Handle: RePEc:gam:jmathe:v:13:y:2025:i:22:p:3715-:d:1798274