Variance-Driven U-Net Weighted Training and Chroma-Scale-Based Multi-Exposure Image Fusion
Chang-Woo Son,
Young-Ho Go,
Seung-Hwan Lee and
Sung-Hak Lee
Additional contact information
All authors: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
Mathematics, 2025, vol. 13, issue 22, 1-32
Abstract:
Multi-exposure image fusion (MEF) aims to generate a well-exposed image by combining multiple photographs captured at different exposure levels. However, deep-learning-based approaches are often highly dependent on the quality of the training data, which can lead to inconsistent color reproduction and loss of fine details. To address this issue, this study proposes a variance-driven hybrid MEF framework based on a U-Net architecture, which adaptively balances structural and chromatic information. In the proposed method, the variance of randomly cropped patches is used as a training weight, allowing the model to emphasize structurally informative regions and thereby preserve local details during fusion. Furthermore, a fusion strategy based on the geometric color distance in the LAB color space, referred to as the Chroma scale, is applied to preserve the original chroma characteristics of the input images and improve color fidelity. Visual gamma compensation is also employed to maintain perceptual luminance consistency and synthesize a natural final image with balanced tone and smooth contrast transitions. Experiments conducted on 86 exposure pairs demonstrate that the proposed model achieves superior fusion quality compared with conventional and deep-learning-based methods, obtaining high JNBM (17.91) and HyperIQA (70.37) scores. Overall, the proposed variance-driven U-Net effectively mitigates dataset dependency and color distortion, providing a reliable and computationally efficient solution for robust MEF applications.
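The two core quantities the abstract describes can be sketched briefly: the Chroma scale is the geometric color distance sqrt(a² + b²) of a pixel in LAB space, and the training weight is the variance of a randomly cropped patch. The sketch below illustrates both under stated assumptions; the function names and the min–max normalization of the weights are illustrative choices, not the authors' published implementation.

```python
import numpy as np

def chroma_scale(lab):
    """Per-pixel Chroma in LAB space: geometric distance sqrt(a^2 + b^2)
    of the chromatic channels (lab[..., 1] = a*, lab[..., 2] = b*)."""
    return np.sqrt(lab[..., 1] ** 2 + lab[..., 2] ** 2)

def patch_variance_weights(patches):
    """Variance of each cropped patch, min-max normalized to [0, 1]
    so structurally informative (high-variance) patches weigh more."""
    v = np.var(patches.reshape(patches.shape[0], -1), axis=1)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.ones_like(v)
```

In a training loop, `patch_variance_weights` would scale each patch's loss term, so flat, low-detail crops contribute less to the gradient than textured ones.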
Keywords: chroma scale; HDR; multi-exposed image fusion; U-Net
JEL-codes: C
Date: 2025
Downloads: (external link)
https://www.mdpi.com/2227-7390/13/22/3629/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/22/3629/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:22:p:3629-:d:1793127
Mathematics is currently edited by Ms. Emma He
Bibliographic data for series maintained by MDPI Indexing Manager.