EconPapers    

Night-to-Day Image Translation with Road Light Attention Training for Traffic Information Detection

Ye-Jin Lee, Young-Ho Go, Seung-Hwan Lee, Dong-Min Son and Sung-Hak Lee ()
Additional contact information
Ye-Jin Lee: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
Young-Ho Go: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
Seung-Hwan Lee: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
Dong-Min Son: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
Sung-Hak Lee: School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea

Mathematics, 2025, vol. 13, issue 18, 1-33

Abstract: Image deep learning methods based on generative adversarial networks (GANs) are useful for improving object visibility in nighttime driving environments, but they often fail to preserve critical road information such as traffic light colors and vehicle lighting. This paper proposes a method that addresses this problem by combining unpaired and four-channel paired training modules. The unpaired module performs the primary night-to-day conversion, while the paired module, extended with a fourth input channel, focuses on preserving road details. Our key contribution is an inverse road light attention (RLA) map, which serves as this fourth channel and explicitly guides the network’s learning. The map also drives a final cross-blending step that synthesizes the outputs of both modules to combine their respective advantages. Experimental results demonstrate that the approach preserves lane markings and traffic light colors more accurately, and quantitative analysis confirms superior performance across eight no-reference image quality metrics compared with existing techniques.
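
The page gives only the abstract, so the sketch below is a minimal illustration of the two ideas it describes: stacking an inverse road light attention (RLA) map as a fourth input channel, and cross-blending the outputs of the unpaired and paired modules with that map. The function names, the [0, 1] normalization of the map, and the per-pixel blending weights are assumptions for illustration, not the authors' published formulation.

import numpy as np

def make_four_channel_input(rgb_night, rla):
    # rgb_night: (H, W, 3) night image in [0, 1]; rla: (H, W) attention map in [0, 1].
    # The inverse map emphasizes lit road regions (traffic lights, lane markings)
    # and is stacked as a fourth channel for the paired training module.
    inverse_rla = 1.0 - np.clip(rla, 0.0, 1.0)
    return np.concatenate([rgb_night, inverse_rla[..., None]], axis=-1)  # (H, W, 4)

def cross_blend(day_unpaired, day_paired, rla):
    # Per-pixel blend of the two module outputs: where the attention map is high
    # (road lights), favor the paired result that preserves light colors; elsewhere
    # keep the unpaired module's global night-to-day conversion.
    w = np.clip(rla, 0.0, 1.0)[..., None]  # (H, W, 1) blending weight (assumed)
    return w * day_paired + (1.0 - w) * day_unpaired

In the paper's pipeline these steps would sit around the CycleGAN generators; the sketch only shows how a single attention map can serve both as an extra training channel and as a blending weight.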

Keywords: cycle-consistent generative adversarial network (CycleGAN); four-channel paired training; L-channel; road light attention mask
JEL-codes: C
Date: 2025
References: View complete reference list from CitEc

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/18/2998/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/18/2998/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:18:p:2998-:d:1750915

Access Statistics for this article

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Page updated 2025-10-04
Handle: RePEc:gam:jmathe:v:13:y:2025:i:18:p:2998-:d:1750915