EyesGAN: Synthesize Human Face from Human Eyes

Xiaodong Luo and Xiang Chen
Additional contact information
Xiaodong Luo: Sichuan Tourism University, School of Information and Engineering
Xiang Chen: Hunan University, College of Electrical and Information Engineering

Chapter 12 in Generative Machine Learning Models in Medical Image Computing, 2025, pp 231-251, from Springer

Abstract: Face recognition has achieved notable success across various domains, including mobile payment, authentication, criminal investigation, and urban management. Despite these advances, face occlusion remains a critical challenge in person identification, particularly in anti-terrorism efforts, criminal cases, and public security contexts. To address this issue, we introduce an enhanced deep generative adversarial network, EyesGAN, designed to synthesize human faces from eye images, offering a promising approach for masked face recognition. BicycleGAN is chosen as the baseline, and several effective improvements are made on top of it. First, a self-attention mechanism is introduced so that the model can learn the internal mapping between the eyes and the rest of the face more effectively. Second, a perceptual loss is applied to guide the cyclic training and the updating of the network parameters, so that the synthesized face is more similar to the ground-truth face. Third, EyesGAN is designed to exploit the complementary strengths of the perceptual loss and the self-attention mechanism within the GAN framework. To train and evaluate EyesGAN, we reconstructed a dataset for eyes-to-face synthesis from public face datasets. The faces synthesized by EyesGAN are rigorously compared with existing methods, both quantitatively and qualitatively. Extensive experiments demonstrate that our method outperforms state-of-the-art techniques across multiple metrics, including Average Euclidean Distance, Average Cosine Similarity, Synthesis Accuracy Percentage, and Fréchet Inception Distance. Notably, we achieve a Baidu face recognition rate of 96.1% on 615 test samples from the CelebA database. This study explores the feasibility of facial synthesis from eye images, with the attention maps indicating that our network can accurately predict other facial regions from the eyes alone. Furthermore, we extend our investigation to the recovery of noisy X-ray images. Our approach synthesizes high-quality images that are highly consistent with the corresponding ground-truth images, underscoring its potential for enhancing image quality in medical imaging applications.
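
The two modifications named in the abstract, a self-attention mechanism and a perceptual loss, are standard building blocks in image-to-image GANs. The sketch below shows minimal, generic PyTorch versions of both for orientation only; the class names, layer sizes, and the VGG-16 feature cut-off are illustrative assumptions, not the chapter's actual architecture or training objective.

# Illustrative sketch only (not the authors' code): generic SAGAN-style
# self-attention and a VGG-based perceptual loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class SelfAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        inner = max(channels // 8, 1)
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blending weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # (b, hw, c')
        k = self.key(x).view(b, -1, h * w)                       # (b, c', hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)                # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                     # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x   # residual: attention added to the input

class PerceptualLoss(nn.Module):
    """L1 distance between VGG-16 features of synthesized and real faces."""
    def __init__(self, cut=16):   # 'cut' chooses how deep into VGG to compare
        super().__init__()
        vgg = vgg16(weights="IMAGENET1K_V1").features[:cut].eval()  # torchvision >= 0.13
        for p in vgg.parameters():
            p.requires_grad_(False)   # the feature extractor stays frozen
        self.vgg = vgg

    def forward(self, fake, real):
        return F.l1_loss(self.vgg(fake), self.vgg(real))

In a BicycleGAN-style pipeline, such a perceptual term would typically be added to the adversarial and reconstruction losses with a weighting coefficient, and the self-attention layer inserted into the generator's intermediate feature maps; the chapter's exact configuration is not reproduced here.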

Date: 2025

There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.

Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-031-80965-1_12

Ordering information: This item can be ordered from
http://www.springer.com/9783031809651

DOI: 10.1007/978-3-031-80965-1_12

Handle: RePEc:spr:sprchp:978-3-031-80965-1_12