EconPapers    

esFont: A guided diffusion and multimodal distillation to enhance the efficiency and stability in font design

Weijia Zhu, Xinjin Li, Jing Pu, Jin He and Jing Tan

PLOS ONE, 2025, vol. 20, issue 10, 1-18

Abstract: Font design presents a unique opportunity to blend artistic creativity with artificial intelligence. However, traditional methods are time-consuming, especially for complex fonts or large character sets. Font transfer streamlines this process by learning font transitions to generate multiple styles from a target font. Yet existing Generative Adversarial Network (GAN) based approaches often suffer from training instability, and current diffusion-based font generation methods typically depend on single-modal inputs, either visual or textual, limiting their capacity to capture detailed structural and semantic font features. Additionally, current diffusion models incur high computational cost due to their deep, redundant architectures. To address these challenges, we propose esFont, a novel guided diffusion framework. It incorporates a Contrastive Language–Image Pre-training (CLIP) based text encoder and a Vision Transformer (ViT) based image encoder, enriching the font transfer process through multimodal guidance from text and images. Our model further integrates depth pruning and timestep optimization, significantly reducing parameter complexity while maintaining superior performance. Experimental results demonstrate that esFont improves both efficiency and quality: structural accuracy (SSIM improved to 0.91), pixel-level fidelity (RMSE reduced to 2.68), perceptual quality aligned with human vision (LPIPS reduced to 0.07), and stylistic realism (FID decreased to 13.87). It reduces the model size to 100M parameters, cuts training time to 1.3 hours, and lowers inference time to 21 minutes. In summary, esFont achieves significant advances in both scientific and engineering terms through its combination of multimodal encoding, structural depth pruning, and timestep optimization.
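Two of the abstract's efficiency ideas, multimodal conditioning and timestep optimization, can be illustrated with a minimal sketch. All function names, shapes, and the fusion rule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_conditions(text_emb: np.ndarray, img_emb: np.ndarray,
                    w_text: float = 0.5) -> np.ndarray:
    """Hypothetical fusion of a CLIP-style text embedding with a
    ViT-style image embedding: L2-normalize each, then take a
    weighted sum to form a single conditioning vector."""
    t = text_emb / np.linalg.norm(text_emb)
    v = img_emb / np.linalg.norm(img_emb)
    return w_text * t + (1.0 - w_text) * v

def subsample_timesteps(T: int = 1000, steps: int = 50) -> np.ndarray:
    """Sketch of timestep optimization: denoise on an evenly spaced
    subset of the full diffusion schedule (DDIM-style), cutting the
    number of denoising passes from T to `steps`."""
    return np.linspace(0, T - 1, steps).round().astype(int)

# Example: fuse two 512-dim embeddings and build a 50-step schedule.
cond = fuse_conditions(np.ones(512), np.ones(512))
schedule = subsample_timesteps(1000, 50)
print(cond.shape, len(schedule), schedule[0], schedule[-1])  # (512,) 50 0 999
```

The weighted-sum fusion stands in for whatever cross-modal guidance mechanism the paper actually uses; the point is only that both encoders contribute to a single conditioning signal, and that a shortened timestep schedule directly reduces inference cost.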

Date: 2025

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0333496 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 33496&type=printable (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0333496

DOI: 10.1371/journal.pone.0333496


More articles in PLOS ONE from Public Library of Science
Bibliographic data for series maintained by plosone ().

Page updated 2025-10-11
Handle: RePEc:plo:pone00:0333496