Bridging tradition and technology: digital oil painting creation using advanced image processing techniques and generative adversarial network

Yongqing Wang and Weina Yan
Additional contact information
Yongqing Wang: Leshan Normal University
Weina Yan: Leshan Normal University

Humanities and Social Sciences Communications, 2025, vol. 12, issue 1, 1-11

Abstract: The advancement of computational techniques has greatly influenced artistic practices, especially in replicating traditional oil painting digitally. While conventional image processing methods have achieved moderate success in emulating painterly effects, challenges remain in accurately reproducing the depth, brushwork dynamics, and chromatic complexity of classical oil media. To address these limitations, this study proposes a hybrid computational framework that integrates both image processing techniques and deep learning to transform photographs into visually authentic digital oil paintings. The pipeline begins with preprocessing and color anchoring using K-means clustering in the CIELAB (CIE L*a*b*) color space to simplify color regions while maintaining structural coherence. Edge detection using Sobel and Canny operators identifies contours and gradient orientations, guiding brushstroke placement. Stylization is applied through bilateral and Kuwahara filters, producing texture-smoothed yet edge-preserving visuals. A gradient-based directional stroke filter introduces Gaussian-weighted, orientation-aligned strokes to emulate the natural fluidity of oil paint. To further refine the output, a Style-Conditional Generative Adversarial Network (Style-GAN) is employed. Trained on a curated dataset of traditional oil paintings, this GAN module enhances texture realism, global consistency, and brushstroke fidelity through adversarial learning. To evaluate effectiveness, a dual-assessment approach was employed: expert artists qualitatively reviewed the outputs against traditional works, while structured Likert-scale surveys captured user perceptions of expressiveness, innovation, and fidelity. Results demonstrate that the GAN-integrated model significantly outperforms baseline methods, producing visually convincing and stylistically rich digital paintings. This study highlights how combining traditional techniques with generative models can bridge the aesthetic gap between digital simulation and classical art, contributing meaningfully to computational creativity.
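
The paper's code is not part of this record; the following is a minimal sketch, using OpenCV and NumPy, of how the classical stages named in the abstract (K-means color anchoring in CIELAB, Sobel/Canny edge and orientation detection, edge-preserving bilateral smoothing) might be wired together. Function names, the cluster count k, and all filter thresholds are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    def quantize_colors_lab(img_bgr, k=12):
        # Color anchoring: K-means in CIELAB groups perceptually similar tones
        # while simplifying the palette (k=12 is an assumed value).
        lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
        samples = lab.reshape(-1, 3).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                        cv2.KMEANS_PP_CENTERS)
        quantized = centers[labels.flatten()].reshape(lab.shape).astype(np.uint8)
        return cv2.cvtColor(quantized, cv2.COLOR_LAB2BGR)

    def gradient_field(img_bgr):
        # Sobel gradients give per-pixel magnitude and orientation that can
        # later steer brushstroke direction; Canny extracts hard contours.
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        magnitude = cv2.magnitude(gx, gy)
        orientation = np.arctan2(gy, gx)            # radians in [-pi, pi]
        edges = cv2.Canny(gray, 80, 160)            # thresholds are assumed
        return magnitude, orientation, edges

    def stylize(img_bgr, k=12):
        # Quantize colors, then smooth flat regions while preserving edges.
        quantized = quantize_colors_lab(img_bgr, k)
        return cv2.bilateralFilter(quantized, 9, 75, 75)

    if __name__ == "__main__":
        photo = cv2.imread("input.jpg")             # placeholder path
        magnitude, orientation, contours = gradient_field(photo)
        painting = stylize(photo)
        cv2.imwrite("stylized.jpg", painting)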
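In the same spirit, a rough illustration of a gradient-based directional stroke pass, under the assumption that strokes follow local image structure (perpendicular to the Sobel gradient) and are blended through a Gaussian-smoothed stroke mask; grid spacing, stroke length, thickness, and sigma are assumed values, and the subsequent Style-GAN refinement stage is not reproducible from this record and is therefore omitted.

    import cv2
    import numpy as np

    def directional_stroke_pass(base_bgr, orientation, spacing=6,
                                length=8, thickness=2, sigma=1.5):
        # At a coarse grid of anchor points, draw a short line aligned with
        # local image structure, colored from the stylized image, then blend
        # it in with a Gaussian-blurred coverage mask so each stroke fades
        # softly into its surroundings.
        h, w = orientation.shape
        overlay = base_bgr.copy()
        mask = np.zeros((h, w), np.uint8)
        for y in range(0, h, spacing):
            for x in range(0, w, spacing):
                theta = orientation[y, x] + np.pi / 2.0   # along the edge
                dx, dy = np.cos(theta) * length, np.sin(theta) * length
                p1 = (int(x - dx), int(y - dy))
                p2 = (int(x + dx), int(y + dy))
                color = tuple(int(c) for c in base_bgr[y, x])
                cv2.line(overlay, p1, p2, color, thickness, cv2.LINE_AA)
                cv2.line(mask, p1, p2, 255, thickness)
        alpha = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (0, 0), sigma)
        alpha = np.clip(alpha, 0.0, 1.0)[..., None]
        out = alpha * overlay.astype(np.float32) \
              + (1.0 - alpha) * base_bgr.astype(np.float32)
        return out.astype(np.uint8)

Applied to the output of the previous sketch, e.g. strokes = directional_stroke_pass(painting, orientation), this yields an orientation-aligned, softly blended stroke layer of the kind the abstract describes before GAN refinement.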

Date: 2025

Downloads: (external link)
http://link.springer.com/10.1057/s41599-025-06162-3 Abstract (text/html)
Access to full text is restricted to subscribers.


Persistent link: https://EconPapers.repec.org/RePEc:pal:palcom:v:12:y:2025:i:1:d:10.1057_s41599-025-06162-3

Ordering information: This journal article can be ordered from
https://www.nature.com/palcomms/about

DOI: 10.1057/s41599-025-06162-3

Handle: RePEc:pal:palcom:v:12:y:2025:i:1:d:10.1057_s41599-025-06162-3