Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
Weiqiang Xin,
Ziang Wu,
Qi Zhu,
Tingting Bi,
Bing Li and
Chunwei Tian
Additional contact information
Weiqiang Xin: School of Software, Northwestern Polytechnical University, Xi’an 710129, China
Ziang Wu: School of Software, Northwestern Polytechnical University, Xi’an 710129, China
Qi Zhu: Key Laboratory of Brain-Machine Intelligence Technology, College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Ministry of Education, Nanjing 211106, China
Tingting Bi: School of Computing and Information Systems, University of Melbourne, Parkville 3010, Australia
Bing Li: School of Software, Northwestern Polytechnical University, Xi’an 710129, China
Chunwei Tian: Shenzhen Research Institute of Northwestern Polytechnical University, Northwestern Polytechnical University, Shenzhen 518057, China
Mathematics, 2025, vol. 13, issue 15, 1-19
Abstract:
Image super-resolution (SR) is essential for enhancing image quality in critical applications such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to process and integrate multi-scale information, from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes both feature extraction and network architecture to improve performance and efficiency. For feature extraction, its core innovation is a feature extraction and enhancement module based on dynamic snake convolution, which adaptively adjusts the shape and position of the convolution kernel to better fit the geometric structures in an image. For the network structure, DSCNN employs an enhanced residual framework that uses parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction and gradient flow. In addition, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure; this multi-scale design captures both local details and global image structure, improving SR reconstruction. The proposed DSCNN outperforms existing methods in both objective metrics and visual quality (e.g., it achieves the best PSNR and SSIM results on the Set5 dataset at ×4 scale).
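To make the two architectural ideas in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch: a convolution layer that predicts per-pixel kernel offsets (a deformable-convolution stand-in for the dynamic snake convolution, which bends the sampling grid along image structures) and a SwishReLU-style activation. The class names, the offset-prediction head, and the exact SwishReLU formula are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of an offset-adjusting ("snake"-style) convolution block
# with a SwishReLU-like activation. Not the paper's code; built on standard
# torchvision deformable convolution as an approximation.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SwishReLU(nn.Module):
    """Assumed blend of Swish (x * sigmoid(x)) and ReLU; the paper's exact
    definition may differ."""
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha

    def forward(self, x):
        return self.alpha * x * torch.sigmoid(x) + (1 - self.alpha) * torch.relu(x)


class SnakeLikeConv(nn.Module):
    """Convolution whose per-pixel kernel offsets are predicted from the input,
    so the effective kernel shape/position adapts to local geometry."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Two offsets (dy, dx) per kernel tap, per output location.
        self.offset_head = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.act = SwishReLU()

    def forward(self, x):
        offsets = self.offset_head(x)          # (N, 2*K*K, H, W)
        return self.act(self.deform(x, offsets))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 48, 48)         # low-resolution feature map
    print(SnakeLikeConv(64)(feats).shape)      # torch.Size([1, 64, 48, 48])
```

In such a design, the offset head lets each spatial location sample along curved or elongated structures rather than a fixed square grid, which is the behavior the abstract attributes to dynamic snake convolution.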
Keywords: SISR; dynamic convolution; multi-scale structure
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/15/2457/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/15/2457/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:15:p:2457-:d:1713508
Mathematics is currently edited by Ms. Emma He
More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.