Hazediff: A training-free diffusion-based image dehazing method with pixel-level feature injection

Xiaoxia Lin, Zhengao Li, Dawei Huang, Wancheng Feng, XinJun An, Lin Sun, Niuzhen Yu, Yan Li and Chunwei Leng

PLOS ONE, 2025, vol. 20, issue 10, 1-23

Abstract: In the current environmental context, heavy emissions from industrial and transportation activities, combined with an unbalanced energy structure, have made haze a recurrent phenomenon. Haze degrades the contrast and resolution of captured images, significantly hindering subsequent mid- and high-level vision tasks, which has made image dehazing a pivotal research frontier in computer vision. Current dehazing approaches, however, have notable limitations. Deep learning methods demand extensive paired hazy-clean training data, which remains difficult to acquire, and synthetically generated data often differ markedly from real scenes, limiting model generalizability. Although diffusion-based approaches achieve superior image reconstruction, their data-driven implementations face the same limitations. To overcome these challenges, we propose HazeDiff, a training-free dehazing method based on the diffusion model, which offers a novel perspective on image dehazing research. Unlike existing approaches, it eliminates the need for hard-to-obtain paired training data, reducing computational costs while improving generalization and stability across datasets, and ultimately yielding more reliable and effective dehazing results. The proposed Pixel-Level Feature Injection (PFI) is implemented through the self-attention layer: it injects the pixel-level feature representation of the reference image into the initial noise of the dehazed image, effectively guiding the diffusion process toward the dehazing result.
As a complement, the Structure Retention Model (SRM), built on cross-attention, performs dynamic feature enhancement through adaptive attention re-weighting, preserving key structural features during restoration while reducing detail loss. We conducted comprehensive experiments on both real-world and synthetic datasets. The results demonstrate that HazeDiff surpasses state-of-the-art dehazing methods, achieving higher scores on both no-reference (e.g., NIQE) and full-reference (e.g., PSNR) evaluation metrics, and it shows stronger generalization and practicality. It restores high-quality images with natural visual features and clear structural content from low-quality hazy inputs.
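The attention-based mechanisms described in the abstract can be sketched in a toy form. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, array shapes, the choice of swapping keys/values for the reference image's features (for PFI), and the multiplicative re-weighting form (for SRM) are all assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_self_attention(query_feats, ref_feats, dim):
    """PFI-style injection (assumed form): queries come from the noisy
    dehazing latent, while keys/values are taken from the reference
    image's pixel-level features, so attention pulls content from the
    reference into the denoising trajectory."""
    q = query_feats            # (n, d) latent tokens
    k = ref_feats              # (m, d) reference tokens
    v = ref_feats
    attn = softmax(q @ k.T / np.sqrt(dim), axis=-1)  # (n, m)
    return attn @ v            # (n, d) injected features

def reweight_attention(attn, struct_weight):
    """SRM-style adaptive re-weighting (assumed form): scale attention
    toward structurally important reference positions, then renormalize
    so each row remains a valid distribution."""
    w = attn * struct_weight[None, :]
    return w / w.sum(axis=-1, keepdims=True)
```

With a uniform structure weight the re-weighting is a no-op after renormalization; a non-uniform weight (e.g. derived from edge strength) biases attention toward structural regions.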

Date: 2025

Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0329759 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 29759&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0329759

DOI: 10.1371/journal.pone.0329759



Handle: RePEc:plo:pone00:0329759