EconPapers    

Robust JND-Guided Video Watermarking via Adaptive Block Selection and Temporal Redundancy

Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia, Manuel Cedillo-Hernandez, Ismael Dominguez-Jimenez and David Conchouso-Gonzalez
Additional contact information
Antonio Cedillo-Hernandez: Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Av. Eugenio Garza Sada 2501 Sur, Col. Tecnologico, Monterrey 64700, Nuevo León, Mexico
Lydia Velazquez-Garcia: Instituto Politecnico Nacional, Centro de Investigaciones Economicas, Administrativas y Sociales, Lauro Aguirre 120, Agricultura, Ciudad de Mexico 11360, Mexico
Manuel Cedillo-Hernandez: Instituto Politecnico Nacional, Escuela Superior de Ingenieria Mecanica y Electrica, Unidad Culhuacan, Av. Santa Ana 1000, San Francisco Culhuacan, Coyoacan, Ciudad de Mexico 04440, Mexico
Ismael Dominguez-Jimenez: Universidad Autonoma del Estado de Hidalgo, Escuela Superior de Tlahuelilpan, Sergio Butron Casas 19, La Rancheria, Col. Centro, Tlahuelilpan 42780, Hidalgo, Mexico
David Conchouso-Gonzalez: Tecnologico de Monterrey, Escuela de Ingenieria y Ciencias, Av. Eugenio Garza Sada 2501 Sur, Col. Tecnologico, Monterrey 64700, Nuevo León, Mexico

Mathematics, 2025, vol. 13, issue 15, 1-24

Abstract: This paper introduces a robust, imperceptible video watermarking framework designed for blind extraction in dynamic video environments. The proposed method operates in the spatial domain and combines multiscale perceptual analysis, adaptive Just Noticeable Difference (JND)-based quantization, and temporal redundancy via multiframe embedding. Watermark bits are embedded selectively, using a Quantization Index Modulation (QIM) strategy, in blocks with high perceptual masking, and the corresponding DCT coefficients are estimated directly from the spatial domain to reduce computational complexity. To enhance resilience, each bit is redundantly inserted across multiple keyframes selected on the basis of scene transitions. Extensive simulations over 21 benchmark videos (CIF, 4CIF, and HD) show that the method achieves superior robustness and perceptual quality, with an average Bit Error Rate (BER) of 1.03%, PSNR of 50.1 dB, SSIM of 0.996, and VMAF of 97.3 under compression, noise, cropping, and temporal desynchronization. The system outperforms several recent state-of-the-art techniques in both quality and speed while requiring no access to the original video during extraction. These results confirm the method's viability for practical applications such as copyright protection and secure video streaming.
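The paper's full scheme adapts the quantization step per block via the JND model; as a rough illustration only, a minimal scalar QIM embed/extract pair (with an assumed fixed step `delta` standing in for the paper's JND-adaptive step, and `qim_embed`/`qim_extract` as hypothetical helper names) can be sketched as:

```python
import numpy as np

def qim_embed(coeff, bit, delta):
    """Embed one bit into a scalar coefficient via Quantization Index
    Modulation: snap the coefficient onto one of two interleaved
    lattices, offset by +delta/4 for bit 1 and -delta/4 for bit 0."""
    offset = delta / 4 if bit else -delta / 4
    return delta * np.round((coeff - offset) / delta) + offset

def qim_extract(coeff, delta):
    """Blind extraction: no original video needed -- decide the bit by
    which lattice has a quantizer point closer to the received value."""
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    return 1 if d1 < d0 else 0
```

A larger `delta` tolerates more distortion (noise, compression) before the bit flips, at the cost of a more visible change; the JND-adaptive step in the paper raises `delta` only where perceptual masking hides it.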

Keywords: video watermarking; perceptual model; temporal redundancy; QIM; JND; blind extraction; copyright protection
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/15/2493/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/15/2493/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:15:p:2493-:d:1716470


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Page updated 2025-08-09
Handle: RePEc:gam:jmathe:v:13:y:2025:i:15:p:2493-:d:1716470