An Efficient Method for Automatic Video Annotation and Retrieval in Visual Sensor Networks

Jiangfan Feng and Wenwen Zhou

International Journal of Distributed Sensor Networks, 2014, vol. 10, issue 3, 832512

Abstract: Automatic video annotation has become an important problem in visual sensor networks because of the semantic gap between low-level visual features and high-level semantics. Although the topic has been studied extensively, the semantic representation of visual information remains poorly understood. To address pattern classification in video annotation, this paper proposes a discriminative constraint that drives the sparse representation coefficients toward discriminability. We study a general discriminative dictionary learning method that is independent of the specific dictionary and classifier learning algorithms, and introduce a tightly coupled discriminative sparse coding model. Experimental results show that the proposed method achieves annotation performance that existing schemes cannot.
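The discriminative sparse coding the abstract describes builds on standard ℓ1-regularized sparse coding over a learned dictionary. As a minimal sketch of that underlying step only (plain ISTA, without the paper's discriminative term; the function name, parameters, and toy data below are illustrative, not taken from the paper):

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.

    Plain sparse coding; the paper's discriminative constraint would
    add a label-dependent term to this objective (not reproduced here).
    """
    # Step size from the Lipschitz constant of the smooth part:
    # the squared spectral norm of D.
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                   # gradient of 0.5*||x - D a||^2
        z = a - grad / L                           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Toy usage: code a signal against a random dictionary with unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]                  # sparse ground truth
a = ista_sparse_code(D, x, lam=0.05)
```

In a dictionary-learning pipeline, the coding step above alternates with updates to the atoms of `D`; a discriminative variant additionally shapes the codes `a` so that a classifier can separate them by label.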

Date: 2014

Downloads: (external link)
https://journals.sagepub.com/doi/10.1155/2014/832512 (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:sae:intdis:v:10:y:2014:i:3:p:832512

DOI: 10.1155/2014/832512


