Compressed Sensing and Dictionary Learning
Ke-Lin Du and
M. N. S. Swamy
Additional contact information
Ke-Lin Du: Concordia University, Department of Electrical and Computer Engineering
M. N. S. Swamy: Concordia University, Department of Electrical and Computer Engineering
Chapter 18 in Neural Networks and Statistical Learning, 2019, pp 525-547, from Springer
Abstract:
Sparse coding is a matrix factorization technique that models a target signal as a sparse linear combination of atoms (elementary signals) drawn from a fixed dictionary. It has become a popular paradigm in signal processing, statistics, and machine learning. This chapter introduces compressed sensing, sparse representation (sparse coding), tensor compressed sensing, and sparse PCA.
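For illustration only (this code is not from the chapter), the sparse coding model described in the abstract can be sketched in a few lines of Python, assuming NumPy and scikit-learn (DictionaryLearning, sparse_encode) are available:

import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))   # 200 training signals of dimension 64

# Learn an overcomplete dictionary D so that X is approximated by A @ D,
# where each row of A (the sparse code) has few nonzero entries.
dl = DictionaryLearning(n_components=96, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, max_iter=100,
                        random_state=0)
A = dl.fit_transform(X)              # sparse codes for the training signals
D = dl.components_                   # dictionary atoms (one per row)

# Encode a new target signal as a sparse linear combination of atoms.
x_new = rng.standard_normal((1, 64))
code = sparse_encode(x_new, D, algorithm="omp", n_nonzero_coefs=5)
print("nonzero coefficients:", np.count_nonzero(code))
print("reconstruction error:", np.linalg.norm(x_new - code @ D))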
Date: 2019
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-1-4471-7452-3_18
Ordering information: This item can be ordered from http://www.springer.com/9781447174523
DOI: 10.1007/978-1-4471-7452-3_18