A Lagrange Programming Neural Network Approach with an ℓ0-Norm Sparsity Measurement for Sparse Recovery and Its Circuit Realization
Hao Wang,
Ruibin Feng,
Chi-Sing Leung,
Hau Ping Chan and
Anthony G. Constantinides
Additional contact information
Hao Wang: College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China
Ruibin Feng: Department of Electrical Engineering, City University of Hong Kong, Hong Kong
Chi-Sing Leung: Department of Electrical Engineering, City University of Hong Kong, Hong Kong
Hau Ping Chan: Department of Electrical Engineering, City University of Hong Kong, Hong Kong
Anthony G. Constantinides: Department of Electrical and Electronic Engineering, Imperial College, London SW7 2BX, UK
Mathematics, 2022, vol. 10, issue 24, 1-22
Abstract:
Many analog neural network approaches for sparse recovery are based on using the ℓ1-norm as a surrogate for the ℓ0-norm. This paper proposes an analog neural network model, namely the Lagrange programming neural network with ℓp objective and quadratic constraint (LPNN-LPQC), with an ℓ0-norm sparsity measurement for solving the constrained basis pursuit denoise (CBPDN) problem. As the ℓ0-norm is non-differentiable, we first use a differentiable ℓp-norm-like function to approximate the ℓ0-norm. However, this ℓp-norm-like function does not have an explicit expression, so we use the locally competitive algorithm (LCA) concept to handle the nonexistence of an explicit expression. With the LCA approach, the dynamics are defined by the internal state vector. In the proposed model, the thresholding elements are not conventional elements in analog optimization, so this paper also proposes a circuit realization for them. On the theoretical side, we prove that the equilibrium points of the proposed method satisfy the Karush-Kuhn-Tucker (KKT) conditions of the approximated CBPDN problem, and that these equilibrium points are asymptotically stable. We perform large-scale simulations of various algorithms and analog models. Simulation results show that the proposed algorithm is better than or comparable to several state-of-the-art numerical algorithms, and that it is better than state-of-the-art analog neural network models.
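For context, a sketch of the optimization problem the abstract refers to, in notation assumed here rather than taken from the paper (\(\mathbf{A}\) the measurement matrix, \(\mathbf{b}\) the observation, \(\varepsilon\) the noise bound), is the ℓ0 version of the constrained basis pursuit denoise (CBPDN) problem:

\[
\min_{\mathbf{x}} \; \|\mathbf{x}\|_0 \quad \text{subject to} \quad \|\mathbf{b} - \mathbf{A}\mathbf{x}\|_2 \le \varepsilon .
\]

In the general LPNN framework, one forms a Lagrangian \(L(\mathbf{x}, \lambda)\) of the (differentiably approximated) problem and lets the network state evolve by

\[
\frac{d\mathbf{x}}{dt} = -\frac{\partial L}{\partial \mathbf{x}}, \qquad \frac{d\lambda}{dt} = \frac{\partial L}{\partial \lambda},
\]

so that equilibrium points correspond to KKT points of the constrained problem.

To make the LCA ingredient concrete, the following is a minimal numerical sketch of generic LCA dynamics with a soft-thresholding element (the ℓ1 case). It is an illustration under assumed notation, not the authors' implementation; in particular, the paper's model replaces the soft threshold with a thresholding element matched to its ℓp-norm-like measure.

```python
import numpy as np

def lca_sparse_recovery(A, b, lam=0.1, dt=0.01, n_iter=2000):
    """Generic LCA sketch (not the paper's LPNN-LPQC model): integrate the
    internal state u; the sparse output x only ever appears through the
    thresholding nonlinearity."""
    n = A.shape[1]
    u = np.zeros(n)                 # internal state vector
    G = A.T @ A - np.eye(n)         # lateral inhibition (Gram matrix minus identity)
    drive = A.T @ b                 # constant input drive
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_iter):
        x = soft(u)                       # output via elementwise soft thresholding
        u += dt * (drive - u - G @ x)     # du/dt = A^T b - u - (A^T A - I) x
    return soft(u)

# Usage: recover a 3-sparse vector from noisy compressed measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # near-unit-norm columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -0.8, 0.6]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lca_sparse_recovery(A, b)
print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])
```

The LCA feature highlighted here is that integration acts on the internal state vector, exactly as the abstract describes, while the sparse output is defined implicitly through the thresholding element.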
Keywords: analog neural networks; LPNN; optimization; real-time solution
JEL-codes: C
Date: 2022
Downloads:
https://www.mdpi.com/2227-7390/10/24/4801/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/24/4801/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:24:p:4801-:d:1006090
Mathematics is currently edited by Ms. Emma He
More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.