EconPapers    

Measure of Similarity between GMMs by Embedding of the Parameter Space That Preserves KL Divergence

Branislav Popović, Lenka Cepova, Robert Cep, Marko Janev and Lidija Krstanović
Additional contact information
Branislav Popović: Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia
Lenka Cepova: Department of Machining, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, Assembly and Engineering Metrology, 17. listopadu 2172/15, 708 00 Ostrava Poruba, Czech Republic
Robert Cep: Department of Machining, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, Assembly and Engineering Metrology, 17. listopadu 2172/15, 708 00 Ostrava Poruba, Czech Republic
Marko Janev: Institute of Mathematics, Serbian Academy of Sciences and Arts, Kneza Mihaila 36, 11000 Belgrade, Serbia
Lidija Krstanović: Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21000 Novi Sad, Serbia

Mathematics, 2021, vol. 9, issue 9, 1-21

Abstract: In this work, we present a novel measure of similarity between Gaussian mixture models (GMMs) based on a neighborhood preserving embedding (NPE) of the parameter space, which projects the components of the GMMs under the assumption that they lie close to a lower-dimensional manifold. In this way, we obtain a transformation from the original high-dimensional parameter space into a much lower-dimensional one. Computing the distance between two GMMs is thereby reduced to calculating (taking the corresponding weights into account) the distance between sets of lower-dimensional Euclidean vectors. This yields a much better trade-off between recognition accuracy and computational complexity than measures that evaluate distances between Gaussian components in the original parameter space. The proposed measure is considerably more efficient in machine learning tasks that operate on large data sets, where the required overall number of Gaussian components is always large. Experiments on both artificial and real-world data show that the proposed measure achieves a much better trade-off between recognition accuracy and computational complexity than all baseline measures of similarity between GMMs tested in this paper.
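The core idea of the abstract can be sketched in a few lines: component-wise KL divergences are computed in closed form in the original parameter space, while after embedding, the GMM distance reduces to weighted distances between low-dimensional Euclidean vectors. The sketch below is illustrative only; the function names and the min-matching aggregation rule are assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL divergence KL(N(mu0,S0) || N(mu1,S1)) between two
    multivariate Gaussians; this is the quantity the embedding is meant
    to preserve."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def gmm_distance_embedded(weights_a, emb_a, weights_b, emb_b):
    """Distance between two GMMs whose components have already been mapped
    to low-dimensional Euclidean vectors (rows of emb_a, emb_b).
    A simple weighted min-matching is used here as a stand-in aggregation;
    the paper's actual combination rule may differ."""
    # Pairwise Euclidean distances between embedded components.
    D = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=2)
    # Each component is matched to its nearest counterpart, weighted.
    d_ab = np.sum(weights_a * D.min(axis=1))
    d_ba = np.sum(weights_b * D.min(axis=0))
    return 0.5 * (d_ab + d_ba)
```

Note that `gmm_distance_embedded` costs only O(n·m·k) for n and m components embedded in k dimensions, whereas evaluating KL-based component distances in the original space requires matrix inversions and determinants per pair, which is the source of the complexity savings claimed in the abstract.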

Keywords: Gaussian mixture models; similarity measures; dimensionality reduction; KL-divergence (search for similar items in EconPapers)
JEL-codes: C (search for similar items in EconPapers)
Date: 2021
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (1)

Downloads: (external link)
https://www.mdpi.com/2227-7390/9/9/957/pdf (application/pdf)
https://www.mdpi.com/2227-7390/9/9/957/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:9:y:2021:i:9:p:957-:d:543030

Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

Page updated 2025-03-19
Handle: RePEc:gam:jmathe:v:9:y:2021:i:9:p:957-:d:543030