Assessing Methods for Evaluating the Number of Components in Non-Negative Matrix Factorization
José M. Maisog,
Andrew T. DeMarco,
Karthik Devarajan,
Stanley Young,
Paul Fogel and
George Luta
Additional contact information
José M. Maisog: Blue Health Intelligence, Chicago, IL 60601, USA
Andrew T. DeMarco: Department of Rehabilitation Medicine, Georgetown University Medical Center, Washington, DC 20057, USA
Karthik Devarajan: Department of Biostatistics and Bioinformatics, Fox Chase Cancer Center, Temple University Health System, Philadelphia, PA 19111, USA
Stanley Young: GCStat, 3401 Caldwell Drive, Raleigh, NC 27607, USA
Paul Fogel: Advestis, 69 Boulevard Haussmann, 75008 Paris, France
George Luta: Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University Medical Center, Washington, DC 20057, USA
Mathematics, 2021, vol. 9, issue 22, 1-13
Abstract:
Non-negative matrix factorization (NMF) is a relatively new method of matrix decomposition that factors an m × n data matrix X into an m × k matrix W and a k × n matrix H, so that X ≈ W × H. Importantly, all values in X, W, and H are constrained to be non-negative. NMF can be used for dimensionality reduction, since the k columns of W can be considered components into which X has been decomposed. The question arises: how does one choose k? In this paper, we first assess methods for estimating k in the context of NMF using synthetic data. Second, we examine the effect of normalization on the accuracy of this estimate in empirical data. In synthetic data with orthogonal underlying components, methods based on PCA and on Brunet's cophenetic correlation coefficient achieved the highest accuracy. When evaluated on a well-known real dataset, normalization had an unpredictable effect on the estimate. For any given normalization method, the methods for estimating k gave widely varying results. We conclude that, when estimating k, it is best not to apply normalization. If the underlying components are known to be orthogonal, then Velicer's MAP or Minka's Laplace-PCA method might be best. However, when the orthogonality of the underlying components is unknown, none of the methods seemed clearly preferable.
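To make the setup concrete, the factorization X ≈ W × H described above can be sketched with the classic Lee–Seung multiplicative updates, scanning candidate values of k and recording the reconstruction error. This is a minimal NumPy illustration of the general technique, not the authors' implementation, and reconstruction error alone is only a rough guide to k (the paper evaluates more principled estimators such as the cophenetic correlation coefficient and Laplace-PCA):

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Factor non-negative X (m x n) into W (m x k) and H (k x n)
    via Lee-Seung multiplicative updates, minimizing ||X - WH||_F."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-4   # strictly positive initialization
    H = rng.random((k, n)) + 1e-4
    eps = 1e-9                      # avoid division by zero
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity of W and H.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data with a known low rank (~4), built to be non-negative.
rng = np.random.default_rng(1)
X = rng.random((30, 4)) @ rng.random((4, 20))

# Scan candidate k and record the Frobenius reconstruction error.
errors = {}
for k in range(1, 7):
    W, H = nmf(X, k)
    errors[k] = np.linalg.norm(X - W @ H)
```

The error typically drops sharply until k reaches the true underlying rank and flattens afterward; the methods compared in the paper can be viewed as principled ways of locating that "elbow" without eyeballing the curve.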
Keywords: non-negative matrix factorization; normalization; PCA; factorization rank; number of factored components; high-dimensional data; unsupervised learning
JEL-codes: C
Date: 2021
Citations: 1
Downloads:
https://www.mdpi.com/2227-7390/9/22/2840/pdf (application/pdf)
https://www.mdpi.com/2227-7390/9/22/2840/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:9:y:2021:i:22:p:2840-:d:675494