Local application of random subspace with simple Bayesian classifier
Sotiris B. Kotsiantis
International Journal of Data Mining, Modelling and Management, 2009, vol. 1, issue 4, 375-392
Abstract:
The naive Bayes algorithm rests on the assumption that every attribute is independent of the rest of the attributes, given the state of the class attribute. In this study, we attempt to increase the prediction accuracy of the simple Bayes model. Since combining classifiers has been proposed as a direction for improving the performance of individual classifiers, we propose a technique of localised multiple simple Bayes models. The ensemble consists of multiple simple Bayes models constructed locally by pseudorandomly selecting subsets of the components of the feature vector; that is, simple Bayes models built in randomly chosen subspaces. Finally, we performed a large-scale comparison with other attempts to improve the accuracy of the naive Bayes algorithm, as well as with other state-of-the-art algorithms and ensembles, on 26 standard benchmark datasets, and the proposed method gave better accuracy in most cases.
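The localised random-subspace idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the neighbourhood size `k`, the subspace fraction, the Euclidean neighbourhood, the Gaussian likelihood for continuous attributes, and majority voting are all illustrative assumptions.

```python
import numpy as np

def local_random_subspace_nb(X_train, y_train, x_test, k=25, n_models=10,
                             subspace_frac=0.5, rng=None):
    """Classify one instance with an ensemble of simple (Gaussian naive)
    Bayes models, each trained on the k nearest training instances of
    x_test using a randomly chosen subset of the features (a random
    subspace), then combined by majority vote."""
    rng = np.random.default_rng(rng)
    # Locality: keep only the k nearest training instances (Euclidean).
    dist = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dist)[:k]
    X_loc, y_loc = X_train[idx], y_train[idx]
    classes = np.unique(y_train)
    n_feat = X_train.shape[1]
    sub_size = max(1, int(subspace_frac * n_feat))
    votes = np.zeros(len(classes))
    for _ in range(n_models):
        # Random subspace: a pseudorandom subset of feature indices.
        feats = rng.choice(n_feat, size=sub_size, replace=False)
        log_post = []
        for c in classes:
            Xc = X_loc[y_loc == c][:, feats]
            if len(Xc) == 0:
                log_post.append(-np.inf)  # class absent in neighbourhood
                continue
            # Gaussian naive Bayes: independent per-feature likelihoods.
            mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
            ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                               + (x_test[feats] - mu) ** 2 / var)
            log_post.append(np.log(len(Xc) / k) + ll)
        votes[int(np.argmax(log_post))] += 1
    return classes[int(np.argmax(votes))]
```

For example, on two well-separated clusters the ensemble's vote recovers the cluster label of a query point near either centre; because each member sees only a feature subset, the ensemble tolerates individual noisy attributes better than a single model would.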
Keywords: naive Bayes classifiers; instance-based learner; classifier ensemble; random subspace.
Date: 2009
Downloads:
http://www.inderscience.com/link.php?id=29032 (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:ids:ijdmmm:v:1:y:2009:i:4:p:375-392
More articles in International Journal of Data Mining, Modelling and Management from Inderscience Enterprises Ltd
Bibliographic data for series maintained by Sarah Parker ().