Algorithmic Approach to the Identification of Classification Rules or Separation Surface for Spatial Data
Yee Leung (The Chinese University of Hong Kong)
Chapter 4 in Knowledge Discovery in Spatial Data, 2010, pp. 143-221, Springer
Abstract:
As discussed in Chap. 3, naïve Bayes, LDA, logistic regression, and support vector machines are statistical or statistics-related models developed for the classification of data. Breaking away from this statistical tradition are a number of classifiers that are algorithmic in nature. Instead of assuming a data model, which is essential to the conventional statistical methods, these algorithmic classifiers attempt to work directly on the data without making any distributional assumptions about them. This approach has been regarded by many, particularly in the pattern recognition and artificial intelligence communities, as a more flexible way to discover how data should be classified. Decision trees (classification trees in the context of classification), neural networks, genetic algorithms, fuzzy sets, and rough sets are typical paradigms. Instead of searching for a separation surface, as the statistical classifiers do, some of these methods attempt to discover classification rules that appropriately partition the feature space with reference to pre-specified classes. A decision tree is a segmentation of a training data set (Quinlan 1986; Friedman 1977). It is built by considering all objects as a single group, with the top node serving as the root of the tree. Training examples are then passed down the tree, with each intermediate node split with respect to a variable, until a certain stopping criterion is met. Each leaf (terminal) node of the tree contains a decision label, e.g., a class label. The decision tree thus partitions the feature space into sub-spaces corresponding to the leaves. Specifically, a decision tree that handles classification is known as a classification tree, and a decision tree that solves regression problems is called a regression tree (Breiman et al. 1984).
A decision tree that deals with both classification and regression problems is referred to as a classification and regression tree (Breiman et al. 1984). Decision tree algorithms differ mainly in their splitting and pruning strategies. They usually aim at an optimal partitioning of the feature space by minimizing the generalization error. The advantages of the decision tree approach are that it requires no assumptions about the underlying distribution of the data and that it can handle both discrete and continuous variables. Furthermore, decision trees are easy to construct and interpret if they are of reasonable size and complexity. Their disadvantages are that splitting and pruning rules can be rather subjective, and that the theory is not as rigorous as that of the statistical tradition. They also suffer from combinatorial explosion if the number of variables and their value labels are not appropriately controlled. Typical decision tree methods are ID3 (Quinlan 1986), C4.5 (Quinlan 1993), CART (Breiman et al. 1984), CHAID (Kass 1980), QUEST and its newer versions, and FACT (Loh and Vanichsetakul 1988).
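The tree-growing procedure sketched in the abstract can be illustrated with a minimal toy implementation. The following sketch is in the spirit of CART (Breiman et al. 1984): at each node it searches for the feature/threshold split that most reduces Gini impurity, recurses until a node is pure, and labels each leaf with the majority class. The dataset, stopping rule, and function names are illustrative assumptions, not taken from the chapter.

```python
# Minimal CART-style classification-tree sketch (illustrative, not the
# chapter's implementation): greedy Gini-impurity splits, pure-node stopping.
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Exhaustively search for the (feature, threshold) pair that
    minimizes the weighted impurity of the two child nodes."""
    best = None  # (weighted impurity, feature index, threshold)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or w < best[0]:
                best = (w, f, t)
    return best

def build_tree(X, y, min_size=1):
    """Recursive partitioning; a leaf stores the majority class label."""
    if gini(y) == 0.0 or len(y) <= min_size:
        return Counter(y).most_common(1)[0][0]
    split = best_split(X, y)
    if split is None:
        return Counter(y).most_common(1)[0][0]
    _, f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return {"feature": f, "threshold": t,
            "left": build_tree([X[i] for i in li], [y[i] for i in li], min_size),
            "right": build_tree([X[i] for i in ri], [y[i] for i in ri], min_size)}

def predict(tree, row):
    """Pass an example down the tree until a leaf label is reached."""
    while isinstance(tree, dict):
        tree = tree["left"] if row[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree

# Toy two-class data with two features (e.g., spatial coordinates).
X = [[1.0, 1.0], [1.5, 2.0], [2.0, 1.5], [6.0, 5.0], [7.0, 6.5], [6.5, 7.0]]
y = ["A", "A", "A", "B", "B", "B"]
tree = build_tree(X, y)
print(predict(tree, [1.2, 1.8]))  # -> A
print(predict(tree, [6.8, 6.0]))  # -> B
```

Each internal dict node corresponds to a split of the feature space, and the leaves correspond to the rectangular sub-spaces the abstract describes; pruning, which real CART implementations apply afterwards, is omitted for brevity.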
Date: 2010
Persistent link: https://EconPapers.repec.org/RePEc:spr:adspcp:978-3-642-02664-5_4
Ordering information: This item can be ordered from
http://www.springer.com/9783642026645
DOI: 10.1007/978-3-642-02664-5_4
More chapters in Advances in Spatial Science from Springer