Multi-pruning of decision trees for knowledge representation and classification
KAUST Department: Applied Mathematics and Computational Science Program
Computational Bioscience Research Center (CBRC)
Computer Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Office of the VP
Online Publication Date: 2016-06-09
Print Publication Date: 2015-11
Permanent link to this record: http://hdl.handle.net/10754/621280
Abstract: We consider two important questions related to decision trees: first, how to construct a decision tree with a reasonable number of nodes and a reasonable number of misclassifications, and second, how to improve the prediction accuracy of decision trees when they are used as classifiers. We have created a dynamic programming based approach for bi-criteria optimization of decision trees relative to the number of nodes and the number of misclassifications. This approach allows us to construct the set of all Pareto optimal points and to derive, for each such point, decision trees with parameters corresponding to that point. Experiments on datasets from the UCI ML Repository show that, very often, we can find a suitable Pareto optimal point and derive a decision tree with a small number of nodes at the expense of a small increment in the number of misclassifications. Based on the created approach, we have proposed a multi-pruning procedure that constructs decision trees which, as classifiers, often outperform decision trees constructed by CART. © 2015 IEEE.
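The core primitive of the bi-criteria dynamic programming described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): each subproblem is summarized by its set of Pareto-optimal (number of nodes, number of misclassifications) pairs, and an internal split combines the Pareto sets of its subtrees. The function names and the two-child combination step below are illustrative assumptions.

```python
def pareto_front(points):
    """Keep only the non-dominated (nodes, misclassifications) pairs.

    A pair q dominates p if q is no worse than p in both coordinates
    and differs from it in at least one.
    """
    front = []
    for p in sorted(set(points)):  # sort by node count, then error count
        # After sorting, p can only be dominated by an already-kept point
        # with fewer-or-equal nodes and fewer-or-equal misclassifications.
        if all(not (q[0] <= p[0] and q[1] <= p[1]) for q in front):
            front.append(p)
    return front


def combine_split(left_front, right_front):
    """Pareto set for an internal node: one extra node for the split,
    plus every pairing of Pareto-optimal left and right subtrees."""
    candidates = [(1 + ln + rn, le + re)
                  for (ln, le) in left_front
                  for (rn, re) in right_front]
    return pareto_front(candidates)
```

For example, `pareto_front([(3, 5), (4, 2), (5, 2), (3, 7), (10, 0)])` drops the dominated pairs `(5, 2)` and `(3, 7)`, keeping `[(3, 5), (4, 2), (10, 0)]`; picking a point from such a front is how one trades a small increase in misclassifications for a much smaller tree.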
Citation: Azad M, Chikalov I, Hussain S, Moshkov M (2015) Multi-pruning of decision trees for knowledge representation and classification. 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). Available: http://dx.doi.org/10.1109/ACPR.2015.7486574.
Conference/Event name: 3rd IAPR Asian Conference on Pattern Recognition, ACPR 2015