Introduction

Handle URI:
http://hdl.handle.net/10754/561675
Title:
Introduction
Authors:
Chikalov, Igor
Abstract:
Decision trees appeared in the 1950s and 1960s in theoretical computer science [14, 64, 80] and its applications [24, 37]. Similar objects are also considered in the natural and social sciences, for example, taxonomy keys [30] or questionnaires [63]. Decision trees naturally represent identification and testing algorithms that specify the next test to perform based on the results of the previous tests. A number of particular formulations were generalized by Garey [27] as the identification problem, i.e., the problem of distinguishing objects described by a common set of attributes. A more general formulation is provided by the decision table framework [34, 65], where objects can have an incomplete set of attributes and non-unique class labels. In that case, obtaining a class label is enough to solve the problem: identifying a particular object is not required. In this context, decision trees have found many applications in test theory [39, 45, 46, 81], fault diagnosis [14, 60, 72], rough set theory [61, 62], discrete optimization, non-procedural programming languages [34], analysis of algorithm complexity [38], computer vision [74], and computational geometry [69]. © Springer-Verlag Berlin Heidelberg 2011.
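The abstract's framing can be made concrete with a small sketch. The Python fragment below is illustrative only: the attribute names, the toy table contents, and the naive "take the first unused attribute" test-selection rule are assumptions for this sketch, not material from the book. It builds a decision tree over a tiny decision table and applies its tests one by one until a class label is reached, showing that a label can be returned without identifying a particular row.

```python
# Minimal sketch (not from the chapter): a decision table as a list of rows,
# where each row maps attribute names to values and carries a class label.
from collections import Counter

# Decision table: rows share a common set of attributes; labels need not be unique.
TABLE = [
    {"color": 0, "size": 1, "label": "A"},
    {"color": 0, "size": 0, "label": "B"},
    {"color": 1, "size": 1, "label": "A"},
    {"color": 1, "size": 0, "label": "A"},
]
ATTRIBUTES = ["color", "size"]


def build_tree(rows, attrs):
    """Recursively build a decision tree that computes a class label.

    A leaf is returned as soon as all remaining rows agree on the label,
    so identifying one particular row is not required."""
    labels = {r["label"] for r in rows}
    if len(labels) == 1 or not attrs:
        # With a single label left this is exact; otherwise take the majority label.
        return Counter(r["label"] for r in rows).most_common(1)[0][0]
    test, rest = attrs[0], attrs[1:]  # naive test choice: first unused attribute
    branches = {}
    for value in {r[test] for r in rows}:
        subset = [r for r in rows if r[test] == value]
        branches[value] = build_tree(subset, rest)
    return (test, branches)


def classify(tree, obj):
    """Apply tests sequentially; each answer selects the next subtree."""
    while isinstance(tree, tuple):
        test, branches = tree
        tree = branches[obj[test]]
    return tree


tree = build_tree(TABLE, ATTRIBUTES)
print(classify(tree, {"color": 1, "size": 0}))  # -> "A"
```

Here only one test ("color") is needed to label objects with color = 1, even though two rows remain indistinguishable; this is the sense in which acquiring the class label can be cheaper than full identification.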
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Publisher:
Springer Science + Business Media
Journal:
Average Time Complexity of Decision Trees
Issue Date:
2011
DOI:
10.1007/978-3-642-22661-8_1
Type:
Article
ISSN:
1868-4394
ISBN:
9783642226601
Appears in Collections:
Articles; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
