KAUST Department: Machine Intelligence & kNowledge Engineering Lab
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Computer Science Program
Abstract: For many textual collections, the number of features is overly large, and these features can be highly redundant. It is therefore desirable to have a small, succinct, yet highly informative collection of features that describes the key characteristics of a dataset. Information theory is one tool for obtaining such a feature collection. In this paper, our main contribution is to improve the efficiency of selecting the most informative feature set over high-dimensional unlabeled data. We propose a heuristic theory for informative feature set selection from high-dimensional data. Moreover, we design data structures that let us compute the entropies of candidate feature sets efficiently, and we develop a simple pruning strategy that eliminates hopeless candidates at each forward selection step. Experiments on real-world datasets show that our proposal is very efficient. © 2012 IEEE.
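The abstract describes entropy-based forward selection with pruning but does not give the algorithm's details. The following is a minimal sketch of the general idea it builds on, greedy forward selection that at each step adds the feature maximizing the joint entropy of the selected set; the function names and the brute-force entropy computation are illustrative assumptions, not the paper's actual data structures or pruning strategy.

```python
from collections import Counter
import math

def joint_entropy(data, features):
    """Shannon entropy (in bits) of the joint distribution over the given
    feature columns, estimated empirically from the rows of `data`."""
    counts = Counter(tuple(row[f] for f in features) for row in data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def greedy_forward_selection(data, k):
    """Greedily pick k feature indices; each step adds the feature that
    yields the largest joint entropy with the already-selected set."""
    all_features = range(len(data[0]))
    selected = []
    for _ in range(k):
        remaining = [f for f in all_features if f not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda f: joint_entropy(data, selected + [f]))
        selected.append(best)
    return selected

# Toy example: feature 1 duplicates feature 0, so a redundancy-aware
# selection of 2 features should skip it in favor of feature 2.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
print(greedy_forward_selection(data, 2))  # → [0, 2]
```

The paper's contribution targets the bottleneck visible here: naively, each candidate evaluation re-scans the whole dataset, which is what specialized data structures and pruning of hopeless candidates aim to avoid.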
Conference/Event name: 2012 IEEE 24th International Conference on Tools with Artificial Intelligence (ICTAI 2012)