Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification
KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Computational Bioscience Research Center (CBRC)
Computer Science Program
Structural and Functional Bioinformatics Group
Permanent link to this record: http://hdl.handle.net/10754/563113
Abstract: Automated classification of tissue types in Regions of Interest (ROIs) in medical images is an important application of Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have shown their power in this field. This paper investigates two important issues of the bag-of-feature strategy for tissue classification: visual vocabulary learning and visual word weighting, which traditional methods handle independently, neglecting the inner relationship between the visual words and their weights. To overcome this problem, we develop a novel algorithm, Joint-ViVo, which learns the vocabulary and the visual word weights jointly. A unified, large-margin objective function is defined over both the visual vocabulary and the visual word weights, and is optimized alternately in an iterative algorithm. We test our algorithm on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). The results show that Joint-ViVo outperforms state-of-the-art methods on tissue classification problems. © 2013 Elsevier Ltd.
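The abstract describes alternating optimization of a visual vocabulary and per-word weights under a large-margin objective. As a rough illustration only (not the paper's actual algorithm: the hinge-loss classifier, the reweighting heuristic, and the vocabulary recentering step below are all assumptions), a minimal bag-of-feature sketch might alternate between training a linear large-margin classifier on weighted word histograms, adjusting word weights from the classifier, and updating the vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_histogram(features, vocab, weights):
    # Assign each local descriptor to its nearest visual word,
    # then form a normalized, per-word-weighted histogram.
    d = ((features[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(1)
    h = np.bincount(idx, minlength=len(vocab)).astype(float)
    h /= max(h.sum(), 1.0)
    return h * weights

def joint_vivo_sketch(rois, labels, k=8, iters=5, lr=0.1):
    # rois: list of (n_i, d) arrays of local descriptors per ROI
    # labels: array of +/-1 class labels, one per ROI
    all_f = np.vstack(rois)
    vocab = all_f[rng.choice(len(all_f), k, replace=False)]
    weights = np.ones(k)
    w, b = np.zeros(k), 0.0
    for _ in range(iters):
        X = np.stack([build_histogram(f, vocab, weights) for f in rois])
        # Large-margin step: subgradient descent on the hinge loss.
        for _ in range(50):
            viol = labels * (X @ w + b) < 1
            w += lr * ((labels[viol, None] * X[viol]).sum(0) / len(X)
                       - 1e-3 * w)
            b += lr * labels[viol].sum() / len(X)
        # Heuristic stand-in: emphasize words the classifier relies on.
        weights = 1.0 + np.abs(w) / (np.abs(w).max() + 1e-12)
        # Vocabulary step: recenter each word on its assigned descriptors.
        d = ((all_f[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        for j in range(k):
            if (idx == j).any():
                vocab[j] = all_f[idx == j].mean(0)
    return vocab, weights, w, b
```

The point of the joint formulation is visible in the loop structure: the word weights feed into the histograms the classifier is trained on, and the classifier in turn drives the weight and vocabulary updates, rather than fixing the vocabulary first and weighting it afterwards.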
Sponsors: The study was supported by grants from the National Key Laboratory for Novel Software Technology, China (Grant no. KFKT2012B17), the 2011 Qatar Annual Research Forum Award (Grant no. ARF2011), and King Abdullah University of Science and Technology (KAUST), Saudi Arabia.