Show simple item record

dc.contributor.author	Huck Yang, C. H.
dc.contributor.author	Liu, Fangyu
dc.contributor.author	Huang, Jia-Hong
dc.contributor.author	Tian, Meng
dc.contributor.author	I-Hung Lin, M. D.
dc.contributor.author	Liu, Yi Chieh
dc.contributor.author	Morikawa, Hiromasa
dc.contributor.author	Yang, Hao Hsiang
dc.contributor.author	Tegner, Jesper
dc.date.accessioned	2019-07-25T13:39:54Z
dc.date.available	2019-07-25T13:39:54Z
dc.date.issued	2019-06-19
dc.identifier.citation	Huck Yang, C.-H., Liu, F., Huang, J.-H., Tian, M., I-Hung Lin, M. D., Liu, Y. C., … Tegnér, J. (2019). Auto-classification of Retinal Diseases in the Limit of Sparse Data Using a Two-Streams Machine Learning Model. Lecture Notes in Computer Science, 323–338. doi:10.1007/978-3-030-21074-8_28
dc.identifier.doi	10.1007/978-3-030-21074-8_28
dc.identifier.uri	http://hdl.handle.net/10754/656187
dc.description.abstract	Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists. Since fundus structure and vascular disorders are the main characteristics of retinal diseases, we propose a novel visual-assisted diagnosis hybrid model that combines a support vector machine (SVM) with deep neural networks (DNNs). Furthermore, we present EyeNet, a new collection of clinical retina labels for ophthalmology, curated by a professional ophthalmologist from the educational project Retina Image Bank and covering 52 retinal disease classes. Using EyeNet, our model achieves 90.40% diagnosis accuracy, and its performance is comparable to that of professional ophthalmologists (https://github.com/huckiyang/EyeNet2).
dc.publisher	Springer Nature
dc.relation.url	http://link.springer.com/10.1007/978-3-030-21074-8_28
dc.rights	The final publication is available at Springer via https://doi.org/10.1007/978-3-030-21074-8_28
dc.rights.uri	http://creativecommons.org/licenses/by-sa/4.0/
dc.title	Auto-classification of Retinal Diseases in the Limit of Sparse Data Using a Two-Streams Machine Learning Model
dc.type	Conference Paper
dc.contributor.department	Biological and Environmental Sciences and Engineering (BESE) Division
dc.contributor.department	Bioscience
dc.contributor.department	Bioscience Program
dc.contributor.department	Earth Science and Engineering
dc.contributor.department	Earth Science and Engineering Program
dc.conference.date	2018-12-02 to 2018-12-06
dc.conference.name	14th Asian Conference on Computer Vision, ACCV 2018
dc.conference.location	Perth, WA, AUS
dc.eprint.version	Pre-print
dc.contributor.institution	Georgia Institute of Technology, Atlanta, GA, USA
dc.contributor.institution	University of Waterloo, Waterloo, Canada
dc.contributor.institution	National Taiwan University, Taipei, Taiwan
dc.contributor.institution	Department of Ophthalmology, Bern University Hospital, Bern, Switzerland
dc.contributor.institution	Department of Ophthalmology, Tri-Service General Hospital, Taipei, Taiwan
dc.contributor.institution	Unit of Computational Medicine, Center for Molecular Medicine, Department of Medicine, Karolinska Institutet, Solna, Sweden
dc.identifier.arxivid	1808.05754
kaust.person	Huck Yang, C. H.
kaust.person	Huang, Jia-Hong
kaust.person	Morikawa, Hiromasa
kaust.person	Tegner, Jesper
dc.relation.issupplementedby	github:huckiyang/EyeNet2
refterms.dateFOA	2019-12-01T13:47:24Z
display.relations	Is Supplemented By: [Software] Title: huckiyang/EyeNet2: ACCV 18 - Auto-Classification of Retinal Diseases in the Limit of Sparse Data Using a Two-Streams Machine Learning Model. Publication Date: 2018-07-02. GitHub: https://github.com/huckiyang/EyeNet2. Handle: http://hdl.handle.net/10754/668078
dc.date.published-online	2019-06-19
dc.date.published-print	2019
dc.date.posted	2018-11-01
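
Note on the two-stream idea described in the abstract: one stream supplies deep-network features while an SVM makes the final classification decision. Purely as a hedged illustration of that idea (this is not the authors' released EyeNet2 code; the ResNet-18 backbone, the placeholder hand-crafted statistics, and the SVM settings below are all assumptions), a minimal Python sketch could look like this:

# Hedged sketch, not the authors' EyeNet2 implementation: fuse deep CNN
# features (stream 1) with simple hand-crafted image statistics (stream 2)
# and classify the concatenated vector with an SVM.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stream 1: deep features from a small CNN backbone (randomly initialized
# here for brevity; a pretrained or fine-tuned network would be used in practice).
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()          # expose the 512-d penultimate features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def deep_features(img: Image.Image) -> np.ndarray:
    with torch.no_grad():
        x = preprocess(img).unsqueeze(0)   # shape (1, 3, 224, 224)
        return backbone(x).squeeze(0).numpy()

# Stream 2: stand-in for hand-crafted fundus/vessel descriptors
# (per-channel intensity mean and standard deviation as placeholders).
def handcrafted_features(img: Image.Image) -> np.ndarray:
    arr = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    return np.concatenate([arr.mean(axis=(0, 1)), arr.std(axis=(0, 1))])

def two_stream_vector(img: Image.Image) -> np.ndarray:
    return np.concatenate([deep_features(img), handcrafted_features(img)])

# SVM head over the fused feature vector (RBF kernel chosen as a common default).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# train_images / train_labels / test_images are hypothetical placeholders:
# clf.fit(np.stack([two_stream_vector(im) for im in train_images]), train_labels)
# preds = clf.predict(np.stack([two_stream_vector(im) for im in test_images]))

The authors' actual implementation and the EyeNet labels are available in the linked repository, https://github.com/huckiyang/EyeNet2.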


Files in this item

Name: 1808.05754.pdf
Size: 12.89 MB
Format: PDF
Description: Preprint
