
dc.contributor.author: Huang, Jia Hong
dc.contributor.author: Wu, Ting Wei
dc.contributor.author: Worring, Marcel
dc.date.accessioned: 2021-10-06T06:07:10Z
dc.date.available: 2021-10-06T06:07:10Z
dc.date.issued: 2021-08-21
dc.identifier.citation: Huang, J.-H., Wu, T.-W., & Worring, M. (2021). Contextualized Keyword Representations for Multi-modal Retinal Image Captioning. Proceedings of the 2021 International Conference on Multimedia Retrieval. doi:10.1145/3460426.3463667
dc.identifier.isbn: 9781450384636
dc.identifier.doi: 10.1145/3460426.3463667
dc.identifier.uri: http://hdl.handle.net/10754/672161
dc.description.abstract: Medical image captioning automatically generates a description of the content of a given medical image. Traditional medical image captioning models generate a description from a single medical image input alone, which makes abstract medical descriptions or concepts hard to produce and limits the effectiveness of the approach. Multi-modal medical image captioning is one way to address this problem: textual input, e.g., expert-defined keywords, serves as one of the main drivers of description generation. Encoding the textual input and the medical image effectively is therefore essential for multi-modal medical image captioning. In this work, a new end-to-end deep multi-modal medical image captioning model is proposed, built on contextualized keyword representations, textual feature reinforcement, and masked self-attention. Evaluated on an existing multi-modal medical image captioning dataset, the proposed model is effective, improving on the state-of-the-art method by +53.2% in BLEU-avg and +18.6% in CIDEr. Code: https://github.com/Jhhuangkay/Contextualized-Keyword-Representations-for-Multi-modal-Retinal-Image-Captioning
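
The abstract only names the model's ingredients (contextualized keyword representations, an image encoder, and masked self-attention in the decoder); this record carries no implementation details. The sketch below is therefore only an illustrative reconstruction of that general recipe, not the authors' architecture: BERT as the contextualized keyword encoder, ResNet-50 as the image encoder, and a small Transformer decoder are all assumptions made for the example. The actual implementation is in the GitHub repository linked above.

# Minimal sketch (not the authors' code): fuse contextualized keyword
# embeddings with an image feature and decode a caption with masked
# self-attention. Encoder choices and dimensions are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import AutoModel, AutoTokenizer

class MultiModalCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=768, n_heads=8, n_layers=2):
        super().__init__()
        # Contextualized keyword encoder; BERT is an assumed stand-in.
        self.keyword_encoder = AutoModel.from_pretrained("bert-base-uncased")
        # Image encoder: ResNet-50 with the classifier head removed
        # (weights=None keeps the sketch download-free; use pretrained
        # weights in practice).
        cnn = resnet50(weights=None)
        self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])
        self.img_proj = nn.Linear(2048, d_model)   # CNN features -> d_model
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, keyword_ids, keyword_mask, caption_ids):
        # Contextualized keyword representations: one vector per keyword
        # token, conditioned on the whole keyword sequence.
        kw = self.keyword_encoder(
            input_ids=keyword_ids, attention_mask=keyword_mask
        ).last_hidden_state                        # (B, K, d_model)
        # Global image feature used as a single "visual token".
        img = self.img_proj(self.image_encoder(image).flatten(1)).unsqueeze(1)
        memory = torch.cat([img, kw], dim=1)       # fused multi-modal memory
        # Masked (causal) self-attention over the caption prefix.
        T = caption_ids.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.decoder(self.token_emb(caption_ids), memory, tgt_mask=causal)
        return self.lm_head(h)                     # next-token logits

# Toy forward pass with random inputs.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
kw = tok(["diabetic retinopathy, microaneurysm"], return_tensors="pt")
model = MultiModalCaptioner(vocab_size=tok.vocab_size)
logits = model(
    torch.randn(1, 3, 224, 224),                   # retinal image batch
    kw["input_ids"], kw["attention_mask"],         # expert-defined keywords
    torch.randint(0, tok.vocab_size, (1, 12)),     # caption token ids
)
print(logits.shape)                                # torch.Size([1, 12, 30522])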
dc.description.sponsorship: This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST) and the University of Amsterdam.
dc.publisher: ACM
dc.relation.url: https://dl.acm.org/doi/10.1145/3460426.3463667
dc.rights: Archived with thanks to ACM
dc.title: Contextualized keyword representations for multi-modal retinal image captioning
dc.type: Conference Paper
dc.conference.date: 2021-11-16 to 2021-11-19
dc.conference.name: 11th ACM International Conference on Multimedia Retrieval, ICMR 2021
dc.conference.location: Taipei, TWN
dc.eprint.version: Pre-print
dc.contributor.institution: University of Amsterdam, Amsterdam, Netherlands
dc.contributor.institution: Georgia Institute of Technology, Atlanta, GA, USA
dc.identifier.pages: 645-652
dc.identifier.arxivid: 2104.12471
dc.identifier.eid: 2-s2.0-85114874649
dc.date.published-online: 2021-08-21
dc.date.published-print: 2021-08-24
dc.date.posted: 2021-04-26

