Type
Article
Authors
Hu, Ping
Perazzi, Federico
Heilbron, Fabian Caba
Wang, Oliver
Lin, Zhe
Saenko, Kate
Sclaroff, Stan
Date
2020-11-20
Online Publication Date
2020-11-20
Print Publication Date
2021-01
Submitted Date
2020-08-25
Permanent link to this record
http://hdl.handle.net/10754/666065
Abstract
Accurate semantic segmentation requires rich contextual cues (large receptive fields) and fine spatial details (high resolution), both of which incur high computational costs. In this paper, we propose a novel architecture that addresses both challenges and achieves state-of-the-art performance for real-time semantic segmentation of high-resolution images and videos. The proposed architecture relies on our fast attention, a simple modification of the popular self-attention mechanism that captures the same rich contextual information at a small fraction of the computational cost by changing the order of operations. Moreover, to efficiently process high-resolution input, we apply an additional spatial reduction to intermediate feature stages of the network, with minimal loss in accuracy thanks to the use of the fast attention module to fuse features. We validate our method with a series of experiments and show that it outperforms existing real-time semantic segmentation approaches in both accuracy and speed on multiple datasets. On Cityscapes, our network achieves 74.4% mIoU at 72 FPS and 75.5% mIoU at 58 FPS on a single Titan X GPU, which is ~50% faster than the state-of-the-art while retaining the same accuracy.
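The "change in the order of operations" mentioned in the abstract is algebraic: standard self-attention computes softmax(QK^T)V, whose cost grows quadratically with the number of spatial positions n, while replacing the softmax with L2 normalization of Q and K lets the products be reassociated as Q(K^T V), which is linear in n. Below is a minimal PyTorch sketch of that reordering; the single-head, flattened-spatial formulation and the 1/n scaling are illustrative assumptions based on the abstract's description, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def fast_attention(q, k, v):
    # q, k, v: (n, c) tensors, where n = H*W flattened spatial
    # positions and c = channels; typically n >> c for high-res input.
    n = q.shape[0]
    # L2-normalize queries and keys so q @ k.T becomes a cosine
    # similarity; dropping the softmax is what makes the matrix
    # products reorderable (assumed from the paper's description).
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # Reordered computation: k.T @ v is only (c, c), so the total
    # cost is O(n * c^2) instead of the O(n^2 * c) required by
    # softmax(q @ k.T) @ v.
    context = k.transpose(0, 1) @ v
    return (q @ context) / n

For example, with a stride-8 feature map of a 1024x2048 image (n = 32768 positions, c = 128 channels), the (c, c) intermediate has ~16K entries, versus ~10^9 entries for the full (n, n) affinity matrix of standard self-attention.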
Citation
Hu, P., Perazzi, F., Heilbron, F. C., Wang, O., Lin, Z., Saenko, K., & Sclaroff, S. (2020). Real-time Semantic Segmentation with Fast Attention. IEEE Robotics and Automation Letters, 1–1. doi:10.1109/lra.2020.3039744
Publisher
IEEE
arXiv
2007.03815
Additional Links
https://ieeexplore.ieee.org/document/9265219/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9265219
http://arxiv.org/pdf/2007.03815
DOI
10.1109/LRA.2020.3039744