
dc.contributor.author: Liu, Yi-Chieh
dc.contributor.author: Hsieh, Yung-An
dc.contributor.author: Chen, Min-Hung
dc.contributor.author: Yang, C.-H. Huck
dc.contributor.author: Tegner, Jesper
dc.contributor.author: Tsai, Y.-C. James
dc.date.accessioned: 2019-12-19T06:39:41Z
dc.date.available: 2019-12-19T06:39:41Z
dc.date.issued: 2020-04-09
dc.identifier.citation: Liu, Y.-C., Hsieh, Y.-A., Chen, M.-H., Yang, C.-H. H., Tegner, J., & Tsai, Y.-C. J. (2020). Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). doi:10.1109/icassp40776.2020.9053783
dc.identifier.isbn: 978-1-5090-6632-2
dc.identifier.issn: 1520-6149
dc.identifier.doi: 10.1109/ICASSP40776.2020.9053783
dc.identifier.uri: http://hdl.handle.net/10754/660688
dc.description.abstract: Performing driving behaviors based on causal reasoning is essential to ensure driving safety. In this work, we investigated how state-of-the-art 3D Convolutional Neural Networks (CNNs) perform at classifying driving behaviors based on causal reasoning. We proposed a perturbation-based visual explanation method to inspect the models’ performance visually. By examining the video attention saliency, we found that existing models could not precisely capture the causes (e.g., a traffic light) of a specific action (e.g., stopping). Therefore, we proposed the Temporal Reasoning Block (TRB) and introduced it into the models. With the TRB models, we achieved an accuracy of 86.3%, outperforming the state-of-the-art 3D CNNs from previous works. The attention saliency also demonstrated that the TRB helped the models focus on the causes more precisely. With both numerical and visual evaluations, we concluded that our proposed TRB models provide accurate driving behavior prediction by learning the causal reasoning behind the behaviors.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.url: https://ieeexplore.ieee.org/document/9053783/
dc.relation.url: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9053783
dc.rights: Archived with thanks to IEEE
dc.subject: Self-driving Vehicles
dc.subject: Driving Behaviors Reasoning
dc.subject: Action Recognition
dc.subject: Self-attention Models
dc.subject: Video Saliency
dc.title: Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding
dc.type: Conference Paper
dc.contributor.department: Biological and Environmental Sciences and Engineering (BESE) Division
dc.contributor.department: Bioscience Program
dc.conference.date: 4-8 May 2020
dc.conference.name: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
dc.conference.location: Barcelona, Spain
dc.eprint.version: Post-print
dc.contributor.institution: Georgia Institute of Technology, School of Civil and Environmental Engineering, Atlanta, GA, USA
dc.contributor.institution: Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, GA, USA
dc.identifier.arxivid: 1911.02172
kaust.person: Tegner, Jesper
refterms.dateFOA: 2019-12-19T06:41:18Z
dc.date.published-online: 2020-04-09
dc.date.published-print: 2020-05
dc.date.posted: 2019-11-06
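
Note: the abstract above mentions a perturbation-based visual explanation method used to check whether the models attend to the true causes of a behavior. The following is a minimal, purely illustrative Python sketch of the general perturbation-based temporal saliency idea, not the authors' implementation; the `predict_proba` interface, the frame-occlusion scheme, and the toy model are assumptions made only for this example.

```python
# Hypothetical sketch: perturbation-based temporal saliency for a video
# classifier. Each frame is scored by how much occluding it lowers the
# probability of the target behavior (e.g., "stopping").
import numpy as np

def temporal_saliency(predict_proba, video, target_class, baseline=0.0):
    """predict_proba: callable mapping a (T, H, W, C) clip to class probabilities.
    video: np.ndarray of shape (T, H, W, C).
    target_class: index of the predicted behavior class."""
    original = predict_proba(video)[target_class]
    saliency = np.zeros(video.shape[0])
    for t in range(video.shape[0]):
        perturbed = video.copy()
        perturbed[t] = baseline                       # occlude one frame
        saliency[t] = original - predict_proba(perturbed)[target_class]
    return saliency                                   # larger drop -> more causal frame

if __name__ == "__main__":
    # Toy stand-in model: class-1 "probability" driven by mean clip intensity.
    def toy_model(clip):
        p = float(np.clip(clip.mean(), 0.0, 1.0))
        return np.array([1.0 - p, p])

    clip = np.random.rand(8, 32, 32, 3).astype(np.float32)
    print(temporal_saliency(toy_model, clip, target_class=1))
```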


Files in this item

Name: Preprintfile1.pdf
Size: 756.2Kb
Format: PDF
Description: Pre-print
