RelTransformer: A Transformer-Based Long-Tail Visual Relationship Recognition

Abstract
The visual relationship recognition (VRR) task aims at understanding the pairwise visual relationships between interacting objects in an image. These relationships typically have a long-tail distribution due to their compositional nature. This problem gets more severe when the vocabulary becomes large, rendering this task very challenging. This paper shows that modeling an effective message-passing flow through an attention mechanism can be critical to tackling the compositionality and long-tail challenges in VRR. The method, called RelTransformer, represents each image as a fully-connected scene graph and restructures the whole scene into the relation-triplet and global-scene contexts. It directly passes the message from each element in the relation-triplet and global-scene contexts to the target relation via self-attention. We also design a learnable memory to augment the long-tail relation representation learning. Through extensive experiments, we find that our model generalizes well on many VRR benchmarks. Our model outperforms the best-performing models on two large-scale long-tail VRR benchmarks, VG8K-LT (+2.0% overall acc) and GQA-LT (+26.0% overall acc), both having a highly skewed distribution towards the tail. It also achieves strong results on the VG200 relation detection task.
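To make the described message-passing flow concrete, below is a minimal PyTorch sketch of the idea in the abstract: a target relation token attends, via self-attention, over its relation-triplet context, a global-scene context, and a learnable memory. This is not the released implementation; the class name RelTransformerSketch, the memory size, feature dimensions, and layer counts are illustrative assumptions only.

import torch
import torch.nn as nn

class RelTransformerSketch(nn.Module):
    # Hypothetical sketch, not the authors' code: the relation token receives
    # messages from the triplet context, the scene context, and a learnable
    # memory through standard self-attention layers.
    def __init__(self, dim=512, num_heads=8, num_memory_slots=64, num_relations=200):
        super().__init__()
        # Learnable memory slots meant to augment long-tail relation representations.
        self.memory = nn.Parameter(torch.randn(num_memory_slots, dim) * 0.02)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_relations)

    def forward(self, rel_token, triplet_ctx, scene_ctx):
        # rel_token:   (B, 1, D) target relation query (e.g. union-region feature)
        # triplet_ctx: (B, 3, D) subject / object / union features
        # scene_ctx:   (B, N, D) features of all detected objects in the image
        B = rel_token.size(0)
        memory = self.memory.unsqueeze(0).expand(B, -1, -1)
        tokens = torch.cat([rel_token, triplet_ctx, scene_ctx, memory], dim=1)
        # Every context element passes a message directly to the relation token.
        tokens = self.encoder(tokens)
        return self.classifier(tokens[:, 0])  # classify from the relation token

# Usage with random features standing in for detector outputs.
model = RelTransformerSketch()
logits = model(torch.randn(2, 1, 512), torch.randn(2, 3, 512), torch.randn(2, 20, 512))
print(logits.shape)  # torch.Size([2, 200])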

Citation
Chen, J., Agarwal, A., Abdelkarim, S., Zhu, D., & Elhoseiny, M. (2022). RelTransformer: A Transformer-Based Long-Tail Visual Relationship Recognition. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr52688.2022.01890

Publisher
IEEE

Conference/Event Name
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022

DOI
10.1109/CVPR52688.2022.01890

arXiv
2104.11934

Additional Links
https://ieeexplore.ieee.org/document/9879211/

Version History

Version  Date                 Summary
2*       2022-11-30 09:58:07  Published as conference paper
1        2021-04-28 07:23:24
* Selected version