Type
Conference Paper
Authors
Newell, Alejandro; Deng, Jia
KAUST Grant Number
OSR-2015-CRG4-2639
Date
2017
Permanent link to this record
http://hdl.handle.net/10754/626720
Abstract
Graphs are a useful abstraction of image content. Not only can graphs represent details about individual objects in a scene, but they can also capture the interactions between pairs of objects. We present a method for training a convolutional neural network such that it takes in an input image and produces a full graph definition. This is done end-to-end in a single stage with the use of associative embeddings. The network learns to simultaneously identify all of the elements that make up a graph and piece them together. We benchmark on the Visual Genome dataset, and demonstrate state-of-the-art performance on the challenging task of scene graph generation.
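To illustrate the grouping idea the abstract refers to, the sketch below shows how associative embeddings can be decoded at inference time: each detected relationship predicts a source and a target embedding, and the relationship is attached to whichever detected objects have the nearest embeddings. This is a minimal NumPy sketch; the function name group_scene_graph, the array shapes, and the nearest-neighbour matching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def group_scene_graph(obj_embed, rel_src_embed, rel_dst_embed):
    """Illustrative sketch of associative-embedding grouping at inference time.

    obj_embed:     (N, d) embedding predicted for each detected object
    rel_src_embed: (M, d) "source" embedding predicted for each detected relationship
    rel_dst_embed: (M, d) "target" embedding predicted for each detected relationship

    Each relationship is attached to the objects whose embeddings are closest
    to its source/target embeddings, yielding (subject, object) index pairs.
    """
    edges = []
    for src, dst in zip(rel_src_embed, rel_dst_embed):
        subj = int(np.argmin(np.linalg.norm(obj_embed - src, axis=1)))
        obj = int(np.argmin(np.linalg.norm(obj_embed - dst, axis=1)))
        edges.append((subj, obj))
    return edges

# Toy example: 3 detected objects, 2 detected relationships.
rng = np.random.default_rng(0)
objects = rng.normal(size=(3, 8))
rel_src = objects[[0, 2]] + 0.01 * rng.normal(size=(2, 8))  # near objects 0 and 2
rel_dst = objects[[1, 0]] + 0.01 * rng.normal(size=(2, 8))  # near objects 1 and 0
print(group_scene_graph(objects, rel_src, rel_dst))  # expected pairs: (0, 1) and (2, 0)
```

Roughly speaking, this nearest-neighbour decoding is reliable because the associative embedding loss used in training pulls embeddings that refer to the same object together and pushes embeddings of different objects apart.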
Sponsors
This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-2015-CRG4-2639.
Conference/Event name
31st Annual Conference on Neural Information Processing Systems (NIPS)
arXiv
1706.07365
Additional Links
http://arxiv.org/abs/1706.07365v1
http://arxiv.org/pdf/1706.07365v1