Perez, Itzel Carolina Delgadillo
Thabet, Ali Kassem
KAUST Department: Computer Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Visual Computing Center (VCC)
Electrical Engineering Program
Permanent link to this record: http://hdl.handle.net/10754/660654
Abstract: Convolutional Neural Networks have been very successful at solving a variety of computer vision tasks such as object classification and detection, semantic segmentation, and activity understanding, to name just a few. One key enabling factor for their strong performance has been the ability to train very deep networks. Despite their huge success in many tasks, CNNs do not work well with non-Euclidean data, which is prevalent in many real-world applications. Graph Convolutional Networks (GCNs) offer an alternative that allows neural networks to operate on non-Euclidean input. While GCNs already achieve encouraging results, they are currently limited to architectures with a relatively small number of layers, primarily due to vanishing gradients during training. This work transfers concepts such as residual/dense connections and dilated convolutions from CNNs to GCNs in order to successfully train very deep GCNs. We show the benefit of using deep GCNs experimentally across various datasets and tasks. Specifically, we achieve promising performance in part segmentation and semantic segmentation on point clouds and in node classification of protein functions across biological protein-protein interaction graphs. We believe that the insights in this work will open avenues for future research on GCNs and their application to further tasks not explored in this paper.
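The abstract's central idea is stacking many graph convolution layers by adding skip connections, analogous to ResNets. Below is a minimal, hypothetical sketch of that residual idea in plain PyTorch, using a dense normalized adjacency matrix `adj_hat` and a toy `SimpleGCNLayer`; it is an illustration of the concept only, not the authors' DeepGCN implementation or its actual API.

```python
# Hypothetical sketch: a residual GCN block, assuming plain PyTorch and a
# dense normalized adjacency matrix. Not the paper's official code.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One toy graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj_hat):
        # Aggregate neighbor features via the adjacency, then transform.
        return torch.relu(self.linear(adj_hat @ x))


class ResGCNBlock(nn.Module):
    """Residual GCN block: output = layer(x) + x.
    The identity shortcut eases gradient flow, which is what allows
    many such blocks to be stacked into a deep GCN."""
    def __init__(self, dim):
        super().__init__()
        self.layer = SimpleGCNLayer(dim)

    def forward(self, x, adj_hat):
        return self.layer(x, adj_hat) + x


# Toy usage: 5 nodes with 16-dim features; identity adjacency (self-loops only).
x = torch.randn(5, 16)
adj_hat = torch.eye(5)
block = ResGCNBlock(16)
out = block(x, adj_hat)  # same shape as x: (5, 16)
```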
Citation: Li, G., Mueller, M., Qian, G., Delgadillo Perez, I. C., Abualshour, A., Thabet, A. K., & Ghanem, B. (2021). DeepGCNs: Making GCNs Go as Deep as CNNs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. doi:10.1109/tpami.2021.3074057
Sponsors: The authors thank Adel Bibi and Hani Itani for their help with the project. This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding.
Relations: Is Supplemented By: