Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory
Type
Conference Paper
KAUST Grant Number
KUS-C1-016-04
Date
2013-05
Permanent link to this record
http://hdl.handle.net/10754/599561
Abstract
We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership-class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to the best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
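The sketch below is a minimal, single-node illustration of the hub-aware edge-list partitioning idea summarized in the abstract: edges incident to high-degree hubs are spread across partitions, and each partition keeps a ghost copy of the hub. It is not the authors' implementation; the degree threshold `hub_threshold`, the round-robin spreading of hub edges, and the modulo placement of low-degree edges are illustrative assumptions.

```cpp
// Minimal sketch (assumptions noted above, not the paper's implementation)
// of hub-aware edge-list partitioning with ghost copies of hub vertices.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>
#include <utility>
#include <vector>

using Edge = std::pair<std::uint64_t, std::uint64_t>;

struct Partition {
  std::vector<Edge> edges;                      // edges owned by this partition
  std::unordered_set<std::uint64_t> ghost_hubs; // local ghost copies of hubs
};

// Edges incident to a hub are spread round-robin across all partitions, and
// each receiving partition records a ghost copy of the hub; low-degree edges
// are placed by a simple hash (modulo) of the source vertex.
std::vector<Partition> partition_edges(const std::vector<Edge>& edges,
                                       int num_parts,
                                       std::uint64_t hub_threshold) {
  std::unordered_map<std::uint64_t, std::uint64_t> degree;
  for (const auto& e : edges) { ++degree[e.first]; ++degree[e.second]; }

  auto is_hub = [&](std::uint64_t v) { return degree[v] >= hub_threshold; };

  std::vector<Partition> parts(num_parts);
  std::size_t rr = 0;  // round-robin counter for hub edges
  for (const auto& e : edges) {
    int owner;
    if (is_hub(e.first) || is_hub(e.second)) {
      owner = static_cast<int>(rr++ % num_parts);     // spread hub edges
    } else {
      owner = static_cast<int>(e.first % num_parts);  // hash low-degree source
    }
    parts[owner].edges.push_back(e);
    if (is_hub(e.first))  parts[owner].ghost_hubs.insert(e.first);
    if (is_hub(e.second)) parts[owner].ghost_hubs.insert(e.second);
  }
  return parts;
}

int main() {
  // Tiny example: vertex 0 is a hub (degree 4 >= threshold 3).
  std::vector<Edge> edges = {{0,1},{0,2},{0,3},{0,4},{1,2},{3,4}};
  auto parts = partition_edges(edges, /*num_parts=*/2, /*hub_threshold=*/3);
  for (std::size_t p = 0; p < parts.size(); ++p)
    std::cout << "partition " << p << ": " << parts[p].edges.size()
              << " edges, " << parts[p].ghost_hubs.size() << " ghost hubs\n";
}
```

In a distributed setting, the ghost copies let each partition aggregate contributions to a hub locally and send a single combined message to the hub's owner, which is how the communication hotspot around high-degree vertices is reduced.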
Citation
Pearce R, Gokhale M, Amato NM (2013) Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory. 2013 IEEE 27th International Symposium on Parallel and Distributed Processing. Available: http://dx.doi.org/10.1109/IPDPS.2013.72
Sponsors
This work was partially performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-CONF-588232). Funding was partially provided by LDRD 11-ERD-008. Portions of the experiments were performed at the Livermore Computing facility resources. This research used resources of the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357. ALCF resources were provided through an INCITE 2012 award for the Fault-Oblivious Exascale Computing Environment project. This research was supported in part by NSF awards CNS-0615267, CCF-0833199, CCF-0830753, IIS-0917266, IIS-0916053, NSF/DNDO award 2008-DN-077-ARI018-02, by DOE awards DE-FC52-08NA28616, DE-AC02-06CH11357, B575363, B575366, by THECB NHARP award 000512-0097-2009, by Samsung, Chevron, IBM, Intel, Oracle/Sun, and by Award KUS-C1-016-04, made by King Abdullah University of Science and Technology (KAUST). Pearce is supported in part by a Lawrence Scholar fellowship at LLNL.
DOI
10.1109/IPDPS.2013.72