WO2023184836A1 - Subgraph segmentation optimization method based on inter-core memory access and application thereof - Google Patents

Subgraph segmentation optimization method based on inter-core memory access and application thereof

Info

Publication number
WO2023184836A1
WO2023184836A1 PCT/CN2022/114568 CN2022114568W WO2023184836A1 WO 2023184836 A1 WO2023184836 A1 WO 2023184836A1 CN 2022114568 W CN2022114568 W CN 2022114568W WO 2023184836 A1 WO2023184836 A1 WO 2023184836A1
Authority
WO
WIPO (PCT)
Prior art keywords
vertex
graph
vertices
subgraph
target
Prior art date
Application number
PCT/CN2022/114568
Other languages
English (en)
French (fr)
Inventor
曹焕琦
王元炜
Original Assignee
深圳清华大学研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳清华大学研究院
Publication of WO2023184836A1 publication Critical patent/WO2023184836A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2453Query optimisation
    • G06F16/24532Query optimisation of parallel queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9024Graphs; Linked lists

Definitions

  • the present invention relates generally to a three-class vertex-degree-aware 1.5-dimensional graph partitioning method and its applications, and more specifically to large-scale graph computing methods, distributed parallel computing systems and computer-readable media.
  • the graph computing framework is a general programming framework used to support graph computing applications.
  • on China's new-generation Sunway supercomputer, a new-generation "Shentu" ultra-large-scale graph computing framework is provided to support graph computing applications at full-machine scale with tens of trillions of vertices and three hundred trillion edges.
  • Graph computing applications are data-intensive applications that rely on data consisting of vertices and edges connecting two vertices.
  • Typical applications include PageRank for web page importance ranking, breadth-first search (BFS) for graph traversal, label propagation for weakly connected component (WCC) solving of graphs, single source shortest path (SSSP), etc.
  • the one-dimensional partitioning method is a vertex-centric partitioning method: the vertices of the data graph are evenly divided among different machines, and each vertex is stored together with all of its adjacent edges, so that heavy vertices (vertices with high out-degree or in-degree) set up delegates on many nodes.
  • the two-dimensional partitioning method is an edge-based partitioning method. Unlike one-dimensional partitioning, two-dimensional partitioning evenly distributes the edges (rather than the vertices) of the graph across the computing nodes to achieve load balance; it is equivalent to deploying delegates on the row and the column where each vertex is located.
  • the load of graph data is seriously unbalanced, which is manifested in the severe imbalance of edges at different vertices, and the degrees of different vertices are very different.
  • both one-dimensional and two-dimensional partitioning will face scalability problems.
  • The one-dimensional vertex partitioning method causes too many heavy vertices to deploy near-global delegates, and the two-dimensional vertex partitioning method causes too many vertices to deploy delegates on rows and columns.
  • a graph computing method based on distributed parallel computing including: obtaining data of a graph to be calculated.
  • the graph includes a plurality of vertices and edges, where each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex; the operation represented by the corresponding second vertex receives as input the output of the operation represented by the corresponding first vertex, and the edge X→Y represents the edge from vertex X to vertex Y.
  • the edges of the graph are stored in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing; for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, the forward graph is segmented by target vertex, i.e., by the columns of the matrix, and the reverse graph is segmented by source vertex, i.e., by the rows of the matrix.
  • the graph computing method is performed on a general-purpose multi-core processor, and the predetermined range is the size that can be stored in the last level cache LLC (Last Level Cache).
  • graph computing performs edge processing in pull mode on a heterogeneous many-core processor, using the remote access mechanism of the local data memory (LDM): the source vertices of a segmented subgraph SSG of the reverse graph are split and stored across the LDMs of a slave core array.
  • then, during graph traversal, a slave core fetches data via remote load (RLD) from the slave core to which the target vertex's data belongs.
  • the source vertices whose degree exceeds the predetermined threshold are numbered according to the following rule, from high bits to low bits: segment number; cache line number; slave core number; slave-core-local number.
  • the number of bits of the cache line number is calculated from the total size of the vertex data elements, to ensure that the local data memory LDM can hold all of them; the next six bits distinguish the owning slave core; the number of bits of the slave-core-local number is calculated from the minimum vertex data element size, to ensure DMA efficiency.
  • each slave core first uses a strided direct memory access (DMA) operation to prefetch the vertex data of its segment according to the slave-core-number bits; during edge processing in pull mode, when a source vertex is read, LDM remote load (LDM RLD) is used to read the corresponding LDM address of the slave core to which the source vertex belongs.
  • a distributed parallel computing system which has multiple super nodes.
  • Each super node is composed of multiple nodes.
  • Each node is a computing device with computing capabilities.
  • communication between nodes inside a super node is faster than communication between nodes across super nodes.
  • the nodes are divided into grids, with one node in each grid.
  • the internal nodes of a super node are logically arranged in a row.
  • the distributed parallel computing system stores the graph and performs graph computation according to the following rules: obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex; the operation represented by the corresponding second vertex receives as input the output of the operation represented by the corresponding first vertex.
  • the edge X ⁇ Y represents the edge from vertex X to vertex Y.
  • the graph is stored in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing.
  • for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, the forward graph is segmented by target vertex, i.e., by the columns of the matrix, and the reverse graph is segmented by source vertex, i.e., by the rows of the matrix; a subgraph is further divided into multiple segmented subgraph SSG components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range.
  • a computer-readable medium on which instructions are stored, and when executed by a distributed parallel computing system, the instructions execute the aforementioned graph computing method.
  • through the segmented subgraph method for high-degree vertices of this embodiment, the graph (sparse matrix) is segmented by target vertex (the columns of the matrix) and divided into multiple SSG components, so that the target vertices in each SSG are limited to a certain range; this avoids or alleviates the traditional problem that the spatio-temporal locality of accesses to neighbor-vertex data is very poor because the vertex range of a single large graph is too large.
  • FIG. 1 shows a schematic architectural diagram of a supercomputer Shenwei equipped with super nodes that can be applied to implement graph data calculation according to an embodiment of the present invention.
  • Figure 2 shows a schematic diagram of a graph partitioning strategy according to an embodiment of the present invention.
  • Figure 3 shows a schematic diagram of the edge number distribution of sub-graphs at the scale of the whole machine in the 1.5-dimensional division according to the embodiment of the present invention.
  • Figure 4 shows a schematic diagram of traditional graph storage in the form of a sparse matrix (left part of Figure 4) and segmented subgraph storage optimization according to an embodiment of the present invention (right part of Figure 4).
  • Figure 5 shows a schematic diagram of processing of reduction distribution type communication according to an embodiment of the present invention.
  • Figure 6 schematically shows a flow chart of the process of classifying slave cores, distributing data within a core group, and writing to external buckets according to an embodiment of the present invention.
  • Each node is a computing device with computing capabilities.
  • Supernode A supernode is composed of a predetermined number of nodes. The communication between nodes within the supernode is physically faster than the communication between nodes across the supernode.
  • Maintaining vertex status: when related edge processing needs to be performed and the associated vertex does not belong to the local machine, the vertex status data is synchronized from the node to which the vertex belongs through network communication to establish a delegate, so that edge update operations can directly operate on the local delegate without accessing the remote original vertex.
  • Delegate of the vertex When the associated vertex does not belong to the local machine, the copy established through communication and responsible for the local update of the edge is called the delegate of the vertex.
  • A delegate can be used for outgoing edges and for incoming edges: for outgoing edges, the source vertex synchronizes its current state to its delegate through communication; for incoming edges, the delegate first merges partial edge messages locally and then sends the merged message to the target vertex through communication.
  • Slave core: the computing processing element (CPE) in the heterogeneous many-core architecture.
  • the graph data in this article is, for example, social networks, web page data, knowledge graphs, and other graphs that are recorded, stored, and processed in the form of data.
  • the size of a graph can, for example, reach tens of trillions of vertices and hundreds of trillions of edges.
  • FIG. 1 shows a schematic architectural diagram of a supercomputer Shenwei equipped with super nodes that can be applied to implement graph data calculation according to an embodiment of the present invention.
  • MPE represents the management processing unit (referred to as the main core)
  • NOC represents the network on chip
  • CPE represents the computing processing unit (referred to as the slave core)
  • LDM represents the local data storage
  • NIC represents the interconnection network card
  • LDM represents the local data memory (Local Data Memory).
  • MC stands for memory controller.
  • the graph includes a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex; the operation represented by the corresponding second vertex receives as input the output of the operation represented by the corresponding first vertex, and the edge X→Y represents the edge from vertex X to vertex Y.
  • the oversubscribed fat tree is an interconnection network topology.
  • The super nodes are connected to one another and communicate through the central switching network of the oversubscribed fat tree.
  • the processors within each super node communicate over a local fast network; in contrast, communication across super nodes must traverse the oversubscribed fat tree, so the bandwidth is lower and the latency is higher.
  • a hybrid dimension division method based on three types of degree vertices is provided.
  • the vertex set is divided by degree into extremely-high-degree vertices (E for Extreme, e.g., degree greater than the total number of nodes), high-degree vertices (H for High, e.g., degree greater than the number of nodes within a super node) and regular vertices (R for Regular).
  • R-type vertices are also called L-type vertices (L for Low, i.e., vertices with relatively low degree); directed graphs are partitioned separately by in-degree and by out-degree.
  • the in-degree partition sets and the out-degree partition sets are denoted Xi and Xo respectively, where X is E, H or R.
  • a super node is composed of a predetermined number of nodes.
  • the communication speed between nodes within the super node is faster than the communication speed between nodes across the super node.
  • the nodes are divided into a grid, with one node in each grid cell.
  • the internal nodes of the super node are logically arranged in a row, or a row is mapped to the super node.
  • the vertices are evenly divided among the nodes by number; Ri and Ro are maintained by the node to which they belong.
  • the Ho vertex status is maintained synchronously on the column, the Hi vertex status is maintained synchronously on both the column and the row, and the Eo and Ei vertex statuses are maintained synchronously globally.
  • Eo and Ho are collectively called EHo, which means Eo and Ho
  • Ei and Hi are collectively called EHi, which means Ei and Hi.
  • for vertex classification, vertices with degree greater than a first threshold are marked E and placed in the extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the regular class; the first threshold is greater than the second threshold.
  • the threshold here can be set as needed.
  • in one example, the first threshold is the total number of nodes, which means that vertices with degree greater than the total number of nodes belong to the extremely-high-degree class.
  • the second threshold is the number of nodes within a super node; that is, vertices whose degree is greater than the number of nodes in a super node but less than the total number of nodes are placed in the high-degree class, while vertices whose degree is less than the number of nodes in a super node are placed in the regular class.
  • the first threshold is the total number of nodes multiplied by a constant factor
  • the second threshold is the number of nodes within the supernode multiplied by a second constant factor.
  • Delegates are arranged on the columns for Ho vertices, and messages are sent out along the rows; delegates are arranged on the rows and columns for Hi vertices, and, depending on the subgraph to which an edge belongs, the delegate on the row or on the column is selected to merge messages, which are then sent to the target vertex.
  • vertices with medium degree are generally connected to a large number of edges whose other ends cover all super nodes; there are clearly more H vertices than E vertices, and arranging global delegates for H vertices as for E vertices would cause a large amount of unnecessary communication.
  • No delegate is set for R vertices. Since these vertices have sufficiently low degree, there is little gain in assigning delegates to them, while time and space would be needed to manage the delegates. Therefore, delegates are not set for R vertices; instead, remote edges are used to connect them to other vertices. When these edges are accessed, a message must be passed per edge: when an Ro vertex wants to update a neighbor vertex, it sends its own number and the edge message through network communication to the target vertex or to the target's incoming-edge delegate to perform the update; when an Ri vertex is to receive an update from a neighbor vertex, the neighbor (source) vertex sends the source number and the edge message to the Ri vertex through network communication, either directly or via its outgoing-edge delegate, to perform the update.
  • Figure 2 shows a schematic diagram of a graph partitioning strategy according to an embodiment of the present invention.
  • the allocation and storage of vertex-corresponding agents are performed according to edge conditions.
  • the X2Y form in Figure 2 and other graphs represents the edge from the source vertex X to the target vertex Y, and 2 represents the English to.
  • for edges EHo→EHi, this type of edge is stored on the node located at the grid column of the source vertex's node and the grid row of the target vertex's node; that is, the subgraph internal to the high-degree vertices is partitioned two-dimensionally, minimizing communication traffic. See, for example, node 01 in Figure 2.
  • for edges Eo→Ri, this type of edge is stored on the node where the R vertex is located, to allow extremely-high-degree vertices to be maintained globally during computation.
  • the vertex state is first sent to the delegate of the Eo vertex, and the Ri vertex on the local machine is then updated through the delegate, reducing the number of messages that need to be communicated.
  • see, for example, the edge E->L2 shown in node 11 of Figure 2.
  • for edges Ho→Ri, this type of edge is stored on the node located at the grid column of the H vertex's node and the grid row of the R vertex's node, to allow high-degree vertices to be maintained on the columns during computation and their messages to be aggregated in communication across super nodes.
  • when the message of a Ho vertex must be sent to multiple target vertices in the same super node, because the source vertex has an available delegate node at its own column and the target's row, one message is first sent across the super node to that available delegate node, and the delegate node then sends it to all target vertices through the fast network within the super node.
  • for edges Ro→Ei, this type of edge is stored on the node where the R vertex is located, to allow extremely-high-degree vertices to be maintained globally during computation and their communication messages to be aggregated; see, for example, node 11 in Figure 2.
  • for edges Ro→Hi, this type of edge is stored on the node where the R vertex is located, but the H vertex is stored according to its number on the column, so that during computation its update messages can be accumulated on the node at the row of R and the column of H, aggregating network communication across super nodes; see, for example, the L1-H communication in node 01 in Figure 2.
  • for edges Ro→Ri, the forward and reverse edges are stored at the source and target vertices respectively, and edge messages are delivered by row and column forwarding on the communication grid.
  • the above partitioning into three classes of vertices and six classes of edges can effectively resolve the strong load skew caused by power-law distributions: the edges associated with high-degree vertices are stored, as appropriate, according to the two-dimensional partition or the one-dimensional partition on the opposite side, keeping the number of edges globally even and thereby ensuring balance of both storage and load. Such a partition can solve the problem of extremely large vertices in extreme graphs.
  • after this classification, the processing of the H vertex set is much more efficient than processing high-degree vertices under a plain one-dimensional partition.
  • a Ho vertex that originally had to send a message per edge would send messages to multiple nodes in the same super node, crowding the top-level network bandwidth; after this optimization, the message is sent only once to each super node, saving the top-level network.
  • since the message is sent only once to one delegate node in the target super node, no large amount of useless communication or extra storage is introduced.
  • the edge number distribution of the subgraph at the scale of the whole machine is shown in Figure 3 (drawn according to the cumulative distribution function).
  • the range, i.e., the relative deviation between the maximum and the minimum, does not exceed 5% for EH2EH and is only 0.35% for the other subgraphs.
  • since E and H have delegates on columns and rows, a subgraph whose two ends are both E or H (sometimes called EH2EH, or EH->EH, herein) is partitioned two-dimensionally.
  • the two subgraphs from E to L (E2L) and from L to E (L2E) are attached to the owner of L, like the heavy vertices and heavy delegates in 1D partitioning.
  • when |H| = 0, the method of the embodiment of the present invention degenerates into a one-dimensional partition similar to one with heavy delegates; the difference is that in the present invention the edges between heavy vertices are partitioned two-dimensionally. Compared with one-dimensional partitioning with heavy delegates, the inventive method further separates high-degree vertices into two levels, producing a topology-aware data partition. It preserves enough heavy vertices to allow better early exit in direction optimization, and it avoids globally sharing the communication of all these heavy vertices by setting up delegates for H only on rows and columns.
  • sub-iteration adaptive direction selection to support fast exit is also proposed.
  • an embodiment of per-sub-iteration adaptive direction selection is proposed. Specifically, two traversal modes, "push" and "pull", are implemented, and the traversal mode is selected automatically according to the proportion of active vertices. More specifically, for the three types of locally computed edges EHo→EHi, Eo→Ri and Ro→Ei, the "pull" or "push" mode is selected according to whether the active proportion of the target vertices exceeds a configurable threshold; for the three types of locally computed edges Ho→Ri, Ro→Hi and Ro→Ri, the "pull" or "push" mode is selected according to whether the ratio of the active proportion of the target vertices to the active proportion of the source vertices exceeds a configurable threshold.
  • in breadth-first search, when the iteration is judged to be at the head or tail, graph traversal is performed by "pushing" from the source vertices to the target vertices; when the iteration is judged to be in the middle, graph traversal is performed by "pulling" from the target vertices to the source vertices.
  • in pull mode, a status flag is set for each vertex to determine whether the vertex has reached the dead state, in which it no longer responds to further messages; once the dead state is confirmed, the processing of the remaining messages is skipped.
  • a quick-exit interface is provided to support performance optimization of traversal algorithms: the user can pass in the parameter fcond to determine whether a message can still cause an update, so that the remaining messages can be skipped.
  • the traversal direction can be adaptively switched to avoid useless calculations, minimize communication volume, and optimize graph computing performance.
  • the following segmented subgraph method is proposed: for a subgraph (sparse matrix) in which both the source vertices and the target vertices are of high degree, the forward graph is segmented by target vertex (the columns of the matrix) and the reverse graph is segmented by source vertex (the rows of the matrix).
  • here the forward graph refers to the directed graph from the source vertices to the target vertices, and the reverse graph refers to the graph from the target vertices to the source vertices.
  • this divides a graph into multiple SSG components, so that the target vertices in each SSG are limited to a certain range.
  • this range is generally chosen to be a size that can be stored in the Last Level Cache (LLC).
  • Figure 4 shows traditional graph storage in the form of a sparse matrix (left part of the figure) and segmented subgraph storage optimization according to an embodiment of the present invention (right part of Figure 4).
  • the remote access mechanism of local data memory LDM was chosen to replace fast random target access on the LLC.
  • the target vertices of an SSG are split and stored across the LDMs of a slave core array; then, during graph traversal, a slave core fetches data via remote load (RLD) from the slave core to which the target vertex's data belongs.
  • the target vertices of the first extremely-high-degree class and of the second high-degree class are numbered according to the following rule, from high bits to low bits: segment number, cache line number, slave core number and slave-core-local number.
  • each slave core can first use a strided DMA operation to prefetch the vertex data of its segment according to the slave-core-number bits; during edge processing in pull mode, when the target vertices EHi of the first extremely-high-degree class and the second high-degree class are read, LDM remote load (LDM RLD) is used to read the corresponding LDM address of the slave core to which the source vertex belongs.
  • the graph (sparse matrix) is segmented by target vertex (the columns of the matrix) and divided into multiple SSG components, so that the target vertices in each SSG are limited to a certain range; this avoids or alleviates the traditional problem that the spatio-temporal locality of accesses to neighbor-vertex data is very poor because the vertex range of a single large graph is too large.
  • the segmented subgraph method of the third embodiment is not limited to the 1.5-dimensional graph partitioning method of the first embodiment; it can be applied to any subgraph whose source vertices and target vertices are relatively high-degree vertices partitioned according to any other criterion.
  • the reduction of vertex data is a custom function provided by the user and cannot be mapped to the reduction operator of MPI.
  • the reduction of high-degree vertex data and the computation it requires leave considerable room for loop fusion, and the amount of memory access involved is also significant.
  • this embodiment uses the slave core group to optimize this communication process, and the optimization can be embedded in the processing of the EH subgraphs.
  • the following optimization is performed: for reductions on rows and columns, a ring algorithm is used; for global reductions, the message is first transposed locally to change the data from row-major to column-major order, then the reduce-scatter on the row is invoked first, followed by the reduce-scatter on the column, so as to shrink the cross-super-node communication on the columns.
  • Figure 5 shows a schematic diagram of processing of reduction distribution type communication according to an embodiment of the present invention.
  • Hierarchical reduce-scatter eliminates redundant data within super nodes in the first, intra-row reduction step, so that the cross-super-node communication in the second step is minimal and free of redundancy.
  • the collective communication method of the fourth embodiment is not limited to graph computation using the 1.5-dimensional graph partitioning method of the first embodiment; it can be applied to subgraphs whose source vertices and target vertices are relatively high-degree vertices partitioned according to any other criterion.
  • this implementation is based on the idea of lock-free data distribution within a core group and is redesigned for the new many-core platform architecture: the slave cores are classified by function, and RMA (Remote Memory Access) operations are used to perform one round of data distribution within the core group before the memory accesses that write the external buckets.
  • the process is performed on a heterogeneous many-core processor.
  • the message generation part and the message rearrangement part use bucketing steps to distribute the messages to the target nodes;
  • the slave cores are divided by row into two categories, producers and consumers.
  • a producer fetches data from memory and performs the data processing of the current stage; if data is generated, it is placed, according to its target address, into the send buffer of the consumer that should process the message.
  • when a send buffer is full, it is passed to the corresponding consumer through RMA.
  • the consumer receives the message passed by RMA and performs subsequent operations that require mutual exclusion (such as appending to the end of the communication buffer in main memory, or updating vertex status).
  • the producers and the consumers are each numbered 0-31 in row-major order, so that consumers with the same number in different core groups are responsible for the same output bucket, and the core groups count the tail position of each bucket through an atomic fetch-and-add on main memory (implemented with atomic compare-and-swap). Testing shows that this atomic operation does not introduce significant overhead when the memory access granularity is sufficient.
  • the on-chip sorting module of the above embodiment directly performs message generation and forwarding into buckets, achieving slave-core acceleration and improving processing efficiency; it is a major innovation in this field, implementing this capability from scratch.
  • the collective communication method of the fifth embodiment is not limited to graph calculation using the 1.5-dimensional graph partitioning method of the first embodiment, but can be applied to other situations.
  • the message update part uses two-stage sorting to implement random updates of local vertices:
  • each core group processes messages from one bucket at a time, and delivers the message groups to each consumer core through on-chip sorting.
  • each consumer core is responsible for a different vertex range and then performs mutually exclusive updates to its vertices.
  • a distributed parallel computing system is provided with multiple super nodes; each super node is composed of multiple nodes, each node is a computing device with computing capability, and inter-node communication within a super node is faster than communication between nodes across super nodes.
  • the nodes are divided into grids, with one node in each grid.
  • the internal nodes of a super node are logically arranged in a row.
  • the distributed parallel computing system stores the graph and performs graph computation: data of a graph to be computed is obtained, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex; the operation represented by the corresponding second vertex receives as input the output of the operation represented by the corresponding first vertex.
  • the edge X→Y represents the edge from vertex X to vertex Y. The vertices are divided by degree into a first extremely-high-degree class, a second high-degree class and a third ordinary class, in which vertices with degree greater than a first threshold are marked E and placed in the first extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the second high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the third ordinary class.
  • the first threshold is greater than the second threshold; directed graphs are partitioned separately by in-degree and by out-degree.
  • the in-degree partition sets and the out-degree partition sets are denoted Xi and Xo respectively, where X is E, H or R; the vertices are evenly divided among the nodes by number, Ri and Ro are maintained by the nodes to which they belong, the Ho vertex state is maintained synchronously on the columns, the Hi vertex state is maintained synchronously on the columns and rows, and the Eo and Ei vertex states are maintained globally and synchronously. Eo and Ho are collectively called EHo, and Ei and Hi are collectively called EHi.
  • a computer-readable medium has instructions stored thereon which, when executed by a distributed parallel computing system, perform the following operations: obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex.
  • the edge X→Y represents the edge from vertex X to vertex Y; the vertices are divided by degree into classes, in which vertices with degree greater than a first threshold are marked E and placed in the first extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the second high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the third ordinary class.
  • the first threshold is greater than the second threshold; directed graphs are partitioned separately by in-degree and by out-degree, and the in-degree partition sets and the out-degree partition sets are denoted Xi and Xo respectively, where X is E, H or R.
  • a super node is composed of a predetermined number of nodes, each node being a computing device with computing capability, and communication between nodes within a super node is faster than communication between nodes across super nodes.
  • the nodes are divided into grids, with one node in each grid.
  • the internal nodes of a super node are logically arranged in a row.
  • the vertices are evenly divided into each node according to the number. Ri and Ro are maintained by the node to which they belong.
  • the Ho vertex state is maintained synchronously on the columns
  • the Hi vertex state is maintained synchronously on the columns and rows
  • the Eo and Ei vertex states are globally synchronously maintained. Eo and Ho are collectively called EHo
  • Ei and Hi are collectively called EHi.
  • vertices are divided into three categories according to degree, but they can be divided into more categories as needed, such as four, five or more categories, and each category can be further subdivided.
  • the graph calculation method is performed by a supercomputer.
  • this is only an example and not a limitation. It is clear to those skilled in the art that some methods can also be performed by a smaller cluster, for example.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A graph computing method based on distributed parallel computing, a distributed parallel computing system and a computer-readable medium are provided. The graph computing method includes: obtaining data of a graph to be computed, the graph including a plurality of vertices and edges; storing the graph in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing; and, for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, segmenting the forward graph by target vertex (the columns of the matrix) and the reverse graph by source vertex (the rows of the matrix), further dividing the subgraph into multiple segmented subgraph (SSG) components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range. The segmented subgraph method for high-degree vertices of this embodiment avoids or alleviates the traditional problem that the spatio-temporal locality of accesses to neighbor-vertex data is very poor because the vertex range of a single large graph is too large.

Description

Subgraph segmentation optimization method based on inter-core memory access and application thereof. Technical Field
The present invention relates generally to a three-class vertex-degree-aware 1.5-dimensional graph partitioning method and its applications, and more specifically to large-scale graph computing methods, distributed parallel computing systems and computer-readable media.
Background Art
A graph computing framework is a class of general-purpose programming frameworks used to support graph computing applications. On China's new-generation Sunway supercomputer, the new-generation "Shentu" ultra-large-scale graph computing framework is provided to support large-scale graph computing applications at full-machine scale with tens of trillions of vertices and three hundred trillion edges.
Graph computing applications are data-intensive applications that compute on data consisting of vertices and edges connecting pairs of vertices. Typical applications include PageRank for ranking the importance of web pages, breadth-first search (BFS) for graph traversal, label propagation for solving the weakly connected components (WCC) of a graph, single-source shortest paths (SSSP), and so on.
In the field of general-purpose graph computing, recent state-of-the-art work has mainly focused on single machines or relatively small clusters. Work on single-machine shared memory includes Ligra (non-patent literature [1]) and the same group's later GraphIt (non-patent literature [2]), which respectively introduced a simplified vertex-centric representation and a domain-specific language for graph computing, adopting NUMA-aware layouts, segmented subgraphs and other memory optimizations. Work on distributed memory includes our group's Gemini (non-patent literature [3]) and the somewhat later Gluon (also known as D-Galois, non-patent literature [4]) proposed by international peers, which address from different angles the parallel processing of graph computing problems on small clusters of several to several hundred nodes. Among out-of-core systems, Mosaic (non-patent literature [5]) is notable, completing the processing of a trillion-edge graph on a single node equipped with multiple SSDs.
Large-scale graph data must be partitioned across the computing nodes of a distributed graph computing system. There are usually two kinds of partitioning methods: one-dimensional and two-dimensional. The one-dimensional partitioning method is vertex-centric: the vertices of the data graph are evenly divided among different machines, and each vertex is stored together with all of its adjacent edges, so heavy vertices (vertices with high out-degree or in-degree) set up delegates on many nodes. The two-dimensional partitioning method is edge-based: unlike one-dimensional partitioning, it evenly distributes the edges (rather than the vertices) of the graph across the computing nodes to achieve load balance, which is equivalent to deploying delegates on the row and the column where each vertex is located. For large-scale graph computing, the graph data load is severely imbalanced: the numbers of edges at different vertices are severely imbalanced and the degrees of different vertices differ greatly. Both one-dimensional and two-dimensional partitioning then face scalability problems: one-dimensional vertex partitioning causes too many heavy vertices to deploy near-global delegates, and two-dimensional vertex partitioning causes too many vertices to deploy delegates on rows and columns.
In China, supercomputers widely adopt heterogeneous many-core architectures; in large computing clusters, each computing node has a large number of computing cores of different architectures. Load-imbalanced graph computing also poses a considerable challenge for domestic heterogeneous many-core architectures, and graph computing on domestic heterogeneous many-core systems needs to be optimized from the perspectives of computation, storage and communication performance.
The details of references [1]-[5] above are as follows:
[1] Julian Shun and Guy E. Blelloch. 2013. Ligra: a lightweight graph processing framework for shared memory. In Proceedings of the 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. 135–146.
[2] Yunming Zhang, Mengjiao Yang, Riyadh Baghdadi, Shoaib Kamil, Julian Shun, and Saman Amarasinghe. 2018. GraphIt: A high-performance graph DSL. Proceedings of the ACM on Programming Languages 2, OOPSLA (2018), 1–30.
[3] Xiaowei Zhu, Wenguang Chen, Weimin Zheng, and Xiaosong Ma. 2016. Gemini: A computation-centric distributed graph processing system. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 301–316.
[4] Roshan Dathathri, Gurbinder Gill, Loc Hoang, Hoang-Vu Dang, Alex Brooks, Nikoli Dryden, Marc Snir, and Keshav Pingali. 2018. Gluon: a communication-optimizing substrate for distributed heterogeneous graph analytics. SIGPLAN Not. 53, 4 (April 2018), 752–768.
[5] Steffen Maass, Changwoo Min, Sanidhya Kashyap, Woonhak Kang, Mohan Kumar, and Taesoo Kim. 2017. Mosaic: Processing a Trillion-Edge Graph on a Single Machine. In Proceedings of the Twelfth European Conference on Computer Systems (EuroSys '17). Association for Computing Machinery, New York, NY, USA, 527–543.
Summary of the Invention
In view of the above situation, the present invention is proposed.
According to one aspect of the present invention, a graph computing method based on distributed parallel computing is provided, including: obtaining data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y; storing the graph in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing; and, for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, segmenting the forward graph by target vertex, i.e., by the columns of the matrix, and the reverse graph by source vertex, i.e., by the rows of the matrix, further dividing a subgraph into multiple segmented subgraph (SSG) components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range.
Preferably, the graph computing method is performed on a general-purpose multi-core processor, and the predetermined range is a size that can be stored in the last level cache (LLC, Last Level Cache).
Preferably, the graph computation performs edge processing in pull mode on a heterogeneous many-core processor, using the remote access mechanism of the local data memory (LDM, Local Data Memory): the source vertices of a segmented subgraph SSG of the reverse graph are split and stored across the LDMs of a slave core array, and then, during graph traversal, a slave core fetches data via remote load (RLD, Remote Load) from the slave core to which the target vertex's data belongs.
Preferably, in the graph computing method, the source vertices whose degree exceeds the predetermined threshold are numbered according to the following rule, from high bits to low bits: segment number; cache line number; slave core number; slave-core-local number.
Preferably, the number of bits of the cache line number is calculated from the total size of the vertex data elements, to ensure that the local data memory LDM can hold all of them; the following six bits distinguish the owning slave core; the number of bits of the slave-core-local number is calculated from the minimum vertex data element size, to ensure DMA efficiency.
Preferably, in the graph computing method, each slave core first uses a strided direct memory access (DMA) operation to prefetch the vertex data of its segment according to the slave-core-number bits; during edge processing in pull mode, when a source vertex is read, LDM remote load (LDM RLD) is used to read the corresponding LDM address of the slave core to which the source vertex belongs.
According to another aspect of the present invention, a distributed parallel computing system is provided, having multiple super nodes, each super node being composed of multiple nodes, each node being a computing device with computing capability, communication between nodes within a super node being faster than communication between nodes across super nodes, the nodes being divided into a grid with one node per grid cell, and the internal nodes of a super node being logically arranged as one row. The distributed parallel computing system stores the graph and performs graph computation according to the following rules: obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y; store the graph in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing; for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, segment the forward graph by target vertex, i.e., by the columns of the matrix, and the reverse graph by source vertex, i.e., by the rows of the matrix, further dividing a subgraph into multiple segmented subgraph (SSG) components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range.
According to another aspect of the present invention, a computer-readable medium is provided, on which instructions are stored, the instructions, when executed by a distributed parallel computing system, performing the aforementioned graph computing method.
Through the segmented subgraph method for high-degree vertices of this embodiment, the graph (sparse matrix) is segmented by target vertex (the columns of the matrix), and a graph is divided into multiple SSG components, so that the target vertices in each SSG are limited to a certain range; this avoids or alleviates the traditional problem that the spatio-temporal locality of accesses to neighbor-vertex data is very poor because the vertex range of a single large graph is too large.
Brief Description of the Drawings
Figure 1 shows a schematic diagram of the architecture of a Sunway supercomputer with super nodes that can be applied to implement the graph data computation of the present invention, according to an embodiment of the present invention.
Figure 2 shows a schematic diagram of a graph partitioning strategy according to an embodiment of the present invention.
Figure 3 shows a schematic diagram of the distribution of subgraph edge counts at full-machine scale under the 1.5-dimensional partitioning of an embodiment of the present invention.
Figure 4 shows a schematic diagram of traditional graph storage in sparse matrix form (left part of Figure 4) and of the segmented subgraph storage optimization according to an embodiment of the present invention (right part of Figure 4).
Figure 5 shows a schematic diagram of the processing of reduce-scatter type communication according to an embodiment of the present invention.
Figure 6 schematically shows a flow chart of the process of classifying slave cores, distributing data within a core group, and writing external buckets according to an embodiment of the present invention.
Detailed Description
Explanations of some terms used herein are given below.
Node: each node is a computing device with computing capability.
Super node: a super node is composed of a predetermined number of nodes; communication between nodes within a super node is physically faster than communication between nodes across super nodes.
Maintaining vertex status: when related edge processing needs to be performed and the associated vertex does not belong to the local machine, the vertex status data is synchronized from the node to which the vertex belongs through network communication to establish a delegate, so that edge update operations can directly operate on the local delegate without accessing the remote original vertex.
Delegate of a vertex: when the associated vertex does not belong to the local machine, the copy established through communication and responsible for local updates of edges is called the delegate of the vertex. A delegate can serve outgoing edges and incoming edges: for outgoing edges, the source vertex synchronizes its current state to its delegate through communication; for incoming edges, the delegate first merges partial edge messages locally and then sends the merged message to the target vertex through communication.
Slave core: the computing processing element (CPE) in the heterogeneous many-core architecture.
By way of example and not limitation, the graph data herein is, for example, a social network, web page data, a knowledge graph, or other graphs recorded, stored and processed in the form of data. The scale of a graph can, for example, reach tens of trillions of vertices and hundreds of trillions of edges.
Large-scale graph data must be partitioned across the computing nodes of a distributed graph computing system.
Figure 1 shows a schematic diagram of the architecture of a Sunway supercomputer with super nodes that can be applied to implement the graph data computation of the present invention, according to an embodiment of the present invention. In the figure, MPE denotes the management processing element (the main core), NOC denotes the network on chip, CPE denotes the computing processing element (the slave core), LDM denotes the local data memory (Local Data Memory), NIC denotes the interconnection network interface card, and MC denotes the memory controller.
The graph includes a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex; the operation represented by the corresponding second vertex receives as input the output of the operation represented by the corresponding first vertex, and the edge X→Y represents the edge from vertex X to vertex Y.
As shown in the figure, the oversubscribed fat tree is an interconnection network topology. The super nodes are connected to one another and communicate through the central switching network of the oversubscribed fat tree, while the processors within each super node communicate over a local fast network; in contrast, communication across super nodes must traverse the oversubscribed fat tree, so the bandwidth is lower and the latency is higher.
I. First embodiment: 1.5-dimensional graph partitioning based on three degree levels
According to one embodiment of the present invention, a hybrid-dimension partitioning method based on three classes of vertex degree is provided. The vertex set is divided by degree into three classes: extremely-high-degree vertices (E for Extreme, e.g., degree greater than the total number of nodes), high-degree vertices (H for High, e.g., degree greater than the number of nodes within a super node) and regular vertices (R for Regular); R-class vertices are sometimes also called L-class vertices (L for Low, i.e., vertices with relatively low degree). For directed graphs, the partitioning is performed separately by in-degree and by out-degree, and the in-degree partition sets and the out-degree partition sets are denoted Xi and Xo respectively, where X is E, H or R. In this embodiment, a super node is composed of a predetermined number of nodes, communication between nodes within a super node is faster than communication between nodes across super nodes, the nodes are divided into a grid with one node per grid cell, and the internal nodes of a super node are logically arranged as one row (in other words, a row is mapped to a super node). The vertices are evenly divided among the nodes by number; Ri and Ro are maintained by the nodes to which they belong, the Ho vertex state is maintained synchronously on the columns, the Hi vertex state is maintained synchronously on the columns and rows, and the Eo and Ei vertex states are maintained globally and synchronously. Herein, for convenience of description, Eo and Ho are collectively called EHo, and Ei and Hi are collectively called EHi.
Regarding vertex classification, vertices with degree greater than a first threshold are marked E and placed in the extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the regular class, the first threshold being greater than the second threshold. The thresholds can be set as needed. For example, in one example the first threshold is the total number of nodes, i.e., vertices with degree greater than the total number of nodes belong to the extremely-high-degree class, and the second threshold is the number of nodes within a super node, i.e., vertices whose degree is greater than the number of nodes in a super node but less than the total number of nodes are placed in the high-degree class, while vertices whose degree is less than the number of nodes in a super node belong to the regular class. In another example, the first threshold is the total number of nodes multiplied by a constant factor, and the second threshold is the number of nodes within a super node multiplied by a second constant factor.
Different partitioning strategies can be regarded as different delegate strategies.
Delegates are set on all nodes for Eo and Ei vertices. These extremely-high-degree vertices are expected to be connected to a large number of edges whose other ends cover almost all nodes; for E vertices, creating delegates on all nodes helps reduce communication.
Delegates are arranged on the columns for Ho vertices, and messages are sent out along the rows; delegates are arranged on the rows and columns for Hi vertices, and, depending on the subgraph to which an edge belongs, the delegate on the row or on the column is selected to merge messages, which are then sent to the target vertex. During traversal, vertices with medium degree are generally connected to a relatively large number of edges whose other ends cover all super nodes. H vertices are clearly more numerous than E vertices, and arranging global delegates for H vertices as for E vertices would cause a large amount of unnecessary communication. Considering the hierarchical network topology, not creating delegates would cause data to be repeatedly communicated across super nodes, i.e., the same data would constantly be sent to or from different nodes within a super node, creating a bottleneck. Arranging delegates for these H vertices on the rows and columns helps eliminate repeated sending of messages to the same super node while avoiding costly global delegates.
No delegate is set for R vertices. Since these vertices have sufficiently low degree, there is little gain in assigning delegates to them, while time and space would be needed to manage the delegates. Therefore, delegates are not set for R vertices; instead, remote edges are used to connect them to other vertices. When these edges are accessed, a message must be passed per edge: when an Ro vertex wants to update a neighbor vertex, it sends its own number and the edge message through network communication to the target vertex or to the target's incoming-edge delegate to perform the update; when an Ri vertex is to receive an update from a neighbor vertex, the neighbor (source) vertex sends the source number and the edge message to the Ri vertex through network communication, either directly or via its outgoing-edge delegate, to perform the update.
Figure 2 shows a schematic diagram of a graph partitioning strategy according to an embodiment of the present invention.
In a preferred embodiment, as shown in Figure 2, edges are allocated and stored together with the corresponding vertex delegates according to the edge type. The notation X2Y in Figure 2 and the other figures denotes an edge from source vertex X to target vertex Y, with 2 standing for the English word "to".
For edges EHo→EHi, this type of edge is stored on the node located at the grid column of the source vertex's node and the grid row of the target vertex's node; that is, the subgraph internal to the high-degree vertices is partitioned two-dimensionally, minimizing communication traffic. See, for example, node 01 in Figure 2.
For edges Eo→Ri, this type of edge is stored on the node where the R vertex is located, to allow extremely-high-degree vertices to be maintained globally during computation: the vertex state is first sent to the delegate of the Eo vertex, and the Ri vertex on the local machine is then updated through the delegate, reducing the number of messages that need to be communicated. See, for example, the edge E->L2 shown in node 11 of Figure 2.
For edges Ho→Ri, this type of edge is stored on the node located at the grid column of the H vertex's node and the grid row of the R vertex's node, to allow high-degree vertices to be maintained on the columns during computation and their messages to be aggregated in communication across super nodes. When the message of a Ho vertex must be sent to multiple target vertices in the same super node, because the source vertex has an available delegate node at its own column and the target's row, one message is first sent across the super node to that available delegate node, and the delegate node then sends it to all target vertices through the fast network within the super node.
For edges Ro→Ei, this type of edge is stored on the node where the R vertex is located, to allow extremely-high-degree vertices to be maintained globally during computation and their communication messages to be aggregated; see, for example, node 11 in Figure 2.
For edges Ro→Hi, this type of edge is stored on the node where the R vertex is located, but the H vertex is stored according to its number on the column, so that during computation its update messages can be accumulated on the node at the row of R and the column of H, aggregating network communication across super nodes; see, for example, the L1-H communication in node 01 in Figure 2.
For edges Ro→Ri, the forward and reverse edges are stored at the source and target vertices respectively, and edge messages are delivered by row and column forwarding on the communication grid; see, for example, L1 and L2 in nodes 00 and 11 in Figure 2.
The above partitioning into three classes of vertices and six classes of edges can effectively resolve the strong load skew caused by power-law distributions: the edges associated with high-degree vertices, whether incoming or outgoing, are stored according to the two-dimensional partition or the one-dimensional partition on the opposite side as appropriate, keeping the number of edges globally even and thereby ensuring balance of both storage and load. Such a partition can solve the problem of extremely large vertices in extreme graphs.
After this classification, the processing of the H vertex set is much more efficient than processing high-degree vertices under a plain one-dimensional partition. For example, a Ho vertex that originally had to send a message per edge would send messages to multiple nodes in the same super node, crowding the top-level network bandwidth; after this optimization, the message is sent only once to each super node, saving the top-level network. At the same time, since the message is sent only once to one delegate node in the target super node, no large amount of useless communication or extra storage is introduced.
As an application example, under the 1.5-dimensional partitioning of an embodiment of the present invention, the distribution of subgraph edge counts at full-machine scale is shown in Figure 3 (plotted as cumulative distribution functions); the range (i.e., the relative deviation between the maximum and the minimum) does not exceed 5% for EH2EH and is only 0.35% for the other subgraphs.
As an implementation example, the E and H vertices are selected from all vertices and sorted by degree on each node; the remaining vertices are L vertices and keep their original vertex IDs. The original edge set is further split into six parts. Since E and H have delegates on columns and rows, the subgraph whose two ends are both E or H (sometimes called EH2EH, or EH->EH, herein) is partitioned two-dimensionally. The two subgraphs from E to L (E2L) and from L to E (L2E) are attached to the owner of L, like the heavy vertices and heavy delegates in 1D partitioning. Similarly, H2L is distributed over the column of H's owner, restricting message passing to within the row; L2H is stored only at L's owner, the opposite of H2L. Finally, L2L is the simplest component, handled as in the original 1D partitioning. The resulting partition is illustrated in Figure 2. Every subgraph is well balanced across nodes, even at full scale; the balance of this partitioning method is described later.
When |H| = 0, i.e., there are no high-degree vertices but only extremely-high-degree and regular vertices, the method of the embodiment of the present invention degenerates into a one-dimensional partition similar to one with heavy delegates; the difference is that in the present invention the edges between heavy vertices are partitioned two-dimensionally. Compared with one-dimensional partitioning with heavy delegates, the inventive method further separates high-degree vertices into two levels, producing a topology-aware data partition. It preserves enough heavy vertices to allow better early exit in direction optimization, and it avoids globally sharing the communication of all these heavy vertices by setting up delegates for H only on rows and columns.
When |L| = 0, i.e., when there are only extremely-high-degree and high-degree vertices, it degenerates into a two-dimensional partition with vertex reordering. Compared with 2D partitioning, our method avoids inefficient delegates for low-degree (L) vertices and removes the space limitation of 2D partitioning. In addition, the embodiment of the present invention constructs global delegates for the extremely heavy (E) vertices, thereby reducing communication.
II. Second embodiment: per-sub-iteration adaptive direction selection
According to one embodiment of the present invention, per-sub-iteration adaptive direction selection with support for fast exit is also proposed.
In many graph algorithms, the traversal direction of the graph, i.e., whether to "push" from source vertices to target vertices or to "pull" from target vertices to source vertices, greatly affects performance. For example:
1. In BFS (breadth-first search), if the wide middle iterations use the "push" mode, a large number of vertices are activated repeatedly, while if the narrow head and tail iterations use the "pull" mode, the very low proportion of active vertices leads to much useless computation; the choice must switch automatically between the two directions.
2. In PageRank, subgraphs that are traversed and reduced locally should use the "pull" mode to obtain optimal computing performance ("pull" performs random reads, "push" performs random writes), while subgraphs involving remote edge messages should use the "push" mode to minimize communication.
For this situation, an embodiment of per-sub-iteration adaptive direction selection is proposed. Specifically, both the "push" and the "pull" traversal modes are implemented, and the traversal mode is selected automatically according to the proportion of active vertices. More specifically, for the three types of locally computed edges EHo→EHi, Eo→Ri and Ro→Ei, the "pull" or "push" mode is selected according to whether the active proportion of the target vertices exceeds a configurable threshold; for the three types of locally computed edges Ho→Ri, Ro→Hi and Ro→Ri, the "pull" or "push" mode is selected according to whether the ratio of the active proportion of the target vertices to the active proportion of the source vertices exceeds a configurable threshold.
In breadth-first search, it is judged whether the iteration is at the head or tail or in the middle; when the iteration is at the head or tail, graph traversal is performed by "pushing" from the source vertices to the target vertices, and when the iteration is in the middle, graph traversal is performed by "pulling" from the target vertices to the source vertices; and
in PageRank, subgraphs that are traversed and reduced locally use the "pull" mode to obtain optimal computing performance, while subgraphs involving remote edge messages use the "push" mode to minimize communication.
In the pull mode, a status flag is set for each vertex to determine whether the vertex has reached the dead state, in which the vertex no longer responds to further messages; once the dead state is confirmed, the processing of the remaining messages is skipped.
For example, the following code provides a quick-exit interface to support performance optimization of traversal algorithms: by passing in the parameter fcond, the user can determine whether a message can still cause an update, so that the remaining messages are skipped.
template <typename Msg>
void edge_map(
    auto& vset_out, auto& vset_in,
    std::initializer_list<vertex_data_base*> src_data,
    std::initializer_list<vertex_data_base*> dst_data,
    auto fmsg, auto faggr, auto fapply,
    auto fcond,   // fcond: tells whether a message can still update the vertex (quick exit)
    Msg zero);
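To make the quick-exit idea concrete, the following self-contained, simplified stand-in (our own code, not the patent's edge_map implementation) shows how an fcond-style predicate lets pull-mode processing skip all remaining messages of a vertex once it has reached the dead state; the CSR layout and the BFS-style parent update are illustrative assumptions.

  #include <cstdint>
  #include <vector>

  struct ReverseCSR {                        // reverse graph: per target, its incoming sources
    std::vector<std::int64_t> offset;        // size = n_vertices + 1
    std::vector<std::int64_t> source;        // concatenated source vertex ids
  };

  // BFS-style pull step: parent[v] == -1 means "not yet visited".
  void pull_step(const ReverseCSR& g, const std::vector<char>& active,
                 std::vector<std::int64_t>& parent) {
    const std::size_t n = g.offset.size() - 1;
    for (std::size_t dst = 0; dst < n; ++dst) {
      auto fcond = [&](std::size_t v) { return parent[v] == -1; };  // can still be updated?
      if (!fcond(dst)) continue;                 // dead vertex: skip all of its messages
      for (std::int64_t e = g.offset[dst]; e < g.offset[dst + 1]; ++e) {
        std::int64_t src = g.source[e];
        if (!active[src]) continue;
        parent[dst] = src;                       // apply the first useful message
        break;                                   // early exit: remaining messages skipped
      }
    }
  }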
Through the above embodiment of per-sub-iteration adaptive direction selection with fast exit, the traversal direction can be switched adaptively, useless computation is avoided, communication volume is minimized, and graph computing performance is optimized.
III. Third embodiment: segmented-subgraph data structure optimization for the EH two-dimensionally partitioned subgraph
In power-law graphs, simulation shows that the EHo→EHi subgraph accounts for more than 60% of the edges of the whole graph and is the most important target of computational optimization. For this subgraph, an embodiment is proposed that introduces the segmented subgraph (SSG) data structure for optimization.
The compressed sparse row (CSR) format is the most common storage format for graphs and sparse matrices. In this format, all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing. Because the vertex range of a single large graph is too large, the spatio-temporal locality of accesses to neighbor-vertex data is very poor. The following segmented subgraph method is therefore proposed: for a subgraph (sparse matrix) in which both the source vertices and the target vertices are of high degree, the forward graph is segmented by target vertex (the columns of the matrix) and the reverse graph is segmented by source vertex (the rows of the matrix) (here the forward graph refers to the directed graph from the source vertices to the target vertices, and the reverse graph refers to the graph from the target vertices to the source vertices), dividing one graph into multiple SSG components, so that the target vertices in each SSG are limited to a certain range. As an example, on a typical general-purpose processor this range is usually chosen to be a size that can be stored in the last level cache (LLC, Last Level Cache). Figure 4 shows traditional graph storage in sparse matrix form (left part of Figure 4) and the segmented subgraph storage optimization according to an embodiment of the present invention (right part of Figure 4).
In one example, targeting the architectural characteristics of China's domestic heterogeneous many-core processors, the remote access mechanism of the local data memory (LDM, Local Data Memory) is chosen to replace fast random target accesses in the LLC. First, the target vertices of one SSG are split and stored across the LDMs of a slave core array; then, during graph traversal, a slave core fetches data via remote load (RLD) from the owning core (i.e., the slave core to which the target vertex's data belongs). Combined with the fact that the number of high-degree vertices is small, even at very large scales a relatively small number of segmented subgraphs suffices to realize all random accesses in the processing of the EHo→EHi subgraph through the LDM, without using discrete accesses to main memory (GLD, GST).
In one example, the target vertices of the first extremely-high-degree class and of the second high-degree class are numbered according to the following rule, from high bits to low bits: segment number, cache line number, slave core number and slave-core-local number. In a specific example, the number of bits of the cache line number is calculated from the total size of the vertex data elements, to ensure that the local data memory LDM can hold all of them; the following six bits distinguish the owning slave core (e.g., 2^6 = 64 slave cores in total); the number of bits of the slave-core-local number is calculated from the minimum vertex data element size, to ensure DMA efficiency. More specifically, each slave core can first use a strided DMA operation to prefetch the vertex data of its segment according to the slave-core-number bits; during edge processing in pull mode, when the target vertices EHi of the first extremely-high-degree class and the second high-degree class are read, LDM remote load (LDM RLD) is used to read the corresponding LDM address of the slave core to which the source vertex belongs.
Through the segmented subgraph method for EH vertices of this embodiment, the graph (sparse matrix) is segmented by target vertex (the columns of the matrix), and a graph is divided into multiple SSG components, so that the target vertices in each SSG are limited to a certain range; this avoids or alleviates the traditional problem that the spatio-temporal locality of accesses to neighbor-vertex data is very poor because the vertex range of a single large graph is too large.
The segmented subgraph method of the third embodiment is not limited to the 1.5-dimensional graph partitioning method of the first embodiment; it can be applied to any subgraph whose source vertices and target vertices are relatively high-degree vertices partitioned according to any other criterion.
IV. Fourth embodiment: collective communication optimization for high-degree vertices
In vertex-centric graph computing programming models, the reduction of vertex data is a user-provided custom function and cannot be mapped to an MPI reduction operator. The reduction of high-degree vertex data and the computation it requires leave considerable room for loop fusion, and the amount of memory access involved is also significant. This embodiment uses the slave core group to optimize this communication process, and the optimization can be embedded in the processing of the EH subgraphs.
Specifically, in this embodiment, for reduce-scatter type communication of the target vertices EHi of the first extremely-high-degree class and the second high-degree class, the following optimization is performed: for reductions on rows and columns, a ring algorithm is used; for global reductions, the message is first transposed locally so that the data changes from row-major to column-major order, then the reduce-scatter on the row is invoked first, followed by the reduce-scatter on the column, so as to shrink the cross-super-node communication on the columns. Figure 5 shows a schematic diagram of the processing of reduce-scatter type communication according to an embodiment of the present invention.
Through the above collective communication optimization, the cross-super-node communication on the columns can be reduced. The hierarchical reduce-scatter eliminates redundant data within super nodes in the first, intra-row reduction step, so that the cross-super-node communication in the second step is minimal and free of redundancy.
The collective communication method of the fourth embodiment is not limited to graph computation using the 1.5-dimensional graph partitioning method of the first embodiment; it can be applied to subgraphs whose source vertices and target vertices are relatively high-degree vertices partitioned according to any other criterion.
V. Fifth embodiment: RMA-based on-chip sorting core
In message communication, because the generated messages are unordered, they must first be put into buckets by target node before Alltoallv (variable-length all-to-all) communication can take place. In addition, the message generation part also needs a bucketing step to distribute messages to their target nodes. A general-purpose core module that bucket-sorts messages is therefore needed.
In bucket sorting, simply parallelizing across the slave cores would, on the one hand, cause conflicts when multiple slave cores write the same bucket and introduce atomic operation overhead, and, on the other hand, having each slave core handle too many buckets would shrink the LDM buffers and reduce the efficiency of memory bandwidth utilization. Taking both points into account, this embodiment builds on the idea of lock-free data distribution within a core group and provides a completely new design on the new many-core platform architecture: the slave cores are classified by function, and RMA (Remote Memory Access) operations are used to perform one round of data distribution within the core group before the memory accesses that write the external buckets. Figure 6 schematically shows a flow chart of the process of classifying slave cores, distributing data within a core group, and writing external buckets according to an embodiment of the present invention; in the figure, the CPE core group is arranged as an 8x8 array.
In one example, the process is performed on a heterogeneous many-core processor. In the processing of edge messages, the message generation part and the message rearrangement part use a bucketing step to distribute messages to their target nodes.
The slave cores are divided by row into two categories, producers and consumers. A producer fetches data from memory and performs the data processing of the current stage; if data is generated, it is placed, according to its target address, into the send buffer of the consumer that should process the message, and when the buffer is full it is passed to the corresponding consumer through RMA. A consumer receives the messages passed by RMA and performs the subsequent operations that require mutual exclusion (for example, appending to the tail of a communication buffer in main memory, or updating vertex states).
So that, for example, the six core groups of one computing node of the Sunway supercomputer architecture can cooperate in on-chip sorting, within each core group the producers and the consumers are each numbered 0-31 in row-major order, so that consumers with the same number in different core groups are responsible for the same output bucket, and the core groups count the tail position of each bucket through an atomic fetch-and-add on main memory (implemented with atomic compare-and-swap). Testing shows that this atomic operation does not introduce significant overhead when the memory access granularity is sufficient.
By using the on-chip sorting module of the above embodiment to perform message generation and forwarding into buckets directly, slave-core acceleration is achieved and processing efficiency is improved; this is a significant innovation in this field, implementing such a capability from scratch.
The collective communication method of the fifth embodiment is not limited to graph computation using the 1.5-dimensional graph partitioning method of the first embodiment, but can be applied to other situations.
VI. Sixth embodiment: two-stage update of low-degree vertices
On a heterogeneous many-core processor, in the processing of edge messages, the message update part uses two-stage sorting to implement random updates of local vertices.
First, all pending update operations are bucket-sorted: vertices are clustered into buckets by number, and all messages are sorted into the buckets, ensuring that the number of vertices per bucket is small enough for the vertex information to fit in the LDM of one core group; this is called coarse-grained sorting. In the next step, fine-grained sorting and updating, each core group processes the messages of one bucket at a time, the messages are grouped and passed to the consumer cores through on-chip sorting, each consumer core is responsible for a different vertex range, and the vertices are then updated under mutual exclusion.
Through the above two-stage update of low-degree vertices, scattered random accesses to a large range of target vertices are avoided and the throughput of vertex updates is increased; at the same time, performing the two stages one after the other, rather than composing them into a concurrently running pipeline, makes more effective use of all slave core groups and avoids load imbalance between pipeline stages.
The above embodiments can also be combined and/or implemented in other ways, for example as a distributed parallel computing system, a computer-readable storage medium, or computer programs in various languages.
According to another embodiment, a distributed parallel computing system is provided, having multiple super nodes, each super node being composed of multiple nodes, each node being a computing device with computing capability, communication between nodes within a super node being faster than communication between nodes across super nodes, the nodes being divided into a grid with one node per grid cell, and the internal nodes of a super node being logically arranged as one row. The distributed parallel computing system stores the graph and performs graph computation according to the following rules: obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y; divide the vertices by degree into a first extremely-high-degree class, a second high-degree class and a third ordinary class, in which vertices with degree greater than a first threshold are marked E and placed in the first extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the second high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the third ordinary class, the first threshold being greater than the second threshold; for directed graphs, partition separately by in-degree and by out-degree, the in-degree partition sets and the out-degree partition sets being denoted Xi and Xo respectively, where X is E, H or R; divide the vertices evenly among the nodes by number, with Ri and Ro maintained by the nodes to which they belong, the Ho vertex state maintained synchronously on the columns, the Hi vertex state maintained synchronously on the columns and rows, and the Eo and Ei vertex states maintained globally and synchronously, Eo and Ho being collectively called EHo and Ei and Hi being collectively called EHi.
According to another embodiment, a computer-readable medium is provided, on which instructions are stored, the instructions, when executed by a distributed parallel computing system, performing the following operations: obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y; divide the vertices by degree into a first extremely-high-degree class, a second high-degree class and a third ordinary class, in which vertices with degree greater than a first threshold are marked E and placed in the first extremely-high-degree class, vertices with degree between the first threshold and a second threshold are marked H and placed in the second high-degree class, and vertices with degree lower than the second threshold are marked R and placed in the third ordinary class, the first threshold being greater than the second threshold; for directed graphs, partition separately by in-degree and by out-degree, the in-degree partition sets and the out-degree partition sets being denoted Xi and Xo respectively, where X is E, H or R; meanwhile, a super node is composed of a predetermined number of nodes, each node is a computing device with computing capability, communication between nodes within a super node is faster than communication between nodes across super nodes, the nodes are divided into a grid with one node per grid cell, the internal nodes of a super node are logically arranged as one row, the vertices are evenly divided among the nodes by number, Ri and Ro are maintained by the nodes to which they belong, the Ho vertex state is maintained synchronously on the columns, the Hi vertex state is maintained synchronously on the columns and rows, the Eo and Ei vertex states are maintained globally and synchronously, Eo and Ho are collectively called EHo, and Ei and Hi are collectively called EHi.
In the foregoing description the vertices are divided into three classes by degree, but they can be divided into more classes as needed, for example four, five or more classes, and each class can be further subdivided.
In the foregoing description, as an example, the graph computing method is described as being performed by a supercomputer, but this is only an example and not a limitation; it is clear to those skilled in the art that some of the methods can also be performed, for example, by a smaller cluster.
Although this specification contains many specific implementation details, these should not be construed as limiting the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown, or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments; it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

  1. A graph computing method based on distributed parallel computing, comprising:
    obtaining data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y;
    storing the graph in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing;
    for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, segmenting the forward graph by target vertex, i.e., by the columns of the matrix, and the reverse graph by source vertex, i.e., by the rows of the matrix, further dividing one subgraph into multiple segmented subgraph SSG components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range.
  2. The graph computing method according to claim 1, wherein the method is performed on a general-purpose multi-core processor, and the predetermined range is a size that can be stored in the last level cache LLC (Last Level Cache).
  3. The ultra-large-scale graph computing method according to claim 1, wherein the ultra-large-scale graph computation performs edge processing in pull mode on a heterogeneous many-core processor, using the remote access mechanism of the local data memory LDM (Local Data Memory): the source vertices of a segmented subgraph SSG of the reverse graph are split and stored across the LDMs of a slave core array, and then, during graph traversal, a slave core fetches data via remote load RLD (Remote Load) from the slave core to which the target vertex's data belongs.
  4. The graph computing method according to claim 3, wherein the source vertices whose degree exceeds the predetermined threshold are numbered according to the following rule:
    from high bits to low bits: segment number; cache line number; slave core number; slave-core-local number.
  5. The graph computing method according to claim 4, wherein the number of bits of the cache line number is calculated from the total size of the vertex data elements, to ensure that the local data memory LDM can hold all of them; the following six bits distinguish the owning slave core; the number of bits of the slave-core-local number is calculated from the minimum vertex data element size, to ensure DMA efficiency.
  6. The graph computing method according to claim 5, wherein each slave core first uses a strided direct memory access DMA operation to prefetch the vertex data of its segment according to the slave-core-number bits; during edge processing in pull mode, when a source vertex is read, local data memory remote load (LDM RLD) is used to read the corresponding LDM address of the slave core to which the source vertex belongs.
  7. A distributed parallel computing system, having multiple super nodes, each super node being composed of multiple nodes, each node being a computing device with computing capability, communication between nodes within a super node being faster than communication between nodes across super nodes, the nodes being divided into a grid with one node per grid cell, and the internal nodes of a super node being logically arranged as one row, the distributed parallel computing system storing the graph and performing graph computation according to the following rules:
    obtain data of a graph to be computed, the graph including a plurality of vertices and edges, wherein each vertex represents a corresponding operation and each edge connects a corresponding first vertex to a corresponding second vertex, the operation represented by the corresponding second vertex receiving as input the output of the operation represented by the corresponding first vertex, and the edge X→Y representing the edge from vertex X to vertex Y;
    store the graph in compressed sparse row format as a sparse matrix, in which all adjacent edges of the same vertex are stored contiguously, supplemented by an offset array to support indexing;
    for subgraphs in which the degrees of both the source vertices and the target vertices are greater than a predetermined threshold, segment the forward graph by target vertex, i.e., by the columns of the matrix, and the reverse graph by source vertex, i.e., by the rows of the matrix, further dividing one subgraph into multiple segmented subgraph SSG components, so that the target vertices in each segmented subgraph SSG are limited to a predetermined range.
  8. A computer-readable medium on which instructions are stored, the instructions, when executed by a distributed parallel computing system, performing the method of any one of claims 1 to 6.
PCT/CN2022/114568 2022-03-31 2022-08-24 基于核间存储访问的子图分段优化方法及应用 WO2023184836A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210346305.2A CN114756483A (zh) 2022-03-31 2022-03-31 基于核间存储访问的子图分段优化方法及应用
CN202210346305.2 2022-03-31

Publications (1)

Publication Number Publication Date
WO2023184836A1 true WO2023184836A1 (zh) 2023-10-05

Family

ID=82328370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114568 WO2023184836A1 (zh) 2022-03-31 2022-08-24 基于核间存储访问的子图分段优化方法及应用

Country Status (2)

Country Link
CN (1) CN114756483A (zh)
WO (1) WO2023184836A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370619A (zh) * 2023-12-04 2024-01-09 支付宝(杭州)信息技术有限公司 Graph shard storage and subgraph sampling method and apparatus
CN117785480A (zh) * 2024-02-07 2024-03-29 北京壁仞科技开发有限公司 Processor, reduction computation method and electronic device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115525793A (zh) * 2021-06-24 2022-12-27 平头哥(上海)半导体技术有限公司 Computer-implemented method, system and storage medium
CN114756483A (zh) * 2022-03-31 2022-07-15 深圳清华大学研究院 Subgraph segmentation optimization method based on inter-core memory access and application thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522428A (zh) * 2018-09-17 2019-03-26 华中科技大学 External-memory access method for a graph computing system based on index positioning
CN109740023A (zh) * 2019-01-03 2019-05-10 中国人民解放军国防科技大学 Sparse matrix compressed storage method based on bidirectional bitmaps
US20190163704A1 (en) * 2017-02-27 2019-05-30 Oracle International Corporation In-memory graph analytics system that allows memory and performance trade-off between graph mutation and graph traversal
CN113419862A (zh) * 2021-07-02 2021-09-21 北京睿芯高通量科技有限公司 Graph data partitioning optimization method for GPU card clusters
CN114756483A (zh) * 2022-03-31 2022-07-15 深圳清华大学研究院 Subgraph segmentation optimization method based on inter-core memory access and application thereof


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117370619A (zh) * 2023-12-04 2024-01-09 支付宝(杭州)信息技术有限公司 Graph shard storage and subgraph sampling method and apparatus
CN117370619B (zh) * 2023-12-04 2024-02-23 支付宝(杭州)信息技术有限公司 Graph shard storage and subgraph sampling method and apparatus
CN117785480A (zh) * 2024-02-07 2024-03-29 北京壁仞科技开发有限公司 Processor, reduction computation method and electronic device
CN117785480B (zh) * 2024-02-07 2024-04-26 北京壁仞科技开发有限公司 Processor, reduction computation method and electronic device

Also Published As

Publication number Publication date
CN114756483A (zh) 2022-07-15

Similar Documents

Publication Publication Date Title
WO2023184835A1 (zh) Three-class vertex-degree-aware 1.5-dimensional graph partitioning method and application
WO2023184836A1 (zh) Subgraph segmentation optimization method based on inter-core memory access and application
Zhang et al. GraphP: Reducing communication for PIM-based graph processing with efficient data partition
US8117288B2 (en) Optimizing layout of an application on a massively parallel supercomputer
AU2016371481B2 (en) Processing data using dynamic partitioning
Ma et al. Process distance-aware adaptive MPI collective communications
Mehdipour et al. Energy-efficient big data analytics in datacenters
JP2014525640A (ja) 並列処理開発環境の拡張
CN111630505A (zh) 深度学习加速器系统及其方法
WO2024051388A1 (zh) 一种基于禁忌搜索算法的神经网络片上映射方法和装置
WO2023184834A1 (zh) Optimization method for collective communication of global high-degree vertices and application
Aridor et al. Resource allocation and utilization in the Blue Gene/L supercomputer
Chen et al. Rubik: A hierarchical architecture for efficient graph learning
US8995789B2 (en) Efficient collaging of a large image
Mirsadeghi et al. PTRAM: A parallel topology-and routing-aware mapping framework for large-scale HPC systems
US10862755B2 (en) High-performance data repartitioning for cloud-scale clusters
Kang et al. Large scale complex network analysis using the hybrid combination of a MapReduce cluster and a highly multithreaded system
US5517654A (en) System for parallel implementation of combinatorial optimization in a multiprocessor network for generating search graphs for solving enumerative problems
Faraji et al. Exploiting heterogeneity of communication channels for efficient GPU selection on multi-GPU nodes
CN114880271A (zh) On-chip sorting method based on an on-chip communication mechanism and application
Sambhus et al. Reuse-aware partitioning of dataflow graphs on a tightly-coupled CGRA
Alotaibi Topology-Aware Mapping Techniques for Heterogeneous HPC Systems: A Systematic Survey
Li et al. GraphRing: an HMC-ring based graph processing framework with optimized data movement
Sze et al. Designing DNN Accelerators
Li et al. Concurrent hybrid breadth-first-search on distributed powergraph for skewed graphs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934656

Country of ref document: EP

Kind code of ref document: A1