WO2023066198A1 - Distributed data processing - Google Patents

Distributed data processing

Info

Publication number
WO2023066198A1
Authority
WO
WIPO (PCT)
Prior art keywords
vertex
target
data
active
distributed node
Prior art date
Application number
PCT/CN2022/125675
Other languages
English (en)
French (fr)
Inventor
覃伟
于纪平
朱晓伟
陈文光
Original Assignee
支付宝(杭州)信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 支付宝(杭州)信息技术有限公司
Publication of WO2023066198A1
Priority to US18/544,666 (published as US20240134881A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 16/278: Data partitioning, e.g. horizontal or vertical partitioning
    • G06F 16/273: Asynchronous replication or reconciliation
    • G06F 16/23: Updating
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2471: Distributed queries
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals

Definitions

  • This document relates to the technical field of data processing, and in particular to a distributed data processing method, device and equipment.
  • Because single-machine data processing does not involve network communication or cross-machine data interaction, it is simple and convenient and offers high performance; for this reason it has become the current mainstream way of processing data.
  • However, the data scale in all walks of life is growing rapidly, and data processing requires ever more resources.
  • With their limited resources, the current mainstream single-machine data processing methods can no longer meet the processing needs of big data.
  • An embodiment of this specification provides a distributed data processing method.
  • The method includes: determining an active vertex set currently participating in data processing in the target graph data, where the target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices that have an association relationship; if any active vertex in the active vertex set is stored in the external memory of the first distributed node, determining, among a plurality of preset data processing modes, a target data processing mode that matches the active vertex set; determining, according to the target data processing mode, a vertex to be updated that has the association relationship with the active vertex; and, according to the first data of the active vertex in the external memory, sending a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • An embodiment of this specification provides a distributed data processing device.
  • the device includes a first determining module, which determines the set of active vertices currently participating in data processing in the target graph data.
  • the target graph data is generated in advance based on event information of a plurality of associated target events
  • the event information includes a plurality of event elements corresponding to the target event
  • each vertex of the target graph data corresponds to one of the event elements
  • each edge of the target graph data connects the vertices with an association relationship.
  • The device also includes a second determination module which, if any active vertex in the active vertex set is stored in the external memory of the first distributed node, determines, among a plurality of preset data processing modes, a target data processing mode that matches the active vertex set.
  • the device further includes a third determining module, which determines the vertex to be updated that has the association relationship with any active vertex according to the target data processing mode.
  • The device also includes a sending module, which sends a first update message to the target distributed node where the vertex to be updated is located according to the first data of the active vertex in the external memory, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • An embodiment of this specification provides a distributed data processing device.
  • the device includes a processor.
  • the device also includes a memory arranged to store computer-executable instructions.
  • the computer-executable instructions when executed, cause the processor to determine a set of active vertices in target graph data currently participating in data processing.
  • The target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices that have an association relationship. If any active vertex in the active vertex set is stored in the external memory of the first distributed node, a target data processing mode that matches the active vertex set is determined among a plurality of preset data processing modes; a vertex to be updated that has the association relationship with the active vertex is determined according to the target data processing mode; and, according to the first data of the active vertex in the external memory, a first update message is sent to the target distributed node where the vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • An embodiment of this specification provides a storage medium.
  • the storage medium is used to store computer-executable instructions.
  • the computer-executable instructions when executed by the processor, determine a set of active vertices in the target graph data that are currently participating in data processing.
  • The target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices that have an association relationship. If any active vertex in the active vertex set is stored in the external memory of the first distributed node, a target data processing mode that matches the active vertex set is determined among a plurality of preset data processing modes; a vertex to be updated that has the association relationship with the active vertex is determined according to the target data processing mode; and, according to the first data of the active vertex in the external memory, a first update message is sent to the target distributed node where the vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • FIG. 1 is a schematic diagram of graph data provided by an embodiment of this specification;
  • FIG. 2 is a schematic diagram of an application scenario of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 3 is a first schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 4a and FIG. 4b are schematic diagrams of graph data corresponding to fragmented data provided by an embodiment of this specification;
  • FIG. 5 is a second schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 6 is a third schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 7 is a fourth schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 8 is a fifth schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 9 is a sixth schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 10 is a seventh schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 11 is an eighth schematic flowchart of a distributed data processing method provided by an embodiment of this specification;
  • FIG. 12 is a schematic diagram of the module composition of a distributed data processing device provided by an embodiment of this specification;
  • FIG. 13 is a schematic structural diagram of a distributed data processing device provided by an embodiment of this specification.
  • In the embodiments of this specification, big data is converted into the form of a graph for processing, so that data with association relationships can be correlated and processed centrally.
  • corresponding graph data is generated according to event information of multiple associated events.
  • the graph data includes multiple vertices (vertex) and multiple edges (edge);
  • The event information includes multiple event elements of the corresponding event; each vertex of the graph data corresponds to an event element, and each edge of the graph data connects two vertices that have an association relationship; two vertices connected by an edge are neighbor vertices.
  • A graph may be denoted G = (V, E), where V is the vertex set and E is the edge set; u and v represent any two vertices in V, namely u, v ∈ V.
  • Edges can be directed or undirected.
  • An edge with a direction can be called a directed edge, and an edge without a direction can be called an undirected edge.
  • a graph that includes directed edges may be called a directed graph, and a graph that includes undirected edges may be called an undirected graph.
  • A directed edge points from the in-point to the out-point, where the in-point can also be called the source vertex (source, or src for short) and the out-point can also be called the destination vertex (destination, or dst for short); the terms in-point and out-point are used in the following description.
  • For the in-point, a directed edge is one of its outgoing edges (out-edges); for the out-point, the directed edge is one of its incoming edges (in-edges).
  • For example, for a directed edge e from vertex u to vertex v, edge e is an outgoing edge of vertex u and an incoming edge of vertex v.
  • Edge e can also be an undirected edge between vertex u and vertex v; such an undirected edge can be transformed into two directed edges e1 and e2, one in each direction.
  • In the embodiments of this specification, graph data including directed edges is used for illustration; for the processing of graph data including undirected edges, refer to the processing of graph data including directed edges.
  • An embodiment of this specification provides a schematic diagram of graph data including directed edges, as shown in FIG. 1. As can be seen from FIG. 1, the graph data includes 10 vertices and 16 edges; for ease of description, the vertices are distinguished by serial numbers in this specification.
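  • For readers who prefer a concrete illustration, the short Python sketch below (not part of the patent) builds a small directed graph from (in-point, out-point) pairs and derives the outgoing-edge and incoming-edge sets of each vertex; the edge list is a placeholder rather than the exact 10-vertex, 16-edge layout of FIG. 1.

```python
from collections import defaultdict

# A directed edge is written (in_point, out_point), i.e. (src, dst).
# Placeholder edges only; the actual layout of FIG. 1 is defined by the figure.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 3), (2, 4)]

out_edges = defaultdict(list)   # outgoing edges, keyed by the in-point (src)
in_edges = defaultdict(list)    # incoming edges, keyed by the out-point (dst)

for src, dst in edges:
    out_edges[src].append((src, dst))
    in_edges[dst].append((src, dst))

# Two vertices connected by an edge are neighbor vertices.
print(out_edges[0])   # outgoing edges of vertex 0
print(in_edges[4])    # incoming edges of vertex 4
```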
  • FIG. 2 is a schematic diagram of an application scenario of a distributed data processing method provided by an embodiment of this specification.
  • the scenario includes: n distributed nodes in a distributed system; where n is an integer greater than 1;
  • The distributed nodes can be terminal devices such as mobile phones, tablet computers, desktop computers, and portable notebook computers (only desktop computers are shown in FIG. 2); they can also be servers.
  • the target graph data is generated in advance based on event information of multiple associated target events.
  • The target graph data includes multiple vertices and multiple edges; the event information includes multiple event elements of the corresponding target event; each vertex of the target graph data corresponds to an event element, and each edge of the target graph data connects vertices that have an association relationship.
  • the distributed node 0 in the distributed system is referred to as the first distributed node.
  • the first distributed node iteratively determines the set of active vertices currently participating in data processing in the target graph data.
  • If any active vertex in the active vertex set is stored in the external memory of the first distributed node, the first distributed node determines, among the preset multiple data processing modes, the target data processing mode that matches the active vertex set; the first distributed node determines, according to the target data processing mode, a vertex to be updated that has an association relationship with that active vertex; and, according to the first data of that active vertex in the external memory of the first distributed node, it sends a first update message to the target distributed node where the determined vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • the first distributed node is not limited to the distributed node 0, and it may be any distributed node in the distributed system.
  • Each distributed node in the distributed system processes the target graph data according to the above-mentioned data processing method of the first distributed node.
  • In this solution, the target graph data is generated based on the event information of multiple associated target events, and data processing is performed on the target graph data; this not only effectively correlates related data within big data and processes it centrally, but also improves data processing efficiency.
  • Moreover, the distributed data processing system supports multiple data processing modes. By determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved and data processing efficiency is further increased.
  • FIG. 3 is a schematic flowchart of a distributed data processing method provided by an embodiment of this specification.
  • the method in FIG. 3 can be executed by the first distributed node in FIG. 2 .
  • the method includes the following steps.
  • Step S102: determine the active vertex set currently participating in data processing in the target graph data. The target graph data is generated in advance based on event information of multiple associated target events, and the event information includes multiple event elements of the corresponding target event; each vertex of the target graph data corresponds to an event element, and each edge of the target graph data connects vertices with an association relationship. Because there is a certain dependency between the event elements of multiple associated target events, at least one event element changes based on at least one other event element.
  • The vertices corresponding to that other at least one event element are referred to as active vertices participating in data processing, and the active vertices form an active vertex set. It can be understood that the active vertex set may include one or more active vertices.
  • one processing of the target graph data usually includes multiple rounds of iterations.
  • In the first round of iteration, the user can operate a distributed node to input the active vertex set; when that distributed node obtains the active vertex set input by the user, it sends a message about the active vertex set to each distributed node, so that each distributed node determines the active vertex set indicated by the message as the active vertex set currently participating in data processing in the target graph data.
  • the active vertex set can be automatically updated according to the processing results.
  • the update process of the active vertex set please refer to the related description below.
  • one processing of the target graph data may only include one iteration. For example, after the first iteration, if it is determined that there is no active vertex set, it is determined that the processing of the target graph data ends.
  • the target event can be set as required in practical applications, which is not specifically limited in this specification. It can be understood that the event information and event elements may vary from target event to event.
  • the target event may be a resource transfer event.
  • Correspondingly, the event elements may be a resource transfer-out account, a resource transfer-in account, and so on.
  • The event information may include the account information of the resource transfer-out account, the account information of the resource transfer-in account, and so on, and an edge can represent the transfer path of a resource.
  • the target event may be a citation event of an academic document (such as a paper, a patent document, etc.), and correspondingly, the event element may be a citing academic document, a cited academic document, etc., and the event information may include the Document information of citing academic documents, document information of cited academic documents, etc.; edges can represent the citation relationship between different academic documents.
  • Alternatively, the target event may be a product flow event; correspondingly, the event elements may include the product outflow place, the product inflow place, etc., the event information may include information on the product outflow place, information on the product inflow place, etc., and edges can represent the circulation path of products.
  • In addition, each event element can be divided into multiple levels.
  • For example, the event element "product outflow place" can include a first-level product outflow place (such as the manufacturer of the product), a second-level product outflow place (such as a provincial warehouse), a third-level product outflow place (such as a city warehouse), a fourth-level product outflow place (such as a district warehouse in a city), and a fifth-level product outflow place (such as a point of sale in a district), and so on.
  • Step S104 if any active vertex in the active vertex set is stored in the external storage of the first distributed node, then determine a target data processing mode matching the active vertex set among the preset multiple data processing modes.
  • In the embodiments of this specification, the target graph data is divided in advance so that the vertices and edges of the target graph data are stored dispersedly in the external memory of each distributed node; when the first distributed node determines that any active vertex in the active vertex set is stored in its external memory, it performs the subsequent processing.
  • For example, suppose the active vertex set includes vertex 1, vertex 3, and vertex 4, and the external memory of the first distributed node stores vertex 3 and vertex 5; the first distributed node then continues with the subsequent operations to process the update of data related to vertex 3.
  • Step S106 determine the vertex to be updated that has an association relationship with any active vertex.
  • Step S108: according to the first data of the active vertex in the external memory, send a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • In this way, the target graph data is generated based on the event information of multiple associated target events, and data processing is performed on the target graph data; this not only effectively correlates related data within big data and processes it centrally, avoiding data omission, but also improves data processing efficiency.
  • Moreover, the distributed data processing system supports multiple data processing modes. By determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved and data processing efficiency is further increased.
  • In one implementation, a specified device can divide the target graph data according to the preset data division method and send the fragmented data obtained by the division to the corresponding distributed nodes in the distributed data processing system.
  • the following steps may also be included before step S102.
  • Receive the fragmented data and the attribute information of the target graph data sent by the specified device, where the fragmented data is obtained by the specified device dividing the target graph data according to the preset data division method; and store the received fragmented data and attribute information in the external memory of the first distributed node.
  • In another implementation, pre-processing authority is pre-allocated to a distributed node in the distributed data processing system, and the distributed node with the pre-processing authority divides the target graph data and sends the result to each distributed node in the distributed data processing system. Correspondingly, the following steps may also be included before step S102.
  • If it is determined that the first distributed node has the pre-processing authority, the target graph data is divided according to the preset data division method to obtain the fragmented data to be allocated to each distributed node in the distributed system; the fragmented data and the attribute information of the target graph data are sent to each distributed node in the distributed system, so that each distributed node saves the received fragmented data and attribute information in its external memory.
  • When the first distributed node receives the fragmented data and attribute information of the target graph data sent to it, it saves the received fragmented data and attribute information in the external memory of the first distributed node.
  • The above fragmented data may include the divided vertex subset, the incoming edge set corresponding to the incoming edges of each vertex in the vertex subset, the outgoing edge set corresponding to the outgoing edges of each vertex in the vertex subset, the primary backup of each vertex in the vertex subset, and the mirror backup of vertices that form directed edges with the vertices in the vertex subset.
  • The primary backup may include the element data of the corresponding vertex, where the element data varies with the event element.
  • For example, when the event element is a resource transfer-out account, the element data may include the total amount of resources in the resource transfer-out account, the resource transfer-out data for each outgoing transfer, and the resource transfer-in data when the account is used as a resource transfer-in account, and so on.
  • When the event element is a product outflow place, the element data may include the total amount of products at the product outflow place, the outflow quantity of the products, the inflow places of the products, and so on.
  • the attribute information of the target graph data may include the first number of edges in the target graph data, the second number of outgoing edges of each vertex in the target graph data, and the like.
  • the graph data shown in FIG. 1 is taken as the target graph data, and the distributed system includes distributed node 0 and distributed node 1 as an example for illustration.
  • the preset data partitioning method may be a continuous block partitioning method, and vertex 0 to vertex 4 are assigned to distributed node 0, and vertices 5 to 9 are assigned to distributed node 1.
  • For example, the fragmented data stored in the external memory of distributed node 0 may include the divided vertex subset V_0 = {0,1,2,3,4}, the outgoing edge set Eout_0 = {(0,1),(0,2),(1,3),(1,4),(2,3),(2,4),(5,4)}, and the incoming edge set Ein_0 = {(1,0),(2,0),(5,0),(6,0),(3,1),(4,1),(3,2),(4,2),(7,2),(9,4)}.
  • The fragmented data stored in the external memory of distributed node 1 may include the divided vertex subset V_1 = {5,6,7,8,9}, the outgoing edge set Eout_1 = {(0,5),(0,6),(2,7),(4,9),(5,7),(5,8),(6,7),(6,8),(7,9)}, and the incoming edge set Ein_1 = {(4,5),(7,5),(8,5),(7,6),(8,6),(9,7)}.
  • In the outgoing edge set, each edge may be expressed with the in-point first and the out-point second; in the incoming edge set, each edge may be expressed with the out-point first and the in-point second; it should be understood that both representations denote an edge directed from the in-point to the out-point.
  • The fragmented data stored in the external memory of distributed node 0 and distributed node 1 corresponds to the graph data shown in FIG. 4a and FIG. 4b respectively, where the white vertices are the vertices corresponding to primary backups, that is, the vertices of the vertex subset saved in the external memory of that distributed node, and the black vertices are the vertices corresponding to mirror backups.
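  • A minimal Python sketch of continuous block partitioning is given below; it is an illustration only, and the exact grouping of edges into the outgoing/incoming edge sets and the placement of primary and mirror backups follow the embodiment and FIG. 4a/4b rather than this simplified rule.

```python
def partition(num_vertices, edges, num_nodes):
    """Continuous block partitioning: consecutive vertex ranges go to each node."""
    block = -(-num_vertices // num_nodes)            # ceil(num_vertices / num_nodes)

    def owner(v):
        return min(v // block, num_nodes - 1)        # vertex id -> distributed node id

    shards = [{"vertices": set(), "edges": set(), "mirrors": set()}
              for _ in range(num_nodes)]
    for v in range(num_vertices):
        shards[owner(v)]["vertices"].add(v)          # primary backups live with the owner
    for src, dst in edges:                           # edge = (in-point, out-point)
        for node in {owner(src), owner(dst)}:        # both endpoint owners keep the edge
            shards[node]["edges"].add((src, dst))
    for shard in shards:                             # remote endpoints become mirror backups
        touched = {v for e in shard["edges"] for v in e}
        shard["mirrors"] = touched - shard["vertices"]
    return shards

# With the 10-vertex graph of FIG. 1 and two nodes, vertices 0-4 land on
# node 0 and vertices 5-9 on node 1, matching the example above.
```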
  • By dividing the target graph data and saving it in the external memory of each distributed node, each distributed node can perform data processing in parallel. Compared with the existing single-machine, in-memory data processing method, this not only improves data processing efficiency but also reduces the processing pressure on a single machine, and can therefore meet the processing requirements of big data.
  • In practical applications, the target graph data can also be divided manually, and the fragmented data and the attribute information of the target graph data obtained by the division can be preset in the external memory of each distributed node.
  • In the embodiments of this specification, step S104 may include the following steps S104-2 and S104-4.
  • Step S104-2: if any active vertex in the active vertex set is stored in the external memory of the first distributed node, calculate the density of the active vertex set according to a preset calculation method. Specifically, count the number of active vertices in the active vertex set as the third number, and, according to the second numbers (the per-vertex outgoing-edge counts) stored in the external memory of the first distributed node, determine the total number of outgoing edges of the active vertices in the active vertex set as the fourth number; then, according to the preset calculation method, calculate the density of the active vertex set based on the third number and the fourth number.
  • For example, calculating the density of the active vertex set based on the third number and the fourth number may include calculating the sum of the third number and the fourth number and determining the result as the density of the active vertex set.
  • For example, in the first round of iteration the determined active vertex set is {0}, that is, the active vertex set only includes vertex 0. The counted number of active vertices, the third number, is 1; according to the second numbers stored in the external memory of the first distributed node, the total number of outgoing edges of the active vertices in the active vertex set is the total number of outgoing edges of vertex 0 (4 in the example of FIG. 1), so the calculated density of the active vertex set is 1 + 4 = 5.
  • Step S104-4: according to the calculated density, determine the target data processing mode that matches the active vertex set between the preset push data processing mode and pull data processing mode.
  • Specifically, a comparison density is determined according to the first number stored in the external memory of the first distributed node, and it is determined whether the density of the active vertex set is not less than the comparison density; if so, the pull data processing mode is determined as the target data processing mode; if not, the push data processing mode is determined as the target data processing mode.
  • the pushing data processing mode may also be called a push mode
  • the pulling data processing mode may also be called a pull mode.
  • Determining the comparison density according to the first quantity stored in the external memory of the first distributed node may include: calculating the ratio of the first quantity to a preset value, and determining the calculation result as the comparison density.
  • In this way, by selecting the data processing mode that matches the active vertex set, the processing efficiency for the active vertex set can be improved.
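  • The mode selection of steps S104-2 and S104-4 can be sketched as follows (an assumption-laden illustration: the density is taken as the sum of the third and fourth numbers, the comparison density as the first number divided by a preset value, and the divisor of 20 is arbitrary).

```python
def choose_mode(active_vertices, out_degree, total_edges, divisor=20):
    """Return "pull" or "push" for the current active vertex set.

    active_vertices: the active vertex set
    out_degree:      dict vertex -> outgoing-edge count (the second numbers)
    total_edges:     the first number, i.e. the total edge count of the target graph data
    divisor:         preset value used for the comparison density (illustrative)
    """
    active = list(active_vertices)
    third = len(active)                                   # number of active vertices
    fourth = sum(out_degree.get(v, 0) for v in active)    # their total outgoing edges
    density = third + fourth                              # preset calculation: the sum
    comparison_density = total_edges / divisor            # ratio of first number to preset value
    return "pull" if density >= comparison_density else "push"
```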
  • In the embodiments of this specification, step S106 may include the following steps S106-2 and S106-4.
  • Step S106-2: if it is determined that the target data processing mode is the push data processing mode, determine, according to the outgoing edge set and the mirror backups stored in the external memory of the first distributed node, the target out-points corresponding to the active vertex when the active vertex is used as an in-point.
  • Take, for example, the distributed system including the aforementioned distributed node 0 and distributed node 1. In the first round of iteration, the active vertex set is {0}.
  • In step S104, distributed node 1 determines that no active vertex in the active vertex set is saved in its external memory, and performs no processing.
  • Distributed node 0 determines in step S104 that its external memory stores an active vertex of the active vertex set, and determines according to the calculated density that the target data processing mode is the push data processing mode. Therefore, in step S106-2, according to the outgoing edge set and the mirror backups stored in the external memory of distributed node 0, when vertex 0 is used as the in-point, the corresponding target out-points are determined to be vertex 1, vertex 2, vertex 5, and vertex 6.
  • Step S106-4: determine the target out-points as the vertices to be updated; for example, vertex 1, vertex 2, vertex 5, and vertex 6 determined in step S106-2 are determined as the vertices to be updated.
  • In the embodiments of this specification, step S108 may include the following steps S108-2 to S108-6. Step S108-2: obtain the first data of the active vertex saved in the external memory of the first distributed node. Step S108-4: determine the target distributed nodes where the vertex to be updated and the mirror backup of the vertex to be updated are located. Specifically, the association relationship between each vertex and the distributed node where it is located, and the association relationship between the mirror backup of each vertex and the distributed node where that mirror backup is located, can be established in advance and saved to each distributed node; the first distributed node then determines, according to these association relationships, the target distributed node where the vertex to be updated is located and the target distributed node where the mirror backup of the vertex to be updated is located.
  • For example, distributed node 0 determines that vertex 1 and vertex 2 to be updated are on distributed node 0 and that vertex 5 and vertex 6 to be updated are on distributed node 1, and determines, according to the saved association relationships, that the mirror backup of vertex 2 is on distributed node 1 and that the mirror backups of vertex 5 and vertex 6 are on distributed node 0; distributed node 0 and distributed node 1 are therefore determined as the target distributed nodes.
  • Step S108-6: according to the saved vertex information of the active vertex and the first data of the active vertex in the external memory, send a first update message to the target distributed node, so that the target distributed node updates the second data of the vertex to be updated in its external memory.
  • the vertex information may be a vertex identifier, or an event element corresponding to the vertex.
  • For example, when the target event is a resource transfer event, the active vertex can correspond to the resource transfer-out account, the vertex information of the active vertex can be the account information of the resource transfer-out account, the vertex to be updated can correspond to a resource transfer-in account, and the first data can include data such as the amount of resources transferred to each resource transfer-in account.
  • It should be pointed out that, during the first iteration, the distributed node operated by the user can save the first data input by the user into the primary backup of the active vertex in its external memory.
  • For example, distributed node 0 sends a first update message to distributed node 0 according to the vertex information of vertex 0 and data such as the amount of resources transferred to the resource transfer-in account corresponding to vertex 1, the amount transferred to the account corresponding to vertex 2, the amount transferred to the account corresponding to vertex 5, and the amount transferred to the account corresponding to vertex 6; and distributed node 0 sends a first update message to distributed node 1 according to the vertex information of vertex 0 and data such as the amount of resources transferred to the resource transfer-in account corresponding to vertex 2, the amount transferred to the account corresponding to vertex 5, and the amount transferred to the account corresponding to vertex 6.
  • In this way, when the target data processing mode is determined to be the push data processing mode, data update processing can be performed along the outgoing edges of the active vertex, realizing the data update of the target out-points corresponding to the active vertex as the in-point.
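  • The push-mode flow of steps S106-2 through S108 can be sketched roughly as below; the message layout, the locate callback, and the dictionary-based stand-in for external memory are illustrative assumptions, not the patented implementation.

```python
from collections import defaultdict

def push_round(out_edges, locate, active_vertices, first_data):
    """Push mode: walk the outgoing edges of each local active vertex and group
    first update messages by the target distributed node.

    out_edges:  dict in-point -> list of (in-point, out-point) edges held in
                this node's external memory
    locate:     callable vertex -> id of the distributed node holding the vertex
                to be updated (or its mirror backup)
    first_data: dict active vertex -> its first data in external memory
    """
    outbox = defaultdict(list)                 # target node id -> first update messages
    for v in active_vertices:
        for _, dst in out_edges.get(v, ()):    # v as in-point; dst is a target out-point
            outbox[locate(dst)].append({
                "vertex_info": v,              # vertex information of the active vertex
                "to_update": dst,              # the vertex to be updated
                "first_data": first_data[v],
            })
    return outbox                              # messages to send to each target node
```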
  • the method may further include the following steps S110 to S114.
  • Step S110 if the first update message sent by the first distributed node and/or other distributed nodes is received, the active vertex corresponding to the vertex information in the first update message is determined as the target active vertex;
  • active vertices may be stored in multiple distributed nodes. Therefore, the first distributed node may only receive the first update message sent by the first distributed node, or may only receive the update message sent by other distributed nodes. It is also possible to receive the first update message sent by the first distributed node and other distributed nodes at the same time.
  • Step S112: according to the outgoing edge set stored in the external memory of the first distributed node, determine at least one target out-point corresponding to the target active vertex when it is used as the in-point.
  • Step S114: according to the first data in the first update message, update the second data of the target out-point in the external memory of the first distributed node.
  • the determined target out point is the aforementioned vertex to be updated.
  • For example, distributed node 0 determines vertex 0 as the target active vertex according to the received first update message, and determines that the target out-points are vertex 1 and vertex 2.
  • The second data of vertex 1 and vertex 2 in the external memory of distributed node 0 is then updated; for example, resource transfer data is added to the second data of vertex 1 and vertex 2 respectively, and the total amount of resources in the resource accounts corresponding to vertex 1 and vertex 2 is updated.
  • Similarly, distributed node 1 determines vertex 0 as the target active vertex and, according to the outgoing edge set stored in its external memory, determines that the target out-points are vertex 5 and vertex 6; it then updates the second data of vertex 5 and vertex 6 in its external memory according to the corresponding first data in the first update message, for example by adding resource transfer data to the second data of vertex 5 and vertex 6 respectively.
  • Because, in the push data processing mode, a distributed node that receives the first update message only updates the second data of the target out-points (that is, the vertices to be updated) determined based on its outgoing edge set, and does not update the mirror backups corresponding to the first update message (for example, distributed node 0 does not need to update the mirror backups of vertex 5 and vertex 6 in its external memory), memory usage and synchronization time are saved; moreover, the data of the vertices to be updated that correspond to mirror backups in the first update message can be used in real time without being written to disk, which saves the considerable cost of external-memory IO.
  • step S114 may include the following steps S114-2 and S114-4.
  • Step S114-2: determine the target thread corresponding to each target out-point.
  • Specifically, the correspondence between the vertex information of each vertex and thread information is established in advance and saved in the external memory of the corresponding distributed node; the distributed node determines the target thread corresponding to each target out-point according to this correspondence.
  • Step S114-4: send the first update message to the corresponding target thread, so that the target thread updates the second data of the corresponding target out-point in the external memory of the first distributed node according to the first data in the first update message.
  • Step S114-4 may include: determining whether the second data of the target out-point is in a locked state; if so, storing the first update message in the message queue of the corresponding target out-point, so that after the second data is unlocked, the target thread corresponding to the target out-point obtains the first update message from the message queue, locks the second data, and updates it according to the first data in the first update message; if not, sending the first update message directly to the corresponding target thread, which locks the second data of the target out-point, updates it according to the first data in the first update message, and unlocks the second data after the update is completed.
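  • A minimal threading sketch of this queue-and-lock handling is shown below, assuming one worker thread per target out-point and a placeholder additive update rule; it is an illustration, not the patented implementation.

```python
import threading
from queue import Queue

class VertexWorker(threading.Thread):
    """Target thread for one target out-point: first update messages are put on
    its message queue, and the thread locks the second data before each update."""

    def __init__(self, second_data=0):
        super().__init__(daemon=True)
        self.second_data = second_data
        self.lock = threading.Lock()      # guards the second data of this out-point
        self.inbox = Queue()              # message queue of first update messages

    def send(self, first_data):
        self.inbox.put(first_data)        # deliver a first update message

    def run(self):
        while True:
            first_data = self.inbox.get()
            if first_data is None:        # sentinel used to stop the worker
                return
            with self.lock:               # lock, update, then unlock
                self.second_data += first_data   # placeholder update rule
```

  • In this sketch, senders deliver the first data of a first update message by calling send(); the worker thread serializes the updates to its second data, standing in for the lock-then-update behaviour described above.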
  • Step S116: if the first update message is received, then after the update of the second data of the corresponding vertex to be updated is completed according to the first update message, determine whether a preset iteration stop condition is satisfied.
  • When the target data processing mode is the push data processing mode, the update processing according to the first update message can refer to the implementation of steps S110 to S114 above; when the target data processing mode is the pull data processing mode, the update processing according to the first update message can refer to the related descriptions below; repeated parts are not described again here.
  • The preset iteration stop condition can be set as needed in practical applications. For example, when the number of iterations reaches a preset iteration-count threshold, it is determined that the preset iteration stop condition is met; as another example, when the active vertex set is determined to be empty, it is determined that the preset iteration stop condition is met; as yet another example, both conditions (the iteration-count threshold and the active vertex set being empty) are preset, and when either one is satisfied, it is determined that the preset iteration stop condition is met.
  • Step S118 if not, determine the vertex to be updated as a new active vertex, and send a second update message to each distributed node in the distributed system according to the vertex information of the new active vertex.
  • Step S120 receiving the second update message sent by the target distributed node, determining the new active vertex corresponding to the received second update message as the active vertex set currently participating in data processing in the target graph data, and returning to step S104.
  • When the target distributed nodes determined by the first distributed node only include the first distributed node, the first distributed node receives the second update message sent by itself; when the target distributed nodes determined by the first distributed node include both the first distributed node and other distributed nodes in the distributed system, the first distributed node receives the second update messages sent by itself and by the other distributed nodes; when the target distributed nodes determined by the first distributed node only include other distributed nodes in the distributed system, the first distributed node receives the second update messages sent by the other distributed nodes.
  • Continuing with the above example, after distributed node 0 updates the second data of vertex 1 and vertex 2, it determines vertex 1 and vertex 2 as new active vertices and sends a second update message to each distributed node in the distributed system according to the vertex information of vertex 1 and vertex 2; after distributed node 1 updates the second data of vertex 5 and vertex 6, it determines vertex 5 and vertex 6 as new active vertices and sends a second update message to each distributed node in the distributed system according to the vertex information of vertex 5 and vertex 6.
  • Based on the received second update messages, distributed node 0 and distributed node 1 determine the vertex set consisting of vertex 1, vertex 2, vertex 5, and vertex 6 as the active vertex set currently participating in data processing in the target graph data; that is, the active vertex set in the second round of iteration is {1, 2, 5, 6}.
  • In this way, each distributed node in the distributed system can determine the active vertex set currently participating in data processing in the target graph data for the next round of iterative processing, so as to perform the subsequent processing.
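  • The stop-condition check and the exchange of second update messages (steps S116 to S120) might look roughly like the sketch below; the iteration threshold of 30 is an arbitrary illustrative value.

```python
def should_stop(iteration, active_vertices, max_iterations=30):
    """Preset iteration stop condition: the preset iteration-count threshold is
    reached or the active vertex set is empty (max_iterations is illustrative)."""
    return iteration >= max_iterations or not active_vertices

def merge_second_updates(received_messages):
    """Union the new active vertices announced in the second update messages
    received from the distributed nodes; the result is the active vertex set
    that participates in data processing in the next round of iteration."""
    new_active = set()
    for message in received_messages:      # each message lists new active vertices
        new_active.update(message)
    return new_active
```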
  • In the embodiments of this specification, step S106 may include the following steps S106-6 to S106-10. Step S106-6: if it is determined that the target data processing mode is the pull data processing mode, determine the out-point corresponding to each incoming edge in the incoming edge set stored in the external memory of the first distributed node as a vertex to be processed. Step S106-8: according to the incoming edge set stored in the external memory of the first distributed node, determine the corresponding target in-points when each vertex to be processed is used as the out-point. Step S106-10: determine whether the saved active vertices include any of the target in-points; if so, determine the corresponding vertex to be processed as a vertex to be updated.
  • The distributed node 0 determines that the active vertex set of the second round of iterative processing is {1, 2, 5, 6}.
  • Distributed node 0 determines that active vertex 1 and active vertex 2 are stored in its external memory, then calculates the density of the active vertex set, and determines that the target data processing mode is the pull data processing mode according to the density;
  • According to the incoming edge set stored in its external memory, vertices 1 to 7 and vertex 9 are determined as vertices to be processed.
  • Distributed node 0 determines, according to the incoming edge set stored in its external memory, that when vertex 1 is used as the out-point the corresponding target in-point is vertex 0; since the active vertices currently saved by distributed node 0 are vertex 1 and vertex 2, which do not include vertex 0, vertex 1 is not a vertex to be updated. Similarly, when vertex 2 is used as the out-point the corresponding target in-point is vertex 0, so vertex 2 is not a vertex to be updated; when vertex 3 is used as the out-point the corresponding target in-points are vertex 1 and vertex 2, which are the saved active vertices, so vertex 3 is a vertex to be updated; when vertex 4 is used as the out-point the corresponding target in-points are vertex 1 and vertex 2, so vertex 4 is a vertex to be updated; when vertex 5 is used as the out-point the corresponding target in-point is vertex 0, so vertex 5 is not a vertex to be updated; when vertex 6 is used as the out-point the corresponding target in-point is vertex 0, so vertex 6 is not a vertex to be updated; when vertex 7 is used as the out-point the corresponding target in-point is vertex 2, which is a saved active vertex, so vertex 7 is a vertex to be updated; and when vertex 9 is used as the out-point the corresponding target in-point is vertex 4, which is not a saved active vertex, so vertex 9 is not a vertex to be updated.
  • The distributed node 1 determines that the active vertex set of the second round of iterative processing is {1, 2, 5, 6}.
  • Distributed node 1 determines that active vertex 5 and active vertex 6 are stored in its external memory, then calculates the density of the active vertex set, and determines that the target data processing mode is the pull data processing mode according to the density;
  • Vertex 4 and vertices 7 to 9 are determined as vertices to be processed.
  • Distributed node 1 determines, according to the incoming edge set stored in its external memory, that when vertex 4 is used as the out-point the corresponding target in-point is vertex 5; since the active vertices currently saved by distributed node 1 are vertex 5 and vertex 6, which include vertex 5, vertex 4 is a vertex to be updated. Similarly, when vertex 7 is used as the out-point the corresponding target in-points are vertex 5 and vertex 6, so vertex 7 is a vertex to be updated; when vertex 8 is used as the out-point the corresponding target in-points are vertex 5 and vertex 6, so vertex 8 is a vertex to be updated; and when vertex 9 is used as the out-point the corresponding target in-point is vertex 7, which is not a saved active vertex, so vertex 9 is not a vertex to be updated.
  • In the embodiments of this specification, when the target data processing mode is the pull data processing mode, step S108 includes the following steps S108-8 and S108-10. Step S108-8: according to the first data of the active vertices stored in the external memory, generate the temporary update data corresponding to the vertex to be updated. Step S108-10: according to the vertex information of the vertex to be updated and the temporary update data, send a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates the second data of the vertex to be updated in its external memory according to the first update message.
  • For example, distributed node 0 generates the temporary update data of vertex 3 according to the first data of active vertex 1 and active vertex 2 stored in its external memory, generates the temporary update data of vertex 4 according to the first data of active vertex 1 and active vertex 2 stored in its external memory, and generates the temporary update data of vertex 7 according to the first data of active vertex 2 stored in its external memory. Then, according to the vertex information of vertex 3 and vertex 4 and their generated temporary update data, it sends a first update message to distributed node 0, where vertex 3 and vertex 4 are located; and according to the vertex information of vertex 7 and its generated temporary update data, it sends a first update message to distributed node 1, where vertex 7 is located.
  • Distributed node 1 generates the temporary update data of vertex 4 according to the first data of active vertex 5 stored in its external memory, and generates the temporary update data of vertex 7 and vertex 8 according to the first data of active vertex 5 and active vertex 6 stored in its external memory. Then, according to the vertex information of vertex 4 and its generated temporary update data, it sends a first update message to distributed node 0, where vertex 4 is located; and according to the vertex information of vertex 7 and vertex 8 and their generated temporary update data, it sends a first update message to distributed node 1, where vertex 7 and vertex 8 are located.
  • In this way, when the target data processing mode is determined to be the pull data processing mode, the vertices to be updated and their temporary update data are determined based on the incoming edge set, and the temporary update data is sent to the target distributed node where each vertex to be updated is located, realizing the data update of the vertices to be updated.
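  • A simplified Python sketch of this pull-mode flow is shown below; it assumes incoming-edge entries written as (out-point, in-point), dictionary-based stand-ins for external memory, and a placeholder summation rule for the temporary update data, none of which are prescribed by the patent.

```python
from collections import defaultdict

def pull_round(in_edges, local_active, first_data):
    """Pull mode: from the incoming-edge set stored in external memory, find the
    vertices to be updated and build their temporary update data.

    in_edges:     iterable of (out_point, in_point) entries (the document's
                  incoming-edge-set convention)
    local_active: active vertices stored on this node
    first_data:   dict active vertex -> its first data
    """
    # Group the target in-points of every vertex to be processed (the out-points).
    in_points_of = defaultdict(list)
    for out_point, in_point in in_edges:
        in_points_of[out_point].append(in_point)

    temp_updates = {}                                   # vertex to update -> temporary data
    for out_point, in_points in in_points_of.items():
        active_sources = [p for p in in_points if p in local_active]
        if active_sources:                              # at least one active in-point
            # Temporary update data generated from the first data of the active
            # in-points (summation is only a placeholder rule).
            temp_updates[out_point] = sum(first_data[p] for p in active_sources)
    return temp_updates                                 # carried in first update messages
```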
  • Step S122: if the first update message sent by the first distributed node and/or other distributed nodes is received, determine the vertex to be updated according to the vertex information in the first update message. Step S124: according to the temporary update data in the first update message, update the second data of the vertex to be updated in the external memory of the first distributed node.
  • For example, when distributed node 0 receives the first update message sent by distributed node 0, it determines that the vertices to be updated are vertex 3 and vertex 4, and updates the second data of vertex 3 and vertex 4 in its external memory according to the temporary update data in the first update message; for example, it saves the resource transfer data of the resource transfer-in account corresponding to vertex 3, included in the temporary update data, to the second data of vertex 3 and updates the total amount of resources in the resource account corresponding to vertex 3, and likewise saves the resource transfer data of the resource transfer-in account corresponding to vertex 4 to the second data of vertex 4 and updates the total amount of resources in the resource account corresponding to vertex 4. When distributed node 0 receives the first update message sent by distributed node 1, it determines that the vertex to be updated is vertex 4 and continues to update the second data of vertex 4 in its external memory according to the temporary update data in that first update message.
  • Similarly, distributed node 1 determines, according to the first update messages it receives, that the vertices to be updated are vertex 7 and vertex 8, and updates the second data of vertex 7 and vertex 8 in its external memory according to the temporary update data in the first update messages.
  • After step S124, the aforementioned steps S116 to S120 can still be performed to determine the active vertex set in the target graph data that participates in data processing during the third round of iterative processing.
  • For example, distributed node 0 determines vertex 3 and vertex 4 as new active vertices, and sends a second update message to each distributed node in the distributed system according to the vertex information of vertex 3 and vertex 4.
  • Distributed node 1 determines vertex 7 and vertex 8 as new active vertices, and sends a second update message to each distributed node in the distributed system according to the vertex information of vertex 7 and vertex 8.
  • Based on the received second update messages, each distributed node determines that the active vertex set currently participating in data processing is {3, 4, 7, 8}.
  • In the embodiments of this specification, a first function and a second function are preset, and the aforementioned determination of the vertices to be updated and the update of the second data are performed based on the first function and the second function, as shown in the corresponding figure.
  • Correspondingly, step S106 may include the following step S1060, and step S108 may include the following step S1080. Step S1060: call the first function, and determine, based on the first function and according to the target data processing mode, the vertex to be updated that has the association relationship with the active vertex. Step S1080: based on the first function, according to the first data of the active vertex in the external memory of the first distributed node, send a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node calls the second function and, based on the second function, updates the second data of the vertex to be updated in its external memory according to the first update message.
  • the first function may also be called a signal function (signal), and the second function may also be called a slot function (slot).
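  • As a rough illustration of the signal/slot split (the patent names only the two functions, so the bodies, the message layout, and the additive update rule below are assumptions):

```python
class GraphEngine:
    """One distributed node; locate() returns the GraphEngine-like node object
    that holds a given vertex to be updated."""

    def __init__(self, second_data, locate):
        self.second_data = second_data        # vertex -> second data on this node
        self.locate = locate                  # vertex -> target distributed node

    def signal(self, active_vertex, first_data, to_update):
        """First function: build first update messages for the vertices to be
        updated and hand them to the node where each vertex is located."""
        for vertex in to_update:
            target_node = self.locate(vertex)
            target_node.slot({"to_update": vertex, "first_data": first_data})

    def slot(self, first_update_message):
        """Second function: update the second data of the vertex to be updated
        in this node's external memory according to the first update message."""
        vertex = first_update_message["to_update"]
        self.second_data[vertex] += first_update_message["first_data"]
```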
  • It should be noted that, in the above description, when the target distributed nodes determined by the first distributed node include the first distributed node itself, the first distributed node sends the first update message and the second update message to itself; in practical applications, the first distributed node may also choose not to send data to itself.
  • the target graph data is generated based on the event information of multiple related target events, and data processing is performed based on the target graph data, which not only realizes the effective correlation of data in big data and the centralized processing of related data, avoids data omission, but also enables Improve data processing efficiency.
  • In addition, the distributed data processing system supports multiple data processing modes; by determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved, and the data processing efficiency is further improved.
  • FIG. 12 is a schematic diagram of the module composition of a distributed data processing apparatus provided by an embodiment of this specification. As shown in FIG. 12, the apparatus includes: a first determination module 201, which determines the set of active vertices currently participating in data processing in the target graph data, wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects the vertices that have an association relationship; a second determination module 202, which, if any active vertex of the active vertex set is stored in the external memory of the first distributed node, determines the target data processing mode that matches the active vertex set among the multiple preset data processing modes; a third determination module 203, which determines, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex; and a sending module 204, which sends, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
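The module breakdown of FIG. 12 could be sketched as four cooperating components, for example as below; the class and method names are illustrative, and the density formula follows the push/pull selection rule described elsewhere in this specification.

```python
# Hedged sketch of the four modules of FIG. 12 as plain Python classes.
class FirstDeterminationModule:
    def determine_active_set(self, broadcast_messages):
        return set().union(*broadcast_messages) if broadcast_messages else set()

class SecondDeterminationModule:
    def choose_mode(self, active_set, out_degree, total_edges, preset=2):
        # density = number of active vertices + their total out-degree
        density = len(active_set) + sum(out_degree.get(v, 0) for v in active_set)
        return "pull" if density >= total_edges / preset else "push"

class ThirdDeterminationModule:
    def vertices_to_update(self, active_vertex, out_edges):
        return list(out_edges.get(active_vertex, []))

class SendingModule:
    def build_first_update(self, active_vertex, first_data, targets):
        return [{"vertex": t, "from": active_vertex, "first_data": first_data}
                for t in targets]
```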
  • Optionally, the apparatus further includes a receiving module, which receives slice data and attribute information of the target graph data sent by a specified device, the slice data being obtained by the specified device dividing the target graph data according to a preset data division method, and which saves the slice data and the attribute information into the external memory of the first distributed node. Alternatively, the apparatus further includes a division module, which, if it is determined that the first distributed node has preprocessing authority, divides the target graph data according to a preset data division method to obtain the slice data to be allocated to each distributed node in the distributed system, and sends the slice data and the attribute information of the target graph data to each distributed node in the distributed system, so that each distributed node saves the slice data and the attribute information it receives into its external memory.
  • Optionally, the vertices include in-points and out-points, each edge in the target graph data is determined as a directed edge, and the directed edge points from the in-point to the out-point; the directed edge is an outgoing edge of the in-point and an incoming edge of the out-point. The slice data includes the divided vertex subset, the incoming edge set corresponding to the incoming edges of the vertices in the vertex subset, the outgoing edge set corresponding to the outgoing edges of the vertices in the vertex subset, the primary backup of each vertex in the vertex subset, and the mirror backups of the vertices that form a directed edge with a vertex in the vertex subset; the primary backup includes the element data of the event element corresponding to the vertex, and the mirror backup is used to transmit messages. The attribute information includes a first number, which is the number of edges of the target graph data, and a second number, which is the number of outgoing edges of each vertex in the target graph data.
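A minimal sketch of the slice data and attribute information listed above, expressed as Python dataclasses, might look as follows; the concrete field layout is an assumption, and the example values loosely follow the shard of distributed node 0 in FIG. 4a.

```python
# Hedged sketch of the slice (shard) data kept in a node's external memory.
from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class ShardData:
    vertex_subset: Set[int] = field(default_factory=set)           # vertices owned by this node
    in_edges: Dict[int, List[int]] = field(default_factory=dict)   # vertex -> its in-neighbours
    out_edges: Dict[int, List[int]] = field(default_factory=dict)  # vertex -> its out-neighbours
    master: Dict[int, dict] = field(default_factory=dict)          # primary backup: element data per vertex
    mirrors: Set[int] = field(default_factory=set)                 # mirror backups used to pass messages

@dataclass
class GraphAttributes:
    first_number: int = 0                                          # total number of edges in the target graph
    second_number: Dict[int, int] = field(default_factory=dict)    # out-degree of each vertex

# Partial shard of distributed node 0 for the example graph (illustrative values)
shard0 = ShardData(vertex_subset={0, 1, 2, 3, 4},
                   out_edges={0: [1, 2], 1: [3, 4], 2: [3, 4], 5: [4]},
                   mirrors={5, 6, 7, 9})
```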
  • Optionally, the third determination module 203, if the target data processing mode is the push data processing mode, determines, according to the outgoing edge set and the mirror backups stored in the external memory of the first distributed node, the target out-points corresponding to the arbitrary active vertex when it serves as an in-point, and determines the target out-points as the vertices to be updated. Correspondingly, the sending module 204 obtains the first data of the arbitrary active vertex from the external memory of the first distributed node, determines the target distributed nodes where the vertices to be updated and the mirror backups of the vertices to be updated are located, and sends a first update message to the target distributed nodes according to the vertex information of the arbitrary active vertex and the first data.
  • Optionally, the apparatus further includes a first update module, which, if the first update message sent by the first distributed node and/or another distributed node is received, determines the active vertex corresponding to the vertex information in the first update message as the target active vertex; determines, according to the outgoing edge set stored in the external memory of the first distributed node, at least one target out-point corresponding to the target active vertex when it serves as an in-point; and updates, according to the first data in the first update message, the second data of the target out-points in the external memory of the first distributed node.
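The push-mode path described above can be sketched roughly as a sender side that forwards an active vertex's first data along its out-edges and a receiver side that updates the target out-points it owns; the helper names and message fields below are assumptions made for illustration.

```python
# Hedged sketch of the push-mode signal and slot sides.
def push_signal(active_vertex, first_data, out_edges, mirror_owner):
    """Build first update messages for the nodes holding the targets or their mirrors."""
    targets = out_edges.get(active_vertex, [])
    nodes = {mirror_owner(t) for t in targets}
    return [{"node": n, "active_vertex": active_vertex, "first_data": first_data}
            for n in nodes]

def push_slot(message, out_edges, second_data):
    """On receipt: re-derive the target out-points locally and update their second data."""
    src = message["active_vertex"]
    for dst in out_edges.get(src, []):        # only out-points present in the local shard
        second_data[dst] = second_data.get(dst, 0.0) + message["first_data"]["amount"]

# Round 1 of the running example on node 0: active vertex 0 pushes to vertices 1 and 2
local_out = {0: [1, 2]}
store = {}
push_slot({"active_vertex": 0, "first_data": {"amount": 4.0}}, local_out, store)
```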
  • Optionally, the third determination module 203, if the target data processing mode is the pull data processing mode, determines the out-point of each incoming edge in the incoming edge set stored in the external memory of the first distributed node as a vertex to be processed; determines, according to that incoming edge set, the target in-points corresponding to each vertex to be processed when it serves as an out-point; determines whether the arbitrary active vertex includes a target in-point; and if so, determines the corresponding vertex to be processed as a vertex to be updated.
  • Correspondingly, the sending module 204 generates, according to the first data of the arbitrary active vertex in the external memory of the first distributed node, the temporary update data of the corresponding vertex to be updated, and sends a first update message to the target distributed node where the vertex to be updated is located according to the vertex information of the vertex to be updated and the temporary update data.
  • Optionally, the apparatus further includes a second update module, which, if the first update message sent by the first distributed node or another distributed node is received, determines the vertex to be updated according to the vertex information in the first update message, and updates, according to the temporary update data in the first update message, the second data of the vertex to be updated in the external memory of the first distributed node.
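The pull-mode path can likewise be sketched as follows: each owned vertex checks whether any of its in-neighbours is active, aggregates their first data into temporary update data, and ships the result to the vertex's owner, which then applies it. All names are illustrative assumptions.

```python
# Hedged sketch of the pull-mode signal and slot sides.
def pull_signal(owned_vertices, in_edges, active_first_data, owner_of):
    """Return first update messages carrying pre-aggregated temporary update data."""
    messages = []
    for v in owned_vertices:
        temp = sum(active_first_data[u]["amount"]
                   for u in in_edges.get(v, []) if u in active_first_data)
        if temp:                               # v has at least one active in-neighbour
            messages.append({"node": owner_of(v), "vertex": v, "temp_update": temp})
    return messages

def pull_slot(message, second_data):
    """Apply the temporary update data to the vertex's second data."""
    v = message["vertex"]
    second_data[v] = second_data.get(v, 0.0) + message["temp_update"]

# Node 0 in round 2 of the running example: active vertices 1 and 2 feed vertex 3
msgs = pull_signal([3], {3: [1, 2]}, {1: {"amount": 2.0}, 2: {"amount": 1.0}}, lambda v: 0)
store = {}
for m in msgs:
    pull_slot(m, store)
```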
  • The distributed data processing apparatus provided by an embodiment of this specification determines the set of active vertices currently participating in data processing in the target graph data; if any active vertex of the active vertex set is stored in the external memory of the first distributed node, it determines the target data processing mode that matches the active vertex set among the multiple preset data processing modes; it determines, according to the target data processing mode, the vertex to be updated that has an association relationship with the arbitrary active vertex; and it sends, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
  • The target graph data is generated based on the event information of multiple associated target events, and data processing is performed based on the target graph data. This not only achieves effective correlation of data within big data and centralized processing of the correlated data, avoiding data omission, but also improves data processing efficiency.
  • In addition, the distributed data processing system supports multiple data processing modes; by determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved, and the data processing efficiency is further improved.
  • Further, corresponding to the distributed data processing method described above and based on the same technical concept, an embodiment of this specification also provides a distributed data processing device for executing the above distributed data processing method. FIG. 13 is a schematic structural diagram of a distributed data processing device provided by an embodiment of this specification.
  • As shown in FIG. 13, distributed data processing devices may vary considerably in configuration or performance, and may include one or more processors 301 and a memory 302; one or more applications or data may be stored in the memory 302.
  • The memory 302 may be short-term storage or persistent storage.
  • the application program stored in the memory 302 may include one or more modules (not shown in the figure), and each module may include a series of computer-executable instructions in a distributed data processing device.
  • the processor 301 may be configured to communicate with the memory 302, and execute a series of computer-executable instructions in the memory 302 on the distributed data processing device.
  • the distributed data processing device may also include one or more power sources 303, one or more wired or wireless network interfaces 304, one or more input and output interfaces 305, one or more keyboards 306, and the like.
  • In a specific embodiment, the distributed data processing device includes a memory and one or more programs, wherein the one or more programs are stored in the memory and may include one or more modules, each module may include a series of computer-executable instructions for the distributed data processing device, and the one or more programs are configured to be executed by one or more processors.
  • The one or more programs include computer-executable instructions for: determining the set of active vertices currently participating in data processing in the target graph data, wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects the vertices that have an association relationship; if any active vertex of the active vertex set is stored in the external memory of the first distributed node, determining the target data processing mode that matches the active vertex set among the multiple preset data processing modes; determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex; and sending, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
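Putting the pieces together, one round of the method on a single node might be organised roughly as the skeleton below; this is a simplified sketch under assumed callback signatures, not a definitive implementation of the claimed method.

```python
# Hedged sketch of a node's per-round driver loop tying the steps together.
def run_round(active_set, shard, mode_selector, signal_fn, slot_fn, send, recv):
    mode = mode_selector(active_set)                    # push or pull
    for v in active_set & shard.vertex_subset:          # locally stored active vertices
        for msg in signal_fn(v, shard.master.get(v, {}), mode, shard):
            send(msg)                                   # first update messages
    updated = set()
    for msg in recv():                                  # messages addressed to this node
        slot_fn(msg, shard)                             # update second data in external memory
        updated.add(msg["vertex"])
    send({"second_update": sorted(updated)})            # announce new active vertices
    return updated                                      # feeds the next round's active set
```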
  • Optionally, when the computer-executable instructions are executed, determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex includes: if it is determined that the target data processing mode is the push data processing mode, determining, according to the outgoing edge set and the mirror backups stored in the external memory of the first distributed node, the target out-points corresponding to the arbitrary active vertex when it serves as an in-point, and determining the target out-points as the vertices to be updated; and sending, according to the first data of the arbitrary active vertex in the external memory, the first update message to the target distributed node where the vertex to be updated is located includes: obtaining the first data of the arbitrary active vertex from the external memory of the first distributed node; determining the target distributed nodes where the vertices to be updated and the mirror backups of the vertices to be updated are located; and sending a first update message to the target distributed nodes according to the vertex information of the arbitrary active vertex and the first data.
  • Optionally, when the computer-executable instructions are executed, the method further includes: if the first update message sent by the first distributed node and/or another distributed node is received, determining the active vertex corresponding to the vertex information in the first update message as the target active vertex; determining, according to the outgoing edge set stored in the external memory of the first distributed node, at least one target out-point corresponding to the target active vertex when it serves as an in-point; and updating, according to the first data in the first update message, the second data of the target out-points in the external memory of the first distributed node.
  • Optionally, when the computer-executable instructions are executed, determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex includes: if the target data processing mode is the pull data processing mode, determining the out-point of each incoming edge in the incoming edge set stored in the external memory of the first distributed node as a vertex to be processed; determining, according to that incoming edge set, the target in-points corresponding to each vertex to be processed when it serves as an out-point; determining whether the arbitrary active vertex includes a target in-point; and if so, determining the corresponding vertex to be processed as a vertex to be updated.
  • Optionally, when the computer-executable instructions are executed, sending, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located includes: generating, according to the first data of the arbitrary active vertex in the external memory of the first distributed node, the temporary update data of the corresponding vertex to be updated; and sending a first update message to the target distributed node where the vertex to be updated is located according to the vertex information of the vertex to be updated and the temporary update data.
  • The distributed data processing device provided by an embodiment of this specification determines the set of active vertices currently participating in data processing in the target graph data; if any active vertex of the active vertex set is stored in the external memory of the first distributed node, it determines the target data processing mode that matches the active vertex set among the multiple preset data processing modes; it determines, according to the target data processing mode, the vertex to be updated that has an association relationship with the arbitrary active vertex; and it sends, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
  • The target graph data is generated based on the event information of multiple associated target events, and data processing is performed based on the target graph data. This not only achieves effective correlation of data within big data and centralized processing of the correlated data, avoiding data omission, but also improves data processing efficiency.
  • In addition, the distributed data processing system supports multiple data processing modes; by determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved, and the data processing efficiency is further improved.
  • an embodiment of this specification also provides a storage medium for storing computer-executable instructions.
  • In a specific embodiment, the storage medium may be a USB flash drive (U disk), an optical disc, a hard disk, or the like.
  • When the computer-executable instructions stored in the storage medium are executed by a processor, the following process can be realized: determining the set of active vertices currently participating in data processing in the target graph data, wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information includes a plurality of event elements of the corresponding target event, each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects the vertices that have an association relationship; if any active vertex of the active vertex set is stored in the external memory of the first distributed node, determining the target data processing mode that matches the active vertex set among the multiple preset data processing modes; determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex; and sending, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
  • Optionally, when the computer-executable instructions stored in the storage medium are executed by a processor, determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex includes: if it is determined that the target data processing mode is the push data processing mode, determining, according to the outgoing edge set and the mirror backups stored in the external memory of the first distributed node, the target out-points corresponding to the arbitrary active vertex when it serves as an in-point, and determining the target out-points as the vertices to be updated; and sending, according to the first data of the arbitrary active vertex in the external memory, the first update message to the target distributed node where the vertex to be updated is located includes: obtaining the first data of the arbitrary active vertex from the external memory of the first distributed node; determining the target distributed nodes where the vertices to be updated and the mirror backups of the vertices to be updated are located; and sending a first update message to the target distributed nodes according to the vertex information of the arbitrary active vertex and the first data.
  • Optionally, when the computer-executable instructions stored in the storage medium are executed by a processor, the method further includes: if the first update message sent by the first distributed node and/or another distributed node is received, determining the active vertex corresponding to the vertex information in the first update message as the target active vertex; determining, according to the outgoing edge set stored in the external memory of the first distributed node, at least one target out-point corresponding to the target active vertex when it serves as an in-point; and updating, according to the first data in the first update message, the second data of the target out-points in the external memory of the first distributed node.
  • Optionally, when the computer-executable instructions stored in the storage medium are executed by a processor, determining, according to the target data processing mode, the vertex to be updated that has the association relationship with the arbitrary active vertex includes: if the target data processing mode is the pull data processing mode, determining the out-point of each incoming edge in the incoming edge set stored in the external memory of the first distributed node as a vertex to be processed; determining, according to that incoming edge set, the target in-points corresponding to each vertex to be processed when it serves as an out-point; determining whether the arbitrary active vertex includes a target in-point; and if so, determining the corresponding vertex to be processed as a vertex to be updated.
  • Optionally, when the computer-executable instructions stored in the storage medium are executed by a processor, sending, according to the first data of the arbitrary active vertex in the external memory, a first update message to the target distributed node where the vertex to be updated is located includes: generating, according to the first data of the arbitrary active vertex in the external memory of the first distributed node, the temporary update data of the corresponding vertex to be updated; and sending a first update message to the target distributed node where the vertex to be updated is located according to the vertex information of the vertex to be updated and the temporary update data.
  • When the computer-executable instructions stored in the storage medium provided by an embodiment of this specification are executed by a processor, the set of active vertices currently participating in data processing in the target graph data is determined; if any active vertex of the active vertex set is stored in the external memory of the first distributed node, the target data processing mode that matches the active vertex set is determined among the multiple preset data processing modes; the vertex to be updated that has an association relationship with the arbitrary active vertex is determined according to the target data processing mode; and, according to the first data of the arbitrary active vertex in the external memory, a first update message is sent to the target distributed node where the vertex to be updated is located, so that the target distributed node updates, according to the first update message, the second data of the vertex to be updated in its external memory.
  • The target graph data is generated based on the event information of multiple associated target events, and data processing is performed based on the target graph data. This not only achieves effective correlation of data within big data and centralized processing of the correlated data, avoiding data omission, but also improves data processing efficiency.
  • In addition, the distributed data processing system supports multiple data processing modes; by determining the target data processing mode that matches the current active vertex set, the data processing performance of the distributed data processing system is improved, and the data processing efficiency is further improved.
  • For example, a Programmable Logic Device (PLD) such as a Field Programmable Gate Array (FPGA) is an integrated circuit whose logic function is determined by the user through programming the device; such programming is mostly implemented with "logic compiler" software, and the source code to be compiled is written in a Hardware Description Language (HDL), of which there are many kinds, the most commonly used at present being VHDL and Verilog.
  • The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory.
  • Those skilled in the art also know that, in addition to implementing the controller purely in the form of computer-readable program code, it is entirely possible, by logically programming the method steps, to have the controller realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for realizing various functions can also be regarded as structures within the hardware component; or even, the means for realizing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • a typical implementing device is a computer.
  • The computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • an embodiment of the present specification may be provided as a method, system or computer program product. Accordingly, an embodiment of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means, and the instruction means implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • Memory may include non-permanent storage in computer-readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read-only memory (ROM) or flash RAM.
  • Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridge, tape disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • As defined herein, computer-readable media exclude transitory computer-readable media, such as modulated data signals and carrier waves.
  • Embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • An embodiment of the present specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

本说明书实施例提供了一种分布式数据处理方法、装置及设备。其中方法包括:确定目标图数据中当前参与数据处理的活跃顶点集合;若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与任意活跃顶点具有关联关系的待更新顶点;根据第一分布式节点的外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。

Description

分布式数据处理 技术领域
本文件涉及数据处理技术领域,尤其涉及一种分布式数据处理方法、装置及设备。
背景技术
由于单机内的数据处理方式不涉及网络通信和数据交互,因此具有数据处理简单方便、性能高等优点。基于此优点,单机内的数据处理方式也成为了当前主流的数据处理方式。然而,随着互联网技术的飞速发展,各行各业的数据规模呈现剧增态势,数据处理需要较多资源,当前主流的单机内的数据处理方式由于资源有限,已无法满足当前大数据的处理需求。
发明内容
本说明书一实施例提供了一种分布式数据处理方法。该方法包括:确定目标图数据中当前参与数据处理的活跃顶点集合,其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素,所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点;若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
本说明书一实施例提供了一种分布式数据处理装置。该装置包括第一确定模块,确定目标图数据中当前参与数据处理的活跃顶点集合。其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素,所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点。该装置还包括第二确定模块,若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式。该装置还包括第三确定模块,根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点。该装置还包括发送模块,根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
本说明书一实施例提供了一种分布式数据处理设备。该设备包括处理器。该设备还包括被安排成存储计算机可执行指令的存储器。所述计算机可执行指令在被执行时使所述处理器确定目标图数据中当前参与数据处理的活跃顶点集合。其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素,所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点;若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
本说明书一实施例提供了一种存储介质。该存储介质用于存储计算机可执行指令。所述计算机可执行指令在被处理器执行时确定目标图数据中当前参与数据处理的活跃顶点集合。其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素,所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点;若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
附图说明
图1为本说明书一实施例提供的一种图数据的示意图;
图2为本说明书一实施例提供的一种分布式数据处理方法的场景示意图;
图3为本说明书一实施例提供的一种分布式数据处理方法的第一种流程示意图;
图4a和图4b为本说明书一实施例提供的一种分片数据对应的图数据的示意图;
图5为本说明书一实施例提供的一种分布式数据处理方法的第二种流程示意图;
图6为本说明书一实施例提供的一种分布式数据处理方法的第三种流程示意图;
图7为本说明书一实施例提供的一种分布式数据处理方法的第四种流程示意图;
图8为本说明书一实施例提供的一种分布式数据处理方法的第五种流程示意图;
图9为本说明书一实施例提供的一种分布式数据处理方法的第六种流程示意图;
图10为本说明书一实施例提供的一种分布式数据处理方法的第七种流程示意图;
图11为本说明书一实施例提供的一种分布式数据处理方法的第八种流程示意图;
图12为本说明书一实施例提供的一种分布式数据处理装置的模块组成示意图;
图13为本说明书一实施例提供的一种分布式数据处理设备的结构示意图。
具体实施方式
为了提升大数据的处理效率,本说明书中将大数据转换为图的形式进行处理,以对具有关联关系的数据进行关联和集中处理。具体的,根据多个关联事件的事件信息生成对应的图数据。其中,图数据包括多个顶点(vertex)和多条边(edge);事件信息包括相应事件的多个事件要素;图数据的每个顶点对应一个事件要素,图数据的每条边连接具有关联关系的顶点,由边相连的两个顶点互为邻居顶点。
图数据可以表示为G=(V,E),其中,G表示图数据,即一张图,V表示图G中所有顶点的集合,E表示图G中所有边的集合。u和v表示V中的任意两个顶点,即u,v∈V。任意两个顶点之间的边可以用e表示,例如顶点u和顶点v之间的边e可以表示为e=(u,v)。
边可以是有方向的,也可以是没方向的。有方向的边可以被称为有向边,没有方向的边可以被称为无向边。包括有向边的图可以被称为有向图,包括无向边的图可以被称为无向图。
有向边由入点指向出点,其中,入点还可以为称为源顶点(source,或简称src),出点还可以称为目标顶点(destination,或简称dst),后文中采用入点和出点的方式进行描述。对于入点来说,有向边可被称为入点的出边;对于出点来说,有向边可被称为出点的入边。例如,边e=(u,v)可以表示边e为由顶点u(入点)指向顶点v(出点)的一条有向边。对于顶点u来说,边e为出边,对于顶点v来说,边e为入边。
无向边可以转化为两个不同方向的有向边。例如,边e可以为顶点u和顶点v之间的一条无向边。边e可以转化为两条有向边e1和e2。其中,边e1可以为由顶点u指向顶点v的一条边,表示为e1=(u,v)。边e2可以为由顶点v指向顶点u的一条边,表示为e2=(v,u)。
由于无向边可以转换为有向边,故本说明书实施例中以包括有向边的图数据进行说明,对于包括无向边的图数据的处理方式可以参考该包括有向边的图数据的处理方式。为了便于理解,本说明书实施例提供了一种包括有向边的图数据的示意图,如图1所示,由图1可以看出,该图数据包括10个顶点和16条边。为了便于描述各顶点,本说明书中采用序号予以区分。
考虑到现有单机内的数据处理方式由于资源有限,而无法满足大数据的处理需求,基于此,本说明书实施例中采用分布式数据处理系统(以下简称为分布式系统)对上述图数据进行分布式处理。图2为本说明书一实施例提供的分布式数据处理方法的应用场景示意图,如图2所示,该场景包括:分布式系统中的n个分布式节点;其中,n为大于1的整数;分布式节点可以为手机、平板电脑、台式计算机、便携笔记本式计算机等(图2中仅示出台式计算机)等终端设备;分布式节点可以是服务器。
具体的,预先基于多个关联的目标事件的事件信息生成目标图数据。其中,目标图数据包括多个节点和多条边;事件信息包括相应目标事件的多个事件要素;目标图数据的每个顶点对应一个事件要素,目标图数据的每条边连接具有关联关系的顶点。为便于描述,将分布式系统中的分布式节点0称为第一分布式节点,该第一分布式节点迭代确定目标图数据中当前参与数据处理的活跃顶点集合,若确定第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;第一分布式节点根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据第一分布式节点外存中的该任意活跃 顶点的第一数据,向确定的待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。
可以理解的是,第一分布式节点不限为分布式节点0,其可以是分布式系统中的任一分布式节点。分布式系统中的每个分布式节点均按照上述第一分布式节点的数据处理方式对目标图数据进行处理。由此,由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
基于上述应用场景架构,本说明书一实施例提供了一种分布式数据处理方法。图3为本说明书一实施例提供的一种分布式数据处理方法的流程示意图,图3中的方法能够由图2中的第一分布式节点执行。如图3所示,该方法包括以下步骤。
步骤S102,确定目标图数据中当前参与数据处理的活跃顶点集合;其中,目标图数据是预先基于多个关联的目标事件的事件信息生成,事件信息包括相应目标事件的多个事件要素;目标图数据的每个顶点对应一个事件要素,目标图数据的每条边连接具有关联关系的顶点;由于多个关联的目标事件的各事件要素之间具有一定的依赖关系,即至少一个事件要素是基于另外的至少一个事件要素的改变而改变,本说明书中将该另外的至少一个事件要素所对应的顶点称为参与数据处理的活跃顶点,该活跃顶点组成活跃顶点集合。可以理解的是,活跃顶点集合中可以包括一个或多个活跃顶点。
通常的,由于目标图数据中包括多个顶点,因此对目标图数据的一次处理通常包括多轮迭代。对于第一轮迭代而言,用户可以操作某个分布式节点以输入活跃顶点集合,当该分布式节点获取到用户输入的活跃顶点集合后,向所在分布式系统中的每个分布式节点发送活跃顶点集合的相关消息,以使各分布式节点将该相关消息对应的活跃顶点集合,确定为目标图数据中当前参与数据处理的活跃顶点集合。在第一轮迭代处理完成后,即可自动根据处理结果更新活跃顶点集合,活跃顶点集合的更新过程可参见后文的相关描述。需要指出的是,对目标图数据的一次处理也可以仅包括一轮迭代,例如第一轮迭代后,确定不存在活跃顶点集合了,则确定对目标图数据的处理结束。
目标事件可以在实际应用中根据需要自行设定,对此本说明书不做具体限定。可以理解的是,事件信息和事件要素可以随目标事件的不同而不同。在一种可行的实施方式中,目标事件可以是资源转移事件,相应的,事件要素可以是资源转出账户、资源转入账户等,事件信息可以包括该资源转出账户的账户信息、该资源转入账户的账户信息等,边可以表示资源的转移路径。在另一种可行的实施方式中,目标事件可以是学术文件(例如论文、专利文献等)的引用事件,相应的,事件要素可以是引用学术文件、被引用学术文件等,事件信息可以包括该引用学术文件的文件信息、被引用学术文件的文件信息等;边可以表示不同学术文件之间的引用关系。在又一种可行的实施方式中,目标事件可以是产品的流转事件,相应的,事件要素可以包括产品流出地、产品流入地等,事件信息可以包括该产品流出地的信息、产品流入地的信息等;边可以表示产品的流转路径。对于目标事件的事件类型,本说明书中不再一一列举说明。可以理解的是,每个事件要素可以分为多个级别,以事件要素“产品流出地”为例进行说明,可以包括一级产品流出地(例如产品的生产厂家)、二级产品流出地(例如某省份的库房)、三级产品流出地(例如某城市的库房)、四级产品流出地(例如某城市某区域的库房)、五级产品流出地(例如某城市某区域的销售点)等。
步骤S104,若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式。
考虑到分布式节点的内存空间有限,为了实现对内存的扩展并提升数据处理效率,本说明书一实施例中,预先对目标图数据进行划分处理,以将目标图数据的各顶点和边分散的保存在各分布式节点的外存中,当第一分布式节点确定其外存中保存有活跃顶点集合中的任意活跃顶点时,进行后续处理。例如,活跃顶点集合包括顶点1、顶点3和顶点4;第一分布式节点的外存中保存有顶点3和顶点5,则第一分布式节点继续执行后续操作,以对顶点3的相关数据进行处理。
步骤S106,根据目标数据处理模式,确定与任意活跃顶点具有关联关系的待更新顶点。
本说明书提供的不同的数据处理模式中,对待更新顶点的确定方式不同,可参见后 文中的相关描述。
步骤S108,根据外存中的任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。
本说明书一实施例中,确定目标图数据中当前参与数据处理的活跃顶点集合,若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,避免数据遗漏,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
为了实现分布式数据处理,本说明书一实施例中,可以由指定设备按照预设的数据划分方式对目标图数据进行划分处理,并将划分处理得到的分片数据发送给分布式数据处理系统中相应的分布式节点。相应的,步骤S102之前还可以包括以下步骤。
接收指定设备发送的分片数据和目标图数据的属性信息;其中,该分片数据由指定设备按照预设的数据划分方式对目标图数据进行划分处理所得;将接收到的分片数据和属性信息保存至第一分布式节点的外存中。
本说明书一实施例中,还可以预先为分布式数据处理系统中的分布式节点分配预处理权限,并由具有该预处理权限的分布式节点对目标图数据进行划分处理后发送给分布式数据处理系统中的每个分布式节点。相应的,步骤S102之前还可以包括以下步骤。
若确定第一分布式节点具有预处理权限,则根据预设的数据划分方式对目标图数据进行划分处理,得到待分配给所在分布式系统中的每个分布式节点的分片数据;将分片数据和目标图数据的属性信息发送给分布式系统中的每个分布式节点,以使各分布式节点将接收到的分片数据和属性信息保存至外存中。
当第一分布式节点接收到其发送给自身的分片数据和目标图数据的属性信息时,将接收到的分片数据和属性信息保存至第一分布式节点的外存中。
上述的分片数据可以包括划分的顶点子集、该顶点子集中各顶点的入边所对应的入边集合、该顶点子集中各顶点的出边所对应的出边集合、该顶点子集中每个顶点的主备份、与该顶点子集中每个顶点构成有向边的顶点的镜像备份等;主备份包括相应顶点所对应的事件要素的要素数据,镜像备份用于传递消息。第二数据可以包括主备份。其中,要素数据随事件要素的不同而不同,例如事件要素为资源转出账户,要素数据可以包括该资源转出账户中的资源总量、每次转出资源时的资源转出数据,以及当其作为资源转入账户时的资源转入数据等。又如,事件要素为产品流出地,要素数据可以包括该产品流出地的产品总量,产品的流出数量、产品的流入地等。
目标图数据的属性信息可以包括目标图数据的边的第一数量、目标图数据中每个顶点的出边的第二数量等。
为便于理解,以图1所示的图数据为目标图数据,分布式系统包括分布式节点0和分布式节点1为例进行说明。预设的数据划分方式可以是连续块状划分方式,并将顶点0至顶点4划分给分布式节点0,将顶点5至顶点9划分给分布式节点1。相应的,分布式节点0的外存中保存的分片数据可以包括划分的顶点子集V_0{0,1,2,3,4},出边集合Eout_0{(0,1),(0,2),(1,3),(1,4),(2,3),(2,4),(5,4)}入边集合Ein_0{(1,0),(2,0),(5,0),(6,0),(3,1),(4,1),(3,2),(4,2),(7,2),(9,4)}。分布式节点1的外存中保存的分片数据可以包括划分的顶点子集V_1{5,6,7,8,9},出边集合Eout_1{(0,5),(0,6),(2,7),(4,9),(5,7),(5,8),(6,7),(6,8),(7,9)},入边集合Ein_1{(4,5),(7,5),(8,5),(7,6),(8,6),(9,7)}。其中,出边集合中,每条边的表示方式可以为入点在前,出点在后;入边集合中,没条边的表示方式可以为出点在前,入点在后;可以理解的是,其均表示由入点指向出点的边。分布式节点0和分布式节点1的外存中保存的分片数据可以分别对应图4a和图4b所示的图数据,其中,白色的顶点为主备份对应的顶点,也即分布式节点外存中保存的顶点子集中的顶点;黑色的顶点表示镜像备份对 应的顶点。
通过对目标图数据进行划分处理,并将划分得到的分片数据保存至分布式节点的外存中,不仅实现了对内存的有效扩展;而且各分布式节点能够并行的进行数据处理,相较于现有的单机内存的数据处理方式,既提升了数据处理效率,又降低了单机的数据处理压力,能够满足大数据的处理需求。需要指出的是,在实际应用中,也可以人为对目标图数据进行划分处理,并将划分得到的分片数据和目标图数据的属性信息预置在每个分布式节点的外存中。
为了提升当前参与数据处理的活跃顶点集合的处理效率,本说明书一实施例中,基于活跃顶点集合的稠密度确定相匹配的目标数据处理方式。具体的,如图5所示,步骤S104可以包括以下步骤S104-2和步骤S104-4:步骤S104-2,若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则根据预设的计算方式计算活跃顶点集合的稠密度;具体的,统计活跃顶点集合中活跃顶点的第三数量,根据第一分布式节点的外存中保存的第二数量统计活跃顶点集合中各活跃顶点的出边的总数量,将总数量确定为第四数量;根据预设的计算方式,基于第三数量和第四数量计算活跃顶点集合的稠密度。
其中,根据预设的计算方式,基于第三数量和第四数量计算活跃顶点集合的稠密度可以包括:计算第三数量与第四数量的加和,将计算结果确定为活跃顶点集合的稠密度。
本说明书实施例提供的各数据模式中,根据对目标图数据的搜索路径的不同,可以分为两种搜索方式:广度优先搜索和深度优先搜索,对于广度优先搜索和深度优先搜索的具体搜索过程可参考现有技术,这里不再赘述。本说明书中以广度优先搜索为例进行说明,例如,在第一轮迭代过程中,确定的活跃顶点集合为{0},即活跃顶点集合仅包括顶点0,统计活跃顶点集合中活跃顶点的第三数量为1,根据第一分布式节点的外存中保存的第二数量统计活跃顶点集合中各活跃顶点的出边的总数量为顶点0的出边的总数量4,计算得到的活跃顶点集合的稠密度为1+4=5。又如,在第一轮迭代结束后,基于如1所示的目标图数据确定的第二轮迭代对应的活跃顶点集合为{1,2,5,6},即活跃顶点集合仅包括顶点1、顶点2、顶点5和顶点6,统计的第三数量为4,第四数量为10,计算的稠密度为4+10=14。
步骤S104-4,根据计算的稠密度,确定预设的推动数据处理模式和拉动数据处理模式中与活跃顶点集合相匹配的目标数据处理模式。
具体的,根据第一分布式节点的外存中保存的第一数量确定比对稠密度,并确定活跃顶点集合的稠密度是否不小于该比对稠密度;若是,则将拉动数据处理模式确定为目标数据处理模式;若否,则将推动数据处理模式确定为目标数据处理模式。其中,推动数据处理模式还可以称为push模式,拉动数据处理模式还可以称为pull模式。
根据第一分布式节点的外存中保存的第一数量确定比对稠密度,可包括:计算第一数量与预设值的比例,并将计算结果确定为比对稠密度。预设值可以在实际应用中根据目标图数据的规模自行设定,例如,基于图1所示的目标图数据,预设值为2,则第一轮迭代过程中,计算的比对稠密度为16/2=8,由于计算的活跃顶点集合的稠密度5小于8,因此确定目标数据处理模式为推动数据处理模式。又如,在第二轮迭代过程中,计算的活跃顶点集合的稠密度14大于8,因此确定目标数据处理模式为拉动数据处理模式。
通过计算活跃顶点集合的稠密度,并根据该稠密度确定目标数据处理模式,能够提升活跃顶点集合的处理效率。
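As an editorial illustration of the density-based mode selection described above (not part of the original specification), the rule can be sketched in a few lines of Python; the worked numbers reproduce the example given in the text, and all identifiers are assumptions.

```python
# Hedged sketch: density = number of active vertices + their total out-degree,
# compared against (total edge count / preset value) to pick push or pull mode.
def choose_mode(active_vertices, out_degree, first_number, preset_value=2):
    third_number = len(active_vertices)                           # active vertex count
    fourth_number = sum(out_degree.get(v, 0) for v in active_vertices)
    density = third_number + fourth_number
    compare_density = first_number / preset_value
    return "pull" if density >= compare_density else "push"

# Worked numbers from the example graph: round 1 -> push, round 2 -> pull
out_degree = {0: 4, 1: 2, 2: 3, 5: 3, 6: 2}
assert choose_mode({0}, out_degree, first_number=16) == "push"           # 1 + 4 = 5 < 8
assert choose_mode({1, 2, 5, 6}, out_degree, first_number=16) == "pull"  # 4 + 10 = 14 >= 8
```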
进一步的,如图6所示,当目标数据处理模式为推动数据处理模式时,步骤S106可以包括以下步骤S106-2至步骤S106-4:步骤S106-2,若确定目标数据处理模式为推动数据处理模式,则根据第一分布式节点的外存中保存的出边集合和镜像备份,确定保存的任意活跃顶点作为入点时所对应的目标出点;仍以图1所示的图数据为目标图数据,分布式系统包括前述分布式节点0和分布式节点1为例进行说明,在第一轮迭代过程中,活跃顶点集合为{0},由于顶点0保存在分布式节点0中,因此在步骤S104中,分布式节点1确定其外存中未保存活跃顶点集合中的任意活跃顶点,不做任何处理。分布式节点0在步骤S104中确定其外存中保存有活跃顶点集合中的活跃顶点,且根据计算的稠密度确定目标数据处理模式为推动数据处理模式,因此,在步骤S106-2中,根据分布式节点0的外存中保存的出边集合和镜像备份,确定顶点0作为入点时,对应的目标出点为顶点1、顶点2、顶点5和顶点6。
步骤S106-4,将目标出点确定为待更新顶点;例如,将步骤S106-2中确定的顶点1、顶点2、顶点5和顶点6确定为待更新顶点。
与上述步骤S106-2和步骤S106-4对应的,如图6所示,步骤S108可以包括以下步骤S108-2至步骤S108-6:步骤S108-2,从第一分布式节点的外存中获取保存的任意活跃顶点的第一数据;步骤S108-4,确定待更新顶点和待更新顶点的镜像备份所在的目标 分布式节点;具体的,可以预先建立每个顶点与顶点所在的分布式节点的关联关系,以及预先建立每个顶点的镜像备份与镜像备份所在分布式节点的关联关系;并将建立的关联关系保存至每个分布式节点中,第一分布式节点根据该关联关系,确定待更新顶点所在的目标分布式节点以及待更新顶点的镜像备份所在的目标分布式节点。
序接前述第一轮迭代的示例,分布式节点0根据保存的关联关系,确定待更新顶点1和顶点2在分布式节点0中,待更新顶点5和顶点6在分布式节点1中,顶点2的镜像备份在分布式节点1中,顶点5和顶点6的镜像备份在分布式节点0中,则将分布式节点0和分布式节点1确定为目标分布式节点。
步骤S108-6,根据保存的任意活跃顶点的顶点信息和外存中该活跃顶点的第一数据,向目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。
其中,顶点信息可以是顶点标识,还可以是顶点对应的事件要素。以目标事件为资源转移为例进行说明,活跃顶点可以对应资源转出账户,活跃顶点的顶点信息可以是资源转出账户的账户信息,待更新顶点可以对应资源转入账户,第一数据可以包括向每个资源转入账户转入的资源的数量等数据,需要指出的是,第一轮迭代时,用户操作的分布式节点可以将用户输入的第一数据保存至其外存中活跃顶点的主备份中。序接前述第一次迭代的示例,分布式节点0根据顶点0的顶点信息、向顶点1对应的资源转入账户转移的资源的数量、向顶点2对应的资源转入账户转移的资源的数量、向顶点5对应的资源转入账户转移的资源的数量、向顶点6对应的资源转入账户转移的资源的数量等,发送第一更新消息给分布式节点0;以及,分布式节点0根据顶点0的顶点信息、向顶点2对应的资源转入账户转移的资源的数量、向顶点5对应的资源转入账户转移的资源的数量、向顶点6对应的资源转入账户转移的资源的数量等,发送第一更新消息给分布式节点1。
由此,在确定目标数据处理模式为推动数据处理模式时,可以沿着活跃顶点的出边进行数据更新处理,实现了活跃顶点作为入点时所对应的目标出点的数据更新。
进一步的,由于第一分布式节点可能是目标分布式节点,因此,如图7所示,方法还可以包括以下步骤S110至步骤S114。
步骤S110,若接收到第一分布式节点和/或其他分布式节点发送的第一更新消息,则将第一更新消息中的顶点信息对应的活跃顶点确定为目标活跃顶点;由于在不同的迭代过程中,活跃顶点可能分散的保存在多个分布式节点中,因此,第一分布式节点可能仅接收到第一分布式节点发送的第一更新消息,也可能仅接收到其他分布式节点发送的第一更新消息,也可能同时接收到第一分布式节点和其他分布式节点发送的第一更新消息。
步骤S112,根据第一分布式节点的外存中保存的出边集合,确定目标活跃顶点作为入点时所对应的至少一个目标出点;步骤S114,根据第一更新消息中的第一数据,对第一分布式节点的外存中目标出点的第二数据进行更新处理。
其中,确定的目标出点即前述的待更新顶点。序接前述第一次迭代的示例,分布式节点0根据接收到的第一更新消息,将顶点0确定为目标活跃顶点,根据分布式节点0的外存中保存的出边集合,确定的目标出点为顶点1和顶点2,根据第一更新消息中相应的第一数据,对分布式节点0的外存中顶点1和顶点2的第二数据进行更新处理,例如分别在顶点1和顶点2的第二数据中新增资源转移数据,并更新顶点1和顶点2所对应的资源账户中的资源总量等。分布式节点1根据接收到的第一更新消息,将顶点0确定为目标活跃顶点,根据分布式节点1的外存中保存的出边集合,确定的目标出点为顶点5和顶点6,根据第一更新消息中相应的第一数据,对分布式节点1的外存中顶点5和顶点6的第二数据进行更新处理,例如分别在顶点5和顶点6的第二数据中新增资源转移数据。
由于在推动数据处理模式中,分布式节点在接收到第一更新消息时,仅对基于其出边集合所确定的目标出点(即待更新顶点)的第二数据进行更新处理,而无需对第一更新消息对应的镜像备份进行更新处理(例如,分布式节点0无需对其外存中节点5和节点6的镜像备份进行更新处理),因此,节省了内存使用与同步时间,并且第一更新消息中的镜像备份对应的待更新顶点的相关数据可实时使用而不需要落盘,节省了外存IO的巨大开销。
进一步的,考虑到对于同一分布式节点而言,可能会同时对多个待更新顶点的第二数据进行更新处理,为了消除推送数据处理模式引入的同步开销,并提升更新效率,本说明书一实施例,预先为每个顶点分配对应的线程,并基于该线程对相应顶点的第二数据进行更新处理。即步骤S114可以包括以下步骤S114-2和步骤S114-4。
步骤S114-2,确定每个目标出点对应的目标线程。
具体的,预先建立各顶点的顶点信息与线程信息的对应关系,并将该对应关系保存至相应分布式节点的外存中;分布式节点根据该对应关系确定每个目标出点对应的目标线程。
步骤S114-4,将第一更新消息发送给对应的目标线程,以使目标线程根据第一更新消息中的第一数据,对第一分布式节点的外存中相应目标出点的第二数据进行更新处理。
考虑到可能会出现多个活跃顶点同时更新同一个目标出点的情况,为了避免数据竞争,本说明书一实施例中,步骤S114-4可包括:确定目标出点的第二数据是否处于加锁状态;若是,则将第一更新消息保存至相应目标出点的消息队列中,以使目标出点对应的目标线程在对第二数据进行解锁处理后从对应的消息队列中获取第一更新消息;并对第二数据进行加锁处理后,根据第一更新消息中的第一数据对第二数据进行更新处理。
若否,则将第一更新消息发送给对应的目标线程,以使目标线程对目标出点的第二数据进行加锁处理后,根据第一更新消息中的第一数据对目标出点的第二数据进行更新处理,并在更新完成后对第二数据进行解锁处理。
也就是说,在推动数据处理模式中,对于每个线程而言,在对对应的第二数据进行更新处理时,首先对第二数据进行加锁处理,以使第二数据处于锁定状态;并在加锁处理后,对第二数据进行更新,以及在更新完成后,对第二数据进行解锁处理,以使第二数据处于解锁状态。只有在第二数据处于解锁状态时,才可进行下一次的更新处理。由此,通过对第二数据进行加锁和解锁,避免了多个活跃顶点同时对同一个目标出点的第二数据进行更新时的数据竞争。
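As an editorial illustration of the per-vertex locking and message-queue behaviour described above (not part of the original specification), a simplified Python sketch follows; real implementations would need stricter handling of the queue drain, and all names are assumptions.

```python
# Hedged sketch: each target out-point's second data is guarded by its own lock;
# messages arriving while the data is locked are parked in a per-vertex queue.
import threading
from collections import defaultdict, deque

locks = defaultdict(threading.Lock)          # one lock per target out-point
queues = defaultdict(deque)                  # pending first update messages
second_data = defaultdict(float)

def deliver(target_vertex, first_data):
    lock = locks[target_vertex]
    if not lock.acquire(blocking=False):     # second data currently locked
        queues[target_vertex].append(first_data)   # park the message in the queue
        return
    try:
        second_data[target_vertex] += first_data   # update while holding the lock
        while queues[target_vertex]:               # drain messages queued meanwhile
            second_data[target_vertex] += queues[target_vertex].popleft()
    finally:
        lock.release()

deliver(3, 2.5)   # e.g. two active vertices pushing to the same out-point
deliver(3, 1.5)
```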
当待更新顶点对应的第二数据更新完成后,与待更新顶点具有关联关系的其他顶点可能随之更新,为了实现各数据的有效更新,本说明书一实施例中,如图8所示,步骤S108之后,还可以包括以下步骤S116至步骤S120。
步骤S116,若接收到第一更新消息,则在根据该第一更新消息对相应待更新顶点的第二数据更新处理完成后,确定是否满足预设的迭代停止条件。
其中,当目标数据处理模式是推动数据处理模式时,根据第一更新消息进行更新处理的过程可参见前述步骤S110至步骤S114的实现过程;当目标数据处理模式是拉动数据处理模式时,根据第一更新消息进行更新处理的过程可参见后文的相关描述;重复之处这里不再赘述。
预设的迭代停止条件可以在实际应用中根据需要自行设定,例如当迭代次数到达预设的迭代次数阈值时,确定满足预设的迭代停止条件;又如,当确定活跃顶点集合为空时,确定满足预设的迭代停止条件。再如,预先设定迭代次数阈值和活跃顶点集合为空两个条件,当满足之一时,确定满足预设的迭代停止条件。
步骤S118,若否,则将待更新顶点确定为新的活跃顶点,根据新的活跃顶点的顶点信息,向所在分布式系统中的每个分布式节点发送第二更新消息。
步骤S120,接收目标分布式节点发送的第二更新消息,将接收到的第二更新消息所对应的新的活跃顶点,确定为目标图数据中当前参与数据处理的活跃顶点集合,返回步骤S104。
可以理解的是,当第一分布式节点确定的目标分布式节点仅包括第一分布式节点时,第一分布式节点接收第一分布式节点发送的第二更新消息;当第一分布式节点确定的目标分布式节点包括第一分布式节点和分布式系统中的其他分布式节点时,第一分布式节点接收第一分布式节点和该其他分布式节点发送的第二更新消息;当第一分布式节点确定的目标分布式节点仅包括分布式系统中的其他分布式节点时,第一分布式节点接收该其他分布式节点发送的第二更新消息。
序接前述第一轮迭代的示例,分布式节点0在对顶点1和顶点2的第二数据进行更新处理后,将顶点1和顶点2确定为新的活跃顶点,并根据顶点1和顶点2的顶点信息向所在分布式系统中的每个分布式节点发送第二更新消息;分布式节点1在对顶点5和顶点6的第二数据进行更新处理后,将顶点5和顶点6确定为新的活跃顶点,并根据顶点5和顶点6的顶点信息向所在分布式系统中的每个分布式节点发送第二更新消息;分布式节点0和分布式节点1均根据接收到的各第二个更新消息,将顶点1、顶点2、顶点5和顶点6组成的顶点集合确定为目标图数据中当前参与数据处理的活跃顶点集合;即在第二轮迭代过程中的活跃顶点集合为{1,2,5,6}。
由此,通过发送第二更新消息,能够使分布式系统中的每个分布式节点在下一轮迭代处理时,均能够确定目标图数据中当前参与数据处理的活跃顶点集合,从而进行后续处理。
进一步的,当目标数据处理模式为拉动数据处理模式,如图9所示,步骤S106可以包括以下步骤S106-6至步骤S106-10:步骤S106-6,若确定目标数据处理模式为拉动数 据处理模式,则将第一分布式节点的外存中保存的入边集合中,每条入边对应的出点确定为待处理顶点;步骤S106-8,根据第一分布式节点的外存中保存的入边集合,确定每个待处理顶点作为出点时所对应的目标入点;步骤S106-10,确定保存的任意活跃顶点中是否包括目标入点,若是,则将相应的待处理顶点确定为待更新顶点。
序接前述示例,在第一轮迭代处理结束后,分布式节点0确定第二轮迭代处理的活跃顶点集合为{1,2,5,6}。分布式节点0确定其外存中保存有活跃顶点1和活跃顶点2,则计算活跃顶点集合的稠密度,并根据稠密度确定目标数据处理模式为拉动数据处理模式;分布式节点0根据其外存中保存的入边集合,将顶点1至顶点7以及顶点9确定为待处理顶点;分布式节点0根据其外存中保存的入边集合,确定顶点1作为出点时所对应的目标入点为顶点0,由于分布式节点0当前保存的活跃顶点是顶点1和顶点2,并未包括顶点0,因此确定顶点1不是待更新顶点;同理,分布式节点0确定顶点2作为出点时所对应的目标入点为顶点0,由于分布式节点0当前保存的活跃顶点未包括顶点0,则确定顶点2不是待更新顶点;分布式节点0确定顶点3作为出点时所对应的目标入点为顶点1和顶点2,由于分布式节点0当前保存的活跃顶点是顶点1和顶点2,则确定顶点3是待更新顶点;分布式节点0确定顶点4作为出点时所对应的目标入点为顶点1和顶点2,由于分布式节点0当前保存的活跃顶点是顶点1和顶点2,则确定顶点4是待更新顶点;分布式节点0确定顶点5作为出点时所对应的目标入点为顶点0,由于分布式节点0当前保存的活跃顶点未包括顶点0,则确定顶点5不是待更新顶点;分布式节点0确定顶点6作为出点时所对应的目标入点为顶点0,由于分布式节点0当前保存的活跃顶点未包括顶点0,则确定顶点6不是待更新顶点;分布式节点0确定顶点7作为出点时所对应的目标入点为顶点2,由于分布式节点0当前保存的活跃顶点包括顶点2,则确定顶点7是待更新顶点;分布式节点0确定顶点9作为出点时所对应的目标入点为顶点4,由于分布式节点0当前保存的活跃顶点未包括顶点4,则确定顶点9不是待更新顶点。
在第一轮迭代处理结束后,分布式节点1确定第二轮迭代处理的活跃顶点集合为{1,2,5,6}。分布式节点1确定其外存中保存有活跃顶点5和活跃顶点6,则计算活跃顶点集合的稠密度,并根据稠密度确定目标数据处理模式为拉动数据处理模式;分布式节点1根据其外存中保存的入边集合,将顶点4、顶点7至顶点9确定为待处理顶点;分布式节点1根据其外存中保存的入边集合,确定顶点4作为出点时所对应的目标入点为顶点5,由于分布式节点1当前保存的活跃顶点是顶点5和顶点6,包括顶点5,因此确定顶点4是待更新顶点;同理,分布式节点1确定顶点7作为出点时所对应的目标入点为顶点5和顶点6,由于分布式节点1当前保存的活跃顶点是顶点5和顶点6,则确定顶点7是待更新顶点;分布式节点1确定顶点8作为出点时所对应的目标入点为顶点5和顶点6,由于分布式节点1当前保存的活跃顶点是顶点5和顶点6,则确定顶点8是待更新顶点;分布式节点1确定顶点9作为出点时所对应的目标入点为顶点7,由于分布式节点1当前保存的活跃顶点未包括顶点7,则确定顶点9不是待更新顶点。
与上述步骤S106-6至步骤S106-10对应的,如图9所示,步骤S108包括以下步骤S108-8和步骤S108-10:步骤S108-8,根据第一分布式节点的外存中保存的任意活跃顶点的第一数据,生成相应待更新顶点的临时更新数据;步骤S108-10,根据待更新顶点的顶点信息和临时更新数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。
序接前述示例,分布式节点0根据其外存中保存的活跃顶点1和活跃顶点2的第一数据,生成顶点3的临时更新数据;根据其外存中保存的活跃顶点1和活跃顶点2的第一数据,生成顶点4;其外存中保存的活跃顶点2的第一数据,生成顶点7的临时更新数据;以及,根据顶点3和顶点4的顶点信息、生成的顶点3的临时更新数据和生成的顶点4的临时更新数据,向顶点3和顶点4所在的分布式节点0发送第一更新消息;根据顶点7的顶点信息、生成的顶点7的临时更新数据,向顶点7所在的分布式节点1发送第一更新消息。
分布式节点1根据其外存中保存的活跃顶点5的第一数据,生成顶点4的临时更新数据;根据其外存中保存的活跃顶点5和活跃顶点6的第一数据,分别生成顶点7和顶点8的临时更新数据;以及,根据顶点4的顶点信息和生成的顶点4的临时更新数据,向顶点4所在的分布式节点0发送第一更新消息;根据顶点7和顶点8的顶点信息、生成的顶点7和顶点8的临时更新数据,向顶点7和顶点8所在的分布式节点1发送第一更新消息。
由此,当确定目标数据处理模式是拉动数据处理模型时,基于入边集合确定待更新 顶点以及待更新顶点的临时更新数据,并将该临时更新数据发送给待更新顶点所在的目标分布式节点,实现了该待更新顶点的数据更新。
进一步的,由于第一分布式节点可能是待更新顶点所在的目标分布式节点时,因此,如图10所示,步骤S108-10之后还可以包括以下步骤S122和步骤S124:步骤S122,若接收到第一分布式节点和/或其他分布式节点发送的第一更新消息,则根据第一更新消息中的顶点信息确定待更新顶点;步骤S124,根据第一更新消息中的临时更新数据,对第一分布式节点的外存中待更新顶点的第二数据进行更新处理。
序接前述示例,当分布式节点0接收到分布式节点0发送的第一更新消息时,确定待更新顶点为顶点3和顶点4,根据第一更新消息中的临时更新数据,更新其外存中顶点3的第二数据,以及更新其外存中顶点4的第二数据,例如将临时更新数据包括的顶点3所对应的资源转移账户的资源转移数据保存至顶点3的第二数据中,并更新顶点3对应的资源账户中的资源总量;将临时更新数据包括的顶点4所对应的资源转移账户的资源转移数据保存至顶点4的第二数据中,并更新顶点4对应的资源账户中的资源总量等。以及,当分布式节点0接收到分布式节点1发送的第一更新消息时,确定待更新顶点为顶点4,根据第一更新消息中的临时更新数据,继续更新其外存中顶点4的第二数据。同理,分布式节点1根据接收到的各第一更新消息,确定待更新顶点为顶点7和顶点8,根据第一更新消息中的临时更新数据,更新其外存中顶点7和顶点8的第二数据。
为了完成所有待更新数据的更新处理,在步骤S124之后,依然可以执行前述步骤S116至步骤S120,以确定第三轮迭代处理时,目标图数据中参与数据处理的活跃顶点集合。序接前述示例,则分布式节点0将顶点3和顶点4确定为新的活跃顶点,并根据顶点3和顶点4的顶点信息向分布式系统中每个分布式节点发送第二更新消息;分布式节点1将顶点7和顶点8确定为新的活跃顶点,并根据顶点3和顶点4的顶点信息向分布式系统中每个分布式节点发送第二更新消息;每个分布节点根据接收到的各第二更新消息,确定当前参与数据处理的活跃顶点集合为{3,4,7,8}。
为了更好的实现上述数据处理过程,本说明书一实施例中,预先设定第一函数和第二函数,并基于该第一函数和第二函数进行前述的待更新顶点的确定以及第二数据的更新处理。具体的,如图11所示,步骤S106可以包括以下步骤S1060,步骤S108可以包括以下步骤S1080:步骤S1060,调用第一函数,基于第一函数根据目标数据处理模式,确定与所述任意活跃顶点具有关联关系的待更新顶点;步骤S1080,基于第一函数根据第一分布式节点的外存中所述任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点调用第二函数,并基于第二函数根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。
其中,第一函数还可以称为信号函数(signal),第二函数还可以称为槽函数(slot)。
需要指出的是,上述第一分布式节点确定的目标分布式节点包括该第一分布式节点时,均向自身发送第一更新消息和第二更新消息;在实际应用中,第一分布式节点也可以不向自身发送数据。
本说明书一实施例中,确定目标图数据中当前参与数据处理的活跃顶点集合,若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,避免数据遗漏,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
对应上述描述的分布式数据处理方法,基于相同的技术构思,本说明书一实施例还提供一种分布式数据处理装置。图12为本说明书一实施例提供的一种分布式数据处理装置的模块组成示意图,如图12所示,该装置包括:第一确定模块201,确定目标图数据中当前参与数据处理的活跃顶点集合;其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素;所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系 的所述顶点;第二确定模块202,若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;第三确定模块203,根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;发送模块204,根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
可选地,所述装置还包括:接收模块;所述接收模块,接收指定设备发送的分片数据和目标图数据的属性信息,所述分片数据由所述指定设备按照预设的数据划分方式对所述目标图数据进行划分处理所得;将所述分片数据和所述属性信息保存至所述第一分布式节点的外存中;或者,所述装置还包括:划分模块;所述划分模块,若确定所述第一分布式节点具有预处理权限,则根据预设的数据划分方式对所述目标图数据进行划分处理,得到待分配给所在分布式系统中的每个分布式节点的分片数据;以及,将所述分片数据和所述目标图数据的属性信息发送给所述分布式系统中的每个分布式节点,以使所述分布式节点将所述分片数据和所述属性信息保存至外存中。
可选地,所述顶点包括入点和出点,将所述目标图数据中的每条边确定为有向边,所述有向边由所述入点指向所述出点;所述有向边是所述入点的出边,所述有向边是所述出点的入边;所述分片数据包括划分的顶点子集、所述顶点子集中各顶点的入边所对应的入边集合、所述顶点子集中各顶点的出边所对应的出边集合、所述顶点子集中每个顶点的所述主备份、与所述顶点子集中每个顶点构成所述有向边的顶点的镜像备份;其中,所述主备份包括相应顶点所对应的事件要素的要素数据,所述镜像备份用于传递消息;所述属性信息包括所述目标图数据的边的第一数量、所述目标图数据中每个顶点的出边的第二数量。
可选地,所述第三确定模块203,若所述目标数据处理模式为推动数据处理模式,则根据所述第一分布式节点的外存中保存的所述出边集合和所述镜像备份,确定所述任意活跃顶点作为所述入点时所对应的目标出点;将所述目标出点确定为所述待更新顶点;相应的,所述发送模块204,从所述第一分布式节点的外存中获取所述任意活跃顶点的第一数据;以及,确定所述待更新顶点和所述待更新顶点的镜像备份所在的目标分布式节点;根据所述任意活跃顶点的顶点信息和所述第一数据,向所述目标分布式节点发送第一更新消息。
可选地,所述装置还包括:第一更新模块;所述第一更新模块,若接收到所述第一分布式节点和/或其他分布式节点发送的所述第一更新消息,则将所述第一更新消息中的顶点信息对应的活跃顶点确定为目标活跃顶点;以及,根据所述第一分布式节点的外存中保存的所述出边集合,确定所述目标活跃顶点作为所述入点时所对应的至少一个目标出点;根据所述第一更新消息中的所述第一数据,对所述第一分布式节点的外存中所述目标出点的第二数据进行更新处理。
可选地,所述第三确定模块203,若所述目标数据处理模式为拉动数据处理模式,则将所述第一分布式节点的外存中保存的所述入边集合中,每条入边对应的出入点确定待处理顶点;以及,根据所述第一分布式节点的外存中保存的所述入边集合,确定每个所述待处理顶点作为出点时所对应的目标入点;确定所述任意活跃顶点中是否包括所述目标入点;若是,则将相应的所述待处理顶点确定为待更新顶点。
相应的,所述发送模块204,根据所述第一分布式节点的外存中的所述任意活跃顶点的第一数据,生成相应待更新顶点的临时更新数据;根据所述待更新顶点的顶点信息和所述临时更新数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息。
可选地,所述装置还包括:第二更新模块;所述更新模块,若接收到所述第一分布式节点或其他分布式节点发送的所述第一更新消息,则根据所述第一更新消息中的顶点信息确定所述待更新顶点;以及,根据所述第一更新消息中的临时更新数据,对所述第一分布式节点的外存中所述待更新顶点的第二数据进行更新处理。
本说明书一实施例提供的分布式数据处理装置,确定目标图数据中当前参与数据处理的活跃顶点集合,若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,避免数 据遗漏,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
需要说明的是,本说明书中关于分布式数据处理装置的实施例与本说明书中关于分布式数据处理方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的分布式数据处理方法的实施,重复之处不再赘述。
进一步地,对应上述描述的分布式数据处理方法,基于相同的技术构思,本说明书一实施例还提供一种分布式数据处理设备,该设备用于执行上述的分布式数据处理方法,图13为本说明书一实施例提供的一种分布式数据处理设备的结构示意图。
如图13所示,分布式数据处理设备可因配置或性能不同而产生比较大的差异,可包括一个或以上的处理器301和存储器302,存储器302中可以存储有一个或以上存储应用程序或数据。其中,存储器302可以是短暂存储或持久存储。存储在存储器302的应用程序可以包括一个或以上模块(图示未示出),每个模块可以包括分布式数据处理设备中的一系列计算机可执行指令。更进一步地,处理器301可以设置为与存储器302通信,在分布式数据处理设备上执行存储器302中的一系列计算机可执行指令。分布式数据处理设备还可以包括一个或以上电源303,一个或以上有线或无线网络接口304,一个或以上输入输出接口305,一个或以上键盘306等。
在一个具体的实施例中,分布式数据处理设备包括有存储器,以及一个或以上的程序,其中一个或者以上程序存储于存储器中,且一个或者以上程序可以包括一个或以上模块,且每个模块可以包括对分布式数据处理设备中的一系列计算机可执行指令,且经配置以由一个或者以上处理器执行该一个或者以上程序包含用于进行以下计算机可执行指令:确定目标图数据中当前参与数据处理的活跃顶点集合;其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素;所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点;若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
可选地,计算机可执行指令在被执行时,所述根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点,包括:若确定所述目标数据处理模式为所述推动数据处理模式,则根据所述第一分布式节点的外存中保存的所述出边集合和所述镜像备份,确定所述任意活跃顶点作为所述入点时所对应的目标出点将所述目标出点确定为所述待更新顶点;所述根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,包括:从所述第一分布式节点的外存中获取所述任意活跃顶点的第一数据;确定所述待更新顶点和所述待更新顶点的镜像备份所在的目标分布式节点;根据所述任意活跃顶点的顶点信息和所述第一数据,向所述目标分布式节点发送第一更新消息。
可选地,计算机可执行指令在被执行时,所述方法还包括:若接收到所述第一分布式节点和/或其他分布式节点发送的所述第一更新消息,则将所述第一更新消息中的顶点信息对应的活跃顶点确定为目标活跃顶点;根据所述第一分布式节点的外存中保存的所述出边集合,确定所述目标活跃顶点作为所述入点时所对应的至少一个目标出点;根据所述第一更新消息中的所述第一数据,对所述第一分布式节点的外存中所述目标出点的第二数据进行更新处理。
可选地,计算机可执行指令在被执行时,所述根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点,包括:若所述目标数据处理模式为所述拉动数据处理模式,则将所述第一分布式节点的外存中保存的所述入边集合中,每条入边对应的出入点确定待处理顶点;根据所述第一分布式节点的外存中保存的所述入边集合,确定每个所述待处理顶点作为出点时所对应的目标入点;确定所述任意活跃顶点中是否包括所述目标入点;若是,则将相应的所述待处理顶点确定为待更新顶点。
可选地,计算机可执行指令在被执行时,所述根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,包括:根据 所述第一分布式节点的外存中的所述任意活跃顶点的第一数据,生成相应待更新顶点的临时更新数据;根据所述待更新顶点的顶点信息和所述临时更新数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息。
本说明书一实施例提供的分布式数据处理设备,确定目标图数据中当前参与数据处理的活跃顶点集合;若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,避免数据遗漏,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
需要说明的是,本说明书中关于分布式数据处理设备的实施例与本说明书中关于分布式数据处理方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的分布式数据处理方法的实施,重复之处不再赘述。
进一步地,对应上述描述的分布式数据处理方法,基于相同的技术构思,本说明书一实施例还提供了一种存储介质,用于存储计算机可执行指令,一个具体的实施例中,该存储介质可以为U盘、光盘、硬盘等,该存储介质存储的计算机可执行指令在被处理器执行时,能实现以下流程:确定目标图数据中当前参与数据处理的活跃顶点集合;其中,所述目标图数据是预先基于多个关联的目标事件的事件信息生成,所述事件信息包括相应目标事件的多个事件要素;所述目标图数据的每个顶点对应一个所述事件要素,所述目标图数据的每条边连接具有关联关系的所述顶点;若第一分布式节点的外存中保存有所述活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与所述活跃顶点集合相匹配的目标数据处理模式;根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点;根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,以使所述目标分布式节点根据所述第一更新消息对其外存中的所述待更新顶点的第二数据进行更新处理。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点,包括:若确定所述目标数据处理模式为所述推动数据处理模式,则根据所述第一分布式节点的外存中保存的所述出边集合和所述镜像备份,确定所述任意活跃顶点作为所述入点时所对应的目标出点;将所述目标出点确定为所述待更新顶点;所述根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息,包括:从所述第一分布式节点的外存中获取所述任意活跃顶点的第一数据;确定所述待更新顶点和所述待更新顶点的镜像备份所在的目标分布式节点;根据所述任意活跃顶点的顶点信息和所述第一数据,向所述目标分布式节点发送第一更新消息。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述方法还包括:若接收到所述第一分布式节点和/或其他分布式节点发送的所述第一更新消息,则将所述第一更新消息中的顶点信息对应的活跃顶点确定为目标活跃顶点;根据所述第一分布式节点的外存中保存的所述出边集合,确定所述目标活跃顶点作为所述入点时所对应的至少一个目标出点;根据所述第一更新消息中的所述第一数据,对所述第一分布式节点的外存中所述目标出点的第二数据进行更新处理。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述根据所述目标数据处理模式,确定与所述任意活跃顶点具有所述关联关系的待更新顶点,包括:若所述目标数据处理模式为所述拉动数据处理模式,则将所述第一分布式节点的外存中保存的所述入边集合中,每条入边对应的出入点确定待处理顶点;根据所述第一分布式节点的外存中保存的所述入边集合,确定每个所述待处理顶点作为出点时所对应的目标入点;确定所述任意活跃顶点中是否包括所述目标入点;若是,则将相应的所述待处理顶点确定为待更新顶点。
可选地,该存储介质存储的计算机可执行指令在被处理器执行时,所述根据所述外存中的所述任意活跃顶点的第一数据,向所述待更新顶点所在的目标分布式节点发送第 一更新消息,包括:根据所述第一分布式节点的外存中的所述任意活跃顶点的第一数据,生成相应待更新顶点的临时更新数据;根据所述待更新顶点的顶点信息和所述临时更新数据,向所述待更新顶点所在的目标分布式节点发送第一更新消息。
本说明书一实施例提供的存储介质存储的计算机可执行指令在被处理器执行时,确定目标图数据中当前参与数据处理的活跃顶点集合;若第一分布式节点的外存中保存有活跃顶点集合中的任意活跃顶点,则确定预设的多个数据处理模式中与活跃顶点集合相匹配的目标数据处理模式;根据目标数据处理模式,确定与该任意活跃顶点具有关联关系的待更新顶点;以及,根据外存中的该任意活跃顶点的第一数据,向待更新顶点所在的目标分布式节点发送第一更新消息,以使目标分布式节点根据第一更新消息对其外存中的待更新顶点的第二数据进行更新处理。由此,基于多个关联的目标事件的事件信息生成目标图数据,并基于目标图数据进行数据处理,不仅实现了大数据中数据的有效关联和关联数据的集中处理,避免数据遗漏,而且能够提升数据的处理效率。通过采用分布式数据处理系统,并在每个分布式节点的外存中保存目标图数据的相关数据,不仅实现了对内存的有效扩展,而且由于分布式节点可以并行的进行数据处理,因此能够满足大数据的处理需求且保障数据处理效率。此外,分布式数据处理系统支持多种数据处理模式,通过确定与当前的活跃顶点集合相匹配的目标数据处理模式,不仅提升了分布式数据处理系统的数据处理性能,而且进一步提升了数据处理效率。
需要说明的是,本说明书中关于存储介质的实施例与本说明书中关于分布式数据处理方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应的分布式数据处理方法的实施,重复之处不再赘述。
上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现, 或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本说明书实施例时可以把各单元的功能在同一个或多个软件和/或硬件中实现。
本领域内的技术人员应明白,本说明书一实施例可提供为方法、系统或计算机程序产品。因此,本说明书一实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本说明书可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书是参照根据本说明书实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, commodity, or device that includes the element.
An embodiment of this specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. An embodiment of this specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant parts, reference may be made to the description of the method embodiments.
The foregoing descriptions are merely embodiments of this document and are not intended to limit this document. Those skilled in the art may make various modifications and variations to this document. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this document shall fall within the scope of the claims of this document.

Claims (17)

  1. A distributed data processing method, comprising:
    determining an active vertex set currently participating in data processing in target graph data; wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information comprising a plurality of event elements of the corresponding target event; each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices having an association relationship;
    if an external memory of a first distributed node stores any active vertex in the active vertex set, determining, from a plurality of preset data processing modes, a target data processing mode matching the active vertex set;
    determining, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex; and
    sending, according to first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located, so that the target distributed node updates, according to the first update message, second data of the to-be-updated vertex in its external memory.
  2. The method according to claim 1, wherein before the determining an active vertex set currently participating in data processing in target graph data, the method further comprises:
    receiving shard data and attribute information of the target graph data sent by a specified device, the shard data being obtained by the specified device partitioning the target graph data according to a preset data partitioning manner; and
    storing the shard data and the attribute information in the external memory of the first distributed node;
    or,
    if it is determined that the first distributed node has preprocessing permission, partitioning the target graph data according to a preset data partitioning manner to obtain shard data to be allocated to each distributed node in the distributed system where the first distributed node is located; and
    sending the shard data and the attribute information of the target graph data to each distributed node in the distributed system, so that each distributed node stores the shard data and the attribute information in its external memory.
  3. The method according to claim 2, wherein the vertices comprise in-vertices and out-vertices, each edge in the target graph data is determined as a directed edge pointing from an in-vertex to an out-vertex, and the directed edge is an outgoing edge of the in-vertex and an incoming edge of the out-vertex;
    the shard data comprises a partitioned vertex subset, an in-edge set corresponding to the incoming edges of the vertices in the vertex subset, an out-edge set corresponding to the outgoing edges of the vertices in the vertex subset, a master copy of each vertex in the vertex subset, and mirror copies of the vertices that form the directed edges with the vertices in the vertex subset; wherein the master copy comprises element data of the event element corresponding to the respective vertex, and the mirror copies are used for passing messages;
    the attribute information comprises a first quantity of edges of the target graph data and a second quantity of outgoing edges of each vertex in the target graph data.
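For illustration only (not claim text): a minimal Python sketch of the per-node shard layout recited in claims 2 and 3. The names Shard, GraphAttributes, master_copies, and mirror_copies, and the dictionary-based layout, are assumptions chosen for the example.

```python
# Illustrative sketch of the shard data and attribute information held by one node.
# Field names are assumptions, not the published data structures.

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class Shard:
    vertex_subset: Set[int]                                         # vertices owned by this node
    in_edges: Dict[int, List[int]] = field(default_factory=dict)    # vertex -> in-neighbors
    out_edges: Dict[int, List[int]] = field(default_factory=dict)   # vertex -> out-neighbors
    master_copies: Dict[int, dict] = field(default_factory=dict)    # element data of owned vertices
    mirror_copies: Set[int] = field(default_factory=set)            # remote endpoints, used to pass messages

@dataclass
class GraphAttributes:
    total_edges: int                 # "first quantity": number of edges in the graph
    out_degree: Dict[int, int]       # "second quantity": out-edge count per vertex

# Example: a node owning vertices {0, 1} of a 3-vertex graph with edges 0->1, 0->2, 1->2.
shard = Shard(
    vertex_subset={0, 1},
    in_edges={1: [0]},
    out_edges={0: [1, 2], 1: [2]},
    master_copies={0: {"element": "account_a"}, 1: {"element": "account_b"}},
    mirror_copies={2},
)
attrs = GraphAttributes(total_edges=3, out_degree={0: 2, 1: 1, 2: 0})
print(len(shard.vertex_subset), attrs.total_edges)  # 2 3
```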
  4. The method according to claim 3, wherein the determining, from a plurality of preset data processing modes, a target data processing mode matching the active vertex set comprises:
    calculating a density of the active vertex set according to a preset calculation manner; and
    determining, according to the density, the target data processing mode matching the active vertex set from a preset push data processing mode and a preset pull data processing mode.
  5. The method according to claim 4, wherein the calculating a density of the active vertex set according to a preset calculation manner comprises:
    counting a third quantity of active vertices in the active vertex set;
    counting, according to the second quantity, a total quantity of outgoing edges of the active vertices in the active vertex set, and determining the total quantity as a fourth quantity; and
    calculating the density of the active vertex set based on the third quantity and the fourth quantity according to the preset calculation manner;
    the determining, according to the density, the target data processing mode matching the active vertex set from a preset push data processing mode and a preset pull data processing mode comprises:
    determining a comparison density according to the first quantity, and determining whether the density of the active vertex set is not less than the comparison density;
    if yes, determining the pull data processing mode as the target data processing mode; and
    if no, determining the push data processing mode as the target data processing mode.
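For illustration only (not claim text): a minimal Python sketch of the density-based choice between the push and pull data processing modes in claims 4 and 5. The specific density formula and the threshold derived from the first quantity are assumptions, since the claims leave the preset calculation manner open.

```python
# Sketch of choosing between push and pull modes from the active vertex set.
# The density combination and threshold fraction are assumed for illustration.

def choose_mode(active_vertices, out_degree, total_edges, threshold_fraction=0.05):
    """Return 'pull' for a dense active set, otherwise 'push'."""
    third_quantity = len(active_vertices)                                 # number of active vertices
    fourth_quantity = sum(out_degree.get(v, 0) for v in active_vertices)  # their total out-edges
    density = third_quantity + fourth_quantity            # assumed preset combination
    comparison_density = threshold_fraction * total_edges  # assumed function of the first quantity
    return "pull" if density >= comparison_density else "push"

out_degree = {i: 1 for i in range(100)}                            # 100 vertices, one out-edge each
print(choose_mode({0, 1}, out_degree, total_edges=100))            # sparse active set -> 'push'
print(choose_mode(set(range(60)), out_degree, total_edges=100))    # dense active set  -> 'pull'
```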
  6. The method according to claim 4, wherein the determining, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex comprises:
    if it is determined that the target data processing mode is the push data processing mode, determining, according to the out-edge set and the mirror copies stored in the external memory of the first distributed node, a target out-vertex corresponding to the active vertex when the active vertex serves as the in-vertex; and
    determining the target out-vertex as the to-be-updated vertex;
    the sending, according to the first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located comprises:
    obtaining the first data of the active vertex from the external memory of the first distributed node;
    determining the target distributed node where the to-be-updated vertex and the mirror copy of the to-be-updated vertex are located; and
    sending the first update message to the target distributed node according to vertex information of the active vertex and the first data.
  7. The method according to claim 6, further comprising:
    if the first update message sent by the first distributed node and/or another distributed node is received, determining the active vertex corresponding to the vertex information in the first update message as a target active vertex;
    determining, according to the out-edge set stored in the external memory of the first distributed node, at least one target out-vertex corresponding to the target active vertex when the target active vertex serves as the in-vertex; and
    updating, according to the first data in the first update message, the second data of the target out-vertex in the external memory of the first distributed node.
  8. The method according to claim 7, wherein the updating, according to the first data in the first update message, the second data of the target out-vertex in the external memory of the first distributed node comprises:
    determining a target thread corresponding to each target out-vertex; and
    sending the first update message to the corresponding target thread, so that the target thread updates, according to the first data in the first update message, the second data of the respective target out-vertex in the external memory of the first distributed node.
  9. The method according to claim 8, wherein the sending the first update message to the corresponding target thread, so that the target thread updates, according to the first data in the first update message, the second data of the respective target out-vertex in the external memory of the first distributed node comprises:
    determining whether the second data of the target out-vertex is in a locked state;
    if yes, storing the first update message in a message queue of the respective target out-vertex, so that the target thread corresponding to the target out-vertex obtains the first update message from the message queue after unlocking the second data, locks the second data, and then updates the second data according to the first data in the first update message; and
    if no, sending the first update message to the corresponding target thread, so that the target thread locks the second data of the target out-vertex, updates the second data of the target out-vertex according to the first data in the first update message, and unlocks the second data after the update is completed.
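For illustration only (not claim text): a minimal Python sketch of the lock-or-enqueue handling in claims 8 and 9. The per-vertex lock and queue layout, and the min-style update rule, are assumptions chosen for the example.

```python
# Sketch: apply an update immediately if the vertex's second data is unlocked,
# otherwise park the message in that vertex's queue for later draining.

import threading
from collections import defaultdict, deque

locks = defaultdict(threading.Lock)            # per-vertex lock on the second data
pending = defaultdict(deque)                   # per-vertex message queue
second_data = defaultdict(lambda: float("inf"))

def deliver(vertex, first_data):
    """Update now if the vertex is unlocked, otherwise queue the message."""
    lock = locks[vertex]
    if lock.acquire(blocking=False):           # not locked: lock, update, unlock
        try:
            second_data[vertex] = min(second_data[vertex], first_data)
        finally:
            lock.release()
        drain(vertex)                          # apply anything queued meanwhile
    else:
        pending[vertex].append(first_data)

def drain(vertex):
    """Called by the owning thread after unlocking: apply queued messages."""
    while pending[vertex]:
        first_data = pending[vertex].popleft()
        with locks[vertex]:
            second_data[vertex] = min(second_data[vertex], first_data)

deliver(7, 3)
deliver(7, 1)
print(second_data[7])  # 1
```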
  10. The method according to claim 4, wherein the determining, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex comprises:
    if the target data processing mode is the pull data processing mode, determining, as to-be-processed vertices, the out-vertices corresponding to the incoming edges in the in-edge set stored in the external memory of the first distributed node;
    determining, according to the in-edge set stored in the external memory of the first distributed node, a target in-vertex corresponding to each to-be-processed vertex when the to-be-processed vertex serves as the out-vertex;
    determining whether the target in-vertex is included among the active vertices; and
    if yes, determining the respective to-be-processed vertex as the to-be-updated vertex.
  11. The method according to claim 10, wherein the sending, according to the first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located comprises:
    generating temporary update data of the respective to-be-updated vertex according to the first data of the active vertex in the external memory of the first distributed node; and
    sending the first update message to the target distributed node where the to-be-updated vertex is located according to vertex information of the to-be-updated vertex and the temporary update data.
  12. The method according to claim 11, further comprising:
    if the first update message sent by the first distributed node and/or another distributed node is received, determining the to-be-updated vertex according to the vertex information in the first update message; and
    updating, according to the temporary update data in the first update message, the second data of the to-be-updated vertex in the external memory of the first distributed node.
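For illustration only (not claim text): a minimal Python sketch of the pull data processing mode in claims 10 to 12, in which each locally stored vertex inspects its in-edges and, if any in-neighbor is active, derives temporary update data from that neighbor's first data. The data structures and the min-style combine step are assumptions chosen for the example.

```python
# Sketch of the pull mode on one node's shard: vertices pull from active in-neighbors.

def pull_step(in_edges, first_data, active_vertices):
    """Return {to-be-updated vertex: temporary update data} for one shard."""
    messages = {}
    for vertex, in_neighbors in in_edges.items():   # vertex is the out-vertex of its in-edges
        active_sources = [u for u in in_neighbors if u in active_vertices]
        if active_sources:
            # Temporary update data derived from the active in-neighbors' first data.
            messages[vertex] = min(first_data[u] for u in active_sources)
    return messages

in_edges = {2: [0, 1], 3: [1]}     # in-neighbor lists stored in external memory
first_data = {0: 5, 1: 2}
print(pull_step(in_edges, first_data, active_vertices={0, 1}))  # {2: 2, 3: 2}
print(pull_step(in_edges, first_data, active_vertices={0}))     # {2: 5}
```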
  13. The method according to claim 1, wherein the determining, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex comprises:
    invoking a first function, and determining, based on the first function and according to the target data processing mode, the to-be-updated vertex having the association relationship with the active vertex;
    the sending, according to the first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located, so that the target distributed node updates, according to the first update message, the second data of the to-be-updated vertex in its external memory comprises:
    sending, based on the first function and according to the first data of the active vertex in the external memory, the first update message to the target distributed node where the to-be-updated vertex is located, so that the target distributed node invokes a second function and updates, based on the second function and according to the first update message, the second data of the to-be-updated vertex in its external memory.
  14. The method according to claim 1, wherein after the sending, according to the first data of the active vertex in the external memory, a first update message to the target distributed node where the to-be-updated vertex is located, the method further comprises:
    if the first update message is received, determining, after the update processing is completed, whether a preset iteration stop condition is satisfied;
    if no, determining the to-be-updated vertex as a new active vertex, and sending a second update message to each distributed node in the distributed system according to vertex information of the new active vertex; and
    receiving the second update message sent by the target distributed node, and determining the new active vertices corresponding to the received second update messages as the active vertex set currently participating in data processing in the target graph data.
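For illustration only (not claim text): a minimal Python sketch of the iteration in claim 14, in which updated vertices become the new active vertex set for the next round until a preset stop condition is met. The stop condition (no remaining active vertices, or a round limit) and the shortest-path-style update rule are assumptions chosen for the example.

```python
# Sketch of the round-by-round iteration: updated vertices form the next active set.

def iterate(out_edges, data, start_vertices, max_rounds=10):
    """Propagate values round by round; stop when nothing changes or the round limit is hit."""
    active = set(start_vertices)
    for _ in range(max_rounds):
        if not active:                     # preset stop condition: nothing left to update
            break
        newly_active = set()
        for v in active:
            for target in out_edges.get(v, []):
                candidate = data[v] + 1
                if candidate < data.get(target, float("inf")):
                    data[target] = candidate
                    newly_active.add(target)   # an updated vertex becomes a new active vertex
        active = newly_active                  # second update messages define the next active set
    return data

out_edges = {0: [1], 1: [2], 2: [3]}
print(iterate(out_edges, {0: 0}, start_vertices={0}))  # {0: 0, 1: 1, 2: 2, 3: 3}
```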
  15. A distributed data processing apparatus, comprising:
    a first determination module, configured to determine an active vertex set currently participating in data processing in target graph data; wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information comprising a plurality of event elements of the corresponding target event; each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices having an association relationship;
    a second determination module, configured to, if an external memory of a first distributed node stores any active vertex in the active vertex set, determine, from a plurality of preset data processing modes, a target data processing mode matching the active vertex set;
    a third determination module, configured to determine, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex; and
    a sending module, configured to send, according to first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located, so that the target distributed node updates, according to the first update message, second data of the to-be-updated vertex in its external memory.
  16. A distributed data processing device, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
    determine an active vertex set currently participating in data processing in target graph data; wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information comprising a plurality of event elements of the corresponding target event; each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices having an association relationship;
    if an external memory of a first distributed node stores any active vertex in the active vertex set, determine, from a plurality of preset data processing modes, a target data processing mode matching the active vertex set;
    determine, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex; and
    send, according to first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located, so that the target distributed node updates, according to the first update message, second data of the to-be-updated vertex in its external memory.
  17. A storage medium storing computer-executable instructions that, when executed by a processor, implement the following procedure:
    determining an active vertex set currently participating in data processing in target graph data; wherein the target graph data is generated in advance based on event information of a plurality of associated target events, the event information comprising a plurality of event elements of the corresponding target event; each vertex of the target graph data corresponds to one of the event elements, and each edge of the target graph data connects vertices having an association relationship;
    if an external memory of a first distributed node stores any active vertex in the active vertex set, determining, from a plurality of preset data processing modes, a target data processing mode matching the active vertex set;
    determining, according to the target data processing mode, a to-be-updated vertex having the association relationship with the active vertex; and
    sending, according to first data of the active vertex in the external memory, a first update message to a target distributed node where the to-be-updated vertex is located, so that the target distributed node updates, according to the first update message, second data of the to-be-updated vertex in its external memory.
PCT/CN2022/125675 2021-10-20 2022-10-17 Distributed data processing WO2023066198A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/544,666 US20240134881A1 (en) 2021-10-20 2023-12-19 Distributed data processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111218593.5A CN113656426B (zh) 2021-10-20 2021-10-20 Distributed data processing method, apparatus and device
CN202111218593.5 2021-10-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/544,666 Continuation US20240134881A1 (en) 2021-10-20 2023-12-19 Distributed data processing

Publications (1)

Publication Number Publication Date
WO2023066198A1 true WO2023066198A1 (zh) 2023-04-27

Family

ID=78484290

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125675 WO2023066198A1 (zh) 2021-10-20 2022-10-17 Distributed data processing

Country Status (3)

Country Link
US (1) US20240134881A1 (zh)
CN (2) CN113656426B (zh)
WO (1) WO2023066198A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113656426B (zh) * 2021-10-20 2022-02-08 支付宝(杭州)信息技术有限公司 分布式数据处理方法、装置及设备

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224528A (zh) * 2014-05-27 2016-01-06 Huawei Technologies Co., Ltd. Big data processing method and apparatus based on graph computing
CN106815080A (zh) * 2017-01-09 2017-06-09 Beihang University Distributed graph data processing method and apparatus
CN108132838A (zh) * 2016-11-30 2018-06-08 Huawei Technologies Co., Ltd. Graph data processing method, apparatus and system
CN110442754A (zh) * 2019-08-05 2019-11-12 Tencent Technology (Shenzhen) Co., Ltd. Label updating method and apparatus, and distributed storage system
CN111737540A (zh) * 2020-05-27 2020-10-02 Institute of Computing Technology, Chinese Academy of Sciences Graph data processing method and medium applied to a distributed computing node cluster
CN113656426A (zh) * 2021-10-20 2021-11-16 Alipay (Hangzhou) Information Technology Co., Ltd. Distributed data processing method, apparatus and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9495477B1 (en) * 2011-04-20 2016-11-15 Google Inc. Data storage in a graph processing system
CA2790479C (en) * 2012-09-24 2020-12-15 Ibm Canada Limited - Ibm Canada Limitee Partitioning a search space for distributed crawling
CN103914556A (zh) * 2014-04-15 2014-07-09 Northwestern Polytechnical University Large-scale graph data processing method
US10795672B2 (en) * 2018-10-31 2020-10-06 Oracle International Corporation Automatic generation of multi-source breadth-first search from high-level graph language for distributed graph processing systems
US10936659B2 (en) * 2019-01-02 2021-03-02 International Business Machines Corporation Parallel graph events processing
CN110737804B (zh) * 2019-09-20 2022-04-22 Huazhong University of Science and Technology Graph processing memory-access optimization method and system based on activity layout
CN113065035A (zh) * 2021-03-29 2021-07-02 Wuhan University Single-machine out-of-core property graph computing method


Also Published As

Publication number Publication date
CN114637756A (zh) 2022-06-17
CN113656426B (zh) 2022-02-08
US20240134881A1 (en) 2024-04-25
CN113656426A (zh) 2021-11-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22882804

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE