CN104780213A - Dynamic load optimization method for a master-slave distributed graph processing system - Google Patents

Dynamic load optimization method for a master-slave distributed graph processing system

Info

Publication number
CN104780213A
CN104780213A
Authority
CN
China
Prior art keywords
load
node
vertex
computing
load capacity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510181554.0A
Other languages
Chinese (zh)
Other versions
CN104780213B (en)
Inventor
谢夏
金海
徐曼娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201510181554.0A priority Critical patent/CN104780213B/en
Publication of CN104780213A publication Critical patent/CN104780213A/en
Application granted granted Critical
Publication of CN104780213B publication Critical patent/CN104780213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Multi Processors (AREA)

Abstract

The invention discloses a dynamic load optimization method for a master-slave distributed graph processing system. The method comprises the following steps: a dynamic repartitioning control step on the master compute node, a load monitoring step on the worker compute nodes, and a load transfer step. The method does not depend on the initial partitioning of the graph data. While the worker nodes execute their iterations, dynamic repartitioning is carried out under the control of the master node to balance the load. In the load monitoring step, each node's load is computed and, before the end of each iteration, sent to the other compute nodes. In the load transfer step, at the beginning of each iteration a node judges, from the load information monitored and reported by the other nodes, whether it is overloaded, and determines the target nodes and the transfer amount; when the iteration finishes, the load data is transferred to the target node, thereby achieving dynamic load balancing of the distributed graph processing system. By implementing the dynamic optimization method disclosed by the invention, the problem of load imbalance in distributed graph processing systems can be effectively alleviated.

Description

A dynamic load optimization method for a master-slave distributed graph processing system
Technical field
The invention belongs to the field of distributed graph data processing and, more specifically, relates to distributed graph processing systems implemented on the BSP model.
Background art
Graphs are among the most commonly used abstract data structures in computer science. Compared with traditional relational data and XML databases, graphs have richer expressive power, so graph-related applications are nearly ubiquitous. With the arrival of the big-data era, the scale of graphs keeps growing, and processing graphs in a distributed fashion in cloud computing environments has become a new research trend. A large number of distributed graph processing systems therefore exist, most of them Pregel-like systems implemented on the BSP model. The BSP computation model is a synchronous model in which execution proceeds as multiple iterations, each consisting of three phases: computation, communication, and synchronization. The BSP model fits the multi-iteration nature of distributed graph computation very well, which is why Google developed its internal distributed graph processing framework Pregel on top of it. Pregel adopts a vertex-centric approach: vertices participate in the computation and, during execution, are either active or inactive. Edges in the graph do not participate in the computation; they are only used to pass messages. A graph algorithm completes after several iterations. A vertex in the inactive state is reactivated when it receives a message. Pregel uses a master-slave architecture for distributed processing: the master node coordinates the compute nodes, while the compute nodes are mainly responsible for executing the graph computation tasks.
Graph partitioning is an extremely important step when a distributed graph processing system processes a graph, and an effective partitioning strategy can greatly improve processing efficiency. Most existing partitioning strategies perform a single initial partition before the graph data is loaded onto the compute nodes, following the usual partitioning principles: balanced subgraphs and low connectivity between subgraphs. We call this kind of strategy static graph partitioning. However, once the graph data has been partitioned and distributed graph processing begins, the iteration behaviour of the graph differs depending on the graph algorithm (i.e., the graph operation) being executed, so the compute nodes become load imbalanced. The reason is that different graph algorithms do not need to process all vertex data in every iteration; different algorithms therefore exhibit different load behaviour at run time, which produces run-time load imbalance. A static partitioning algorithm, however, can hardly predict the change of load behaviour during execution, so a single static partition cannot solve the run-time load imbalance caused by different algorithms.
Summary of the invention
To address the run-time load imbalance described above, the invention provides a dynamic load optimization method suited to distributed graph processing scenarios. First, the load during graph execution is monitored. Based on the monitoring results of each compute node, overloaded nodes are identified against the global average load, and part of their load is transferred from the overloaded nodes to non-overloaded nodes; this process is also called load transfer. Because dynamic repartitioning itself incurs some computation and communication overhead, the repartitioning must itself be controlled, so a dynamic repartitioning control step is also required on the master node. The invention can effectively solve the load imbalance caused by graph algorithms and make up for the deficiency of static partitioning.
The dynamic load optimization method provided by the invention comprises a dynamic repartitioning control step on the master node of the distributed graph processing system, and a load monitoring step and a load transfer step on the worker nodes. The dynamic repartitioning control step mainly controls, adaptively, the execution and termination of the dynamic repartitioning, so as to reduce the overhead caused by the repartitioning itself. The load monitoring step and the load transfer step are located on the worker nodes; they complement each other and are the main components of the dynamic repartitioning. Here the master node and the worker nodes run on physical machines with identical performance and configuration.
The load monitoring step monitors the load of each iteration during distributed graph processing. The load of a worker node in an iteration is determined by its active vertex set and active edge set: the active vertex count (i.e., the size of the active vertex set) is the number of times the vertex compute function must be invoked on the worker node, and the active edge count (i.e., the size of the active edge set) is the number of messages the vertex compute function on that node must process. The concrete formula (1) is as follows:
W_i = |AV_i| + |AE_i|, ∀ i ∈ {1..K}    Formula (1)
Here i is any worker node, AV_i is its active vertex set, AE_i is its active edge set, and K is the number of compute nodes. Each worker node sends the monitored load to all other nodes, so that in the next iteration a worker node can decide whether it is overloaded and, if so, how much load it needs to transfer. Note that the load is not sent to the master node for this calculation, for the following reason: the invention targets graph processing systems based on the BSP model, in which a message sent in one iteration reaches its destination only after synchronization, at the start of the next iteration. If the master node were to decide which nodes are overloaded and which nodes are targets, the worker nodes would be unable to do any computation while waiting for the master node to finish, and would only receive the master's decision after entering the next iteration. That would waste even more computational resources, so these calculations are all carried out on each worker node. A minimal sketch of this per-worker load monitoring is given below.
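The following Python sketch illustrates the per-worker load monitoring of formula (1). The class and callback names (WorkerLoadMonitor, send_to_all_workers) are illustrative assumptions introduced for this example only, not part of the patented system.

```python
# Minimal sketch of per-worker load monitoring (Formula (1)).
# WorkerLoadMonitor and send_to_all_workers are illustrative names
# assumed for this example, not identifiers from the patent.

class WorkerLoadMonitor:
    def __init__(self, worker_id, send_to_all_workers):
        self.worker_id = worker_id
        self.send_to_all_workers = send_to_all_workers  # callback into the messaging layer

    def compute_load(self, active_vertices, active_edges):
        # Formula (1): W_i = |AV_i| + |AE_i|
        return len(active_vertices) + len(active_edges)

    def report_load(self, active_vertices, active_edges):
        # Broadcast this worker's load to every other worker before the current
        # iteration ends, so each worker can decide locally in the next
        # iteration whether it is overloaded, without waiting on the master.
        load = self.compute_load(active_vertices, active_edges)
        self.send_to_all_workers({"worker": self.worker_id, "load": load})
        return load
```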
The load transfer step transfers part of the load (vertices) from overloaded nodes to non-overloaded nodes so as to achieve run-time load balancing. At the beginning of each iteration, every compute node receives the load monitoring results sent by all compute nodes. Each compute node computes the global average load from the received values and judges whether it is itself overloaded. The global average load cannot simply be used as the overload threshold, because even when the compute nodes are roughly balanced some nodes will have loads slightly above the average; only a node whose load exceeds the average by a certain margin is confirmed as overloaded. The preferred range is 110%–130% of the average load: too small a value may mistakenly treat balanced nodes as overloaded, while too large a value may miss some overloaded nodes. If a node is overloaded, the target transfer node is determined by sorting the loads of all compute nodes. For example, with K worker nodes sorted by load in descending order, an overloaded node is necessarily near the head of the list; if its position is i (1 < i <= K), then the corresponding target transfer node is at position K-i+1. The amount of load transferred from the overloaded node (i) to the target node (j) is determined by formula (2); a sketch of this overload test and target selection follows formula (2).
Q_{i,j} = (W_i - W_j) / 2    Formula (2)
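A minimal sketch of the overload test, target-node selection, and transfer amount of formula (2), under the stated assumptions: the overload_factor default of 1.1 corresponds to the lower end of the preferred 110%–130% range, and the function name plan_transfer is hypothetical.

```python
# Sketch of overload detection and target-node selection (Formula (2)).
# plan_transfer and the overload_factor default are illustrative assumptions;
# the patent's preferred overload range is 110%-130% of the average load.

def plan_transfer(loads, my_id, overload_factor=1.1):
    """loads: dict mapping worker id -> load W reported in the last iteration."""
    avg = sum(loads.values()) / len(loads)
    w_i = loads[my_id]
    if w_i <= overload_factor * avg:
        return None  # this node is not overloaded, nothing to transfer

    # Rank workers by load, largest first; overloaded nodes sit near the head.
    ranking = sorted(loads, key=loads.get, reverse=True)
    i = ranking.index(my_id) + 1          # 1-based rank of this node
    target = ranking[len(ranking) - i]    # 1-based rank K - i + 1
    w_j = loads[target]

    # Formula (2): transfer half of the load gap to the target node.
    q = (w_i - w_j) / 2
    return target, q
```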
The invention determines the vertices to transfer according to a contribution function defined over the active vertices of a compute node; the contribution function is defined as formula (3).
JO_{i,j}(u) = J_{i,j}(u) + O_{i,j}(u)    Formula (3)
Here u is an active vertex on compute node (i), and J_{i,j}(u) and O_{i,j}(u) are, respectively, the number of messages vertex u receives from compute node (j) and the number of messages it sends to compute node (j). The contribution function can be further optimized. For example, when the iteration number is s, if the contribution value of keeping u on compute node (i) is close to the contribution value of transferring it to the target node (j), the benefit brought by transferring vertex u is not obvious, and in this case we may choose not to transfer vertex u. We therefore set a ratio λ: vertex u is considered for transfer only when the ratio of the contribution value after transfer, JO_{i,j}(u), to the contribution value without transfer, JO_{i,i}(u), is greater than λ; in that case its contribution value remains the original JO_{i,j}(u), otherwise its contribution value is set to 0. Note that λ > 1; the preferred range is 1.2–1.4, since too large a value leaves too few transferable vertices while too small a value increases communication overhead. The active vertices of a compute node are sorted in descending order of the above contribution function, giving the transfer ranking of the active vertices. When transferring load vertices, the vertices with the largest contribution values are chosen from the top of the ranking. The load reduction obtained by transferring each active vertex is computed by formula (4),
W_i(u) = 1 + |AE_{i,j}(u)|    Formula (4)
where |AE_{i,j}(u)| denotes the size of the active edge set connected to the active vertex u. A sketch of this contribution-based vertex ranking is given below.
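A short Python sketch of formulas (3) and (4) and of the ranking of transfer candidates, assuming hypothetical per-vertex message counters (recv_from, sent_to) and an active_edges_of accessor that are not part of the patent text.

```python
# Sketch of the contribution function and load reduction (Formulas (3) and (4)).
# recv_from / sent_to are hypothetical per-vertex message counters:
# recv_from[u][j] = messages u received from node j; sent_to[u][j] = messages u sent to j.

def contribution(u, i, j, recv_from, sent_to, lam=1.2):
    # Formula (3): JO_{i,j}(u) = J_{i,j}(u) + O_{i,j}(u)
    jo_ij = recv_from[u].get(j, 0) + sent_to[u].get(j, 0)
    jo_ii = recv_from[u].get(i, 0) + sent_to[u].get(i, 0)  # value of keeping u on node i
    # Transfer only pays off when JO_{i,j}(u) / JO_{i,i}(u) > lambda.
    if jo_ii > 0 and jo_ij / jo_ii <= lam:
        return 0
    return jo_ij

def load_reduction(u, active_edges_of):
    # Formula (4): W_i(u) = 1 + |AE_{i,j}(u)|
    return 1 + len(active_edges_of(u))

def rank_candidates(active_vertices, i, j, recv_from, sent_to, lam=1.2):
    # Sort active vertices by descending contribution value to obtain the transfer list.
    scored = [(contribution(u, i, j, recv_from, sent_to, lam), u) for u in active_vertices]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [u for score, u in scored if score > 0]
```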
The dynamic repartitioning control step is located on the master node. The master node is the control node of the distributed graph computation task and does not participate in the computation. In the dynamic repartitioning control step, the master node judges, from the load information and the number of transferred vertices reported by each compute node, whether the load has reached an optimal state; if so, the dynamic repartitioning is terminated.
The invention provides a load optimization method for load balance optimization in distributed graph processing scenarios. The overall execution of the method comprises the following steps:
(1) Initialization step: upload the graph data to be computed to the distributed graph processing system and determine the graph computation operation that the system is to execute; the distributed graph processing system partitions the loaded graph data into multiple subgraphs and loads them onto the respective compute nodes;
(2) Initial load calculation step: each compute node calculates the load of its own subgraph data and, in the communication phase, sends the load to all other compute nodes and to the master node; the load equals the node's active vertex count plus its active edge count;
(3) Load discrimination step: each compute node judges, according to the load values received from the other compute nodes, whether it is itself overloaded; if so, it calculates its own load and goes to step (4) before the communication phase; otherwise it continues to calculate its own load, sends the load to all other compute nodes in the communication phase, and goes to step (5);
(4) Load transfer step;
(5) Dynamic repartitioning step: the master node computes the number of vertices transferred by all compute nodes in the previous iteration and judges whether the transferred vertex count is below a preset transfer threshold (the threshold here is the break-even point between the benefit of dynamic repartitioning and its overhead; the preferred range is 0.5%–2%, since too large a value ends the dynamic repartitioning too early and weakens the load optimization effect, while too small a value keeps the repartitioning running too long and makes the overhead excessive). If so, the master node sends a stop-repartitioning instruction to each compute node; on receiving it, each compute node stops load calculation and load transfer. Otherwise the master node sends an instruction to continue the load optimization, and execution goes to step (3).
Compared with the prior art, the invention has the following beneficial effects:
(1) Low dependence on the graph data structure and graph computation operation: static partitioning depends heavily on the graph data structure and the corresponding graph operation, and a single initial static partition can hardly keep the load balanced for all graph operations throughout the whole iterative process. The dynamic repartitioning strategy adopted by the invention does not depend on the initial partition and balances the load well for different graph data structures and graph operations.
(2) Effective load optimization: by monitoring the load of each compute node in real time, locating overloaded nodes and transferring vertices with large contribution factors to non-overloaded nodes, the dynamic repartitioning greatly mitigates the load imbalance caused by different graph algorithm behaviours and reduces the overall processing time of the distributed graph computation, so the load balance optimization is effective. In the best case the computation time can be reduced by 50% (for algorithms such as SSSP whose load under BSP is markedly imbalanced), and in ordinary cases by 10%–30%.
(3) Small incidental overhead: because the dynamic repartitioning is performed during the distributed graph computation, load monitoring and transfer introduce some computation and communication overhead. The invention therefore employs a dynamic repartitioning control step on the master node, which monitors the number of transferred vertices and the load of each compute node and stops the repartitioning once the load converges to balance. In addition, unlike some other repartitioning strategies, the invention does not add extra iterations for vertex transfer: load monitoring and transfer are both performed within the same iteration, which reduces the incidental overhead caused by the dynamic repartitioning.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is the overall flowchart of the invention;
Fig. 2 is the detailed workflow of the load transfer step of the invention;
Fig. 3 is the workflow of the dynamic repartitioning step on the master node.
Embodiment
In order to make the object, technical scheme and advantages of the invention clearer, the invention is further elaborated below in combination with the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the invention and not to limit it.
The dynamic load optimization method for a BSP-model master-slave distributed graph processing system shown in Fig. 1 is executed as follows:
(1) Initialization step: upload the graph data to be computed to the distributed graph processing system and determine the graph computation operation that the system is to execute; the distributed graph processing system performs hash partitioning on the loaded graph data, divides it into multiple subgraphs, and loads them onto the respective compute nodes;
(2) Initial load calculation step: each compute node calculates the load of its own subgraph data and, in the communication phase, sends the load to all other compute nodes and to the master node; the load equals the node's active vertex count plus its active edge count;
(3) Load discrimination step: each compute node computes the average load from the load values received from the other compute nodes in order to judge whether it is itself overloaded. If a compute node's load is greater than 110% of the average load, it is determined to be overloaded, calculates its own load, and goes to step (4) before the communication phase; otherwise it continues to calculate its own load, sends the load to all other compute nodes in the communication phase, and goes to step (5);
(4) Load transfer step;
(5) Dynamic repartitioning step: the master node computes the number of vertices transferred by all compute nodes in the previous iteration and judges whether the transferred vertex count is below the preset transfer threshold. If so, the master node sends a stop-repartitioning instruction to each compute node; on receiving it, each compute node stops load calculation and load transfer. Otherwise the master node sends an instruction to continue the load optimization, and execution goes to step (3).
As shown in Fig. 2, the detailed workflow of the load transfer step (4) of Fig. 1 is as follows (a sketch of the whole loop is given after sub-step (4-8)):
(4-1) Sort the loads of all compute nodes and determine the target transfer node.
(4-2) Compute the transfer amount Q_{i,j} according to the load of the target compute node, and assign Q_{i,j} to a temporary variable Temp.
(4-3) Sort all active vertices of compute node i in descending order of their contribution values JO_{i,j}(u) (in this embodiment the λ in the contribution formula is set to 1.2), obtaining the transfer list.
(4-4) Judge whether Temp is greater than 0 or the transfer list still contains an active vertex with contribution value greater than 0; if so, go to sub-step (4-5), otherwise the load transfer ends.
(4-5) Take the highest-ranked active vertex u from the transfer list.
(4-6) Judge whether the load reduction W_i(u) after transferring u is less than or equal to Temp; if so, go to sub-step (4-7), otherwise go back to sub-step (4-4).
(4-7) Transfer u from compute node i to the target compute node j.
(4-8) Assign Temp minus the load reduction W_i(u) of the transferred vertex u to Temp, and go back to sub-step (4-4).
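The following Python sketch assembles sub-steps (4-1) to (4-8) into one loop. It reuses the hypothetical helpers sketched earlier (plan_transfer, rank_candidates, load_reduction), migrate_vertex stands in for the system's actual transfer call, and the loop condition is interpreted as running while the remaining quota is positive and ranked candidates remain; it is a sketch under these assumptions, not the definitive implementation.

```python
# Sketch of the load transfer loop, sub-steps (4-1) to (4-8).
# plan_transfer, rank_candidates and load_reduction are the hypothetical helpers
# sketched above; migrate_vertex is a placeholder for the real transfer call.

def transfer_load(loads, my_id, active_vertices, recv_from, sent_to,
                  active_edges_of, migrate_vertex, lam=1.2):
    plan = plan_transfer(loads, my_id)               # (4-1)/(4-2): target node and quota
    if plan is None:
        return 0                                     # not overloaded, nothing to do
    target, temp = plan

    candidates = rank_candidates(active_vertices, my_id, target,
                                 recv_from, sent_to, lam)   # (4-3): transfer list
    moved = 0
    while temp > 0 and candidates:                   # (4-4): quota left and candidates remain
        u = candidates.pop(0)                        # (4-5): highest-ranked active vertex
        w_u = load_reduction(u, active_edges_of)     # load freed by moving u
        if w_u <= temp:                              # (4-6): fits within the remaining quota
            migrate_vertex(u, target)                # (4-7): move u to the target node
            temp -= w_u                              # (4-8): shrink the remaining quota
            moved += 1
        # otherwise skip u and try the next, smaller candidate
    return moved
```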
Fig. 3 is the workflow of one iteration on the master node in the load optimization method of an embodiment of the invention (including the dynamic repartitioning control step). As shown in Fig. 1, in step (5) of the load optimization method of the invention, the master-node iteration workflow comprises the following steps (a sketch of this control loop is given after step (5-5)):
(5-1) Process the transferred vertex counts and load information sent by each compute node in the previous iteration.
(5-2) Compute the total number of vertices transferred by all compute nodes in the previous iteration and compare it with the threshold (set to 1% of the total number of graph vertices in this embodiment).
(5-3) Judge whether the transferred vertex count of all nodes is below the threshold; if so, go to step (5-4), otherwise go to step (5-5).
(5-4) The master node sends a stop-repartitioning instruction to each compute node.
(5-5) Each compute node carries out the iterative computation until all compute nodes enter the synchronization state, at which point the current iteration ends.
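A minimal Python sketch of the master-side control of steps (5-1) to (5-5), assuming hypothetical inputs: reports collected from the workers, total_vertices, and a broadcast callback; the 1% threshold matches this embodiment.

```python
# Sketch of the master-node repartitioning control, steps (5-1) to (5-5).
# reports, total_vertices and broadcast are illustrative assumptions; in this
# embodiment the threshold is 1% of the total vertex count.

def master_control_step(reports, total_vertices, broadcast, threshold_ratio=0.01):
    """reports: list of {'worker': id, 'moved': vertices transferred, 'load': W}."""
    # (5-1)/(5-2): total vertices transferred in the last iteration vs. the threshold.
    moved_total = sum(r["moved"] for r in reports)
    threshold = threshold_ratio * total_vertices

    if moved_total < threshold:
        # (5-3)/(5-4): load has converged, stop the dynamic repartitioning.
        broadcast({"command": "stop_repartitioning"})
        return False   # workers stop load monitoring and transfer
    else:
        # (5-5): keep optimizing; workers continue with the next iteration.
        broadcast({"command": "continue_optimization"})
        return True
```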
Those skilled in the art will readily understand that the foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement and improvement made within the spirit and principle of the invention shall fall within the scope of protection of the invention.

Claims (4)

1. A dynamic load optimization method for a master-slave distributed graph processing system based on the BSP model, characterized by comprising the following steps:
(1) Initialization step: upload the graph data to be computed to the distributed graph processing system and determine the graph computation operation that the system is to execute; the distributed graph processing system partitions the loaded graph data into multiple subgraphs and loads them onto the respective compute nodes;
(2) Initial load calculation step: each compute node calculates the load of its own subgraph data and, in the communication phase, sends the load to all other compute nodes and to the master node; the load equals the node's active vertex count plus its active edge count;
(3) Load discrimination step: each compute node judges, according to the load values received from the other compute nodes, whether it is itself overloaded; if so, it calculates its own load and goes to step (4) before the communication phase; otherwise it continues to calculate its own load, sends the load to all other compute nodes in the communication phase, and goes to step (5);
(4) Load transfer step, comprising the following sub-steps:
(4.1) Select the target transfer node: sort the loads of all compute nodes to generate a load ranking table, in which each node's position is determined by its load, larger loads being closer to the head of the table; if a node's rank in the table is i, the rank of its corresponding target transfer node is j, j = K-i+1, where K is the total number of compute nodes in the distributed graph processing system;
(4.2) Compute the transfer amount Q_{i,j},
Q_{i,j} = (W_i - W_j) / 2
where W_i and W_j denote the loads of compute nodes i and j before the current computation, i being the overloaded node that produces the load and j being the target transfer node; assign Q_{i,j} to a temporary variable T;
(4.3) Compute the contribution values JO_{i,j}(u) of all active vertices u on compute node i,
where J_{i,j}(u) and O_{i,j}(u) are, respectively, the number of messages vertex u receives from the target transfer node j and the number of messages it sends to the target transfer node j in the current iteration;
when JO_{i,j}(u) / JO_{i,i}(u) > λ, the contribution value of vertex u remains the original JO_{i,j}(u); otherwise the contribution value of vertex u is set to 0; here JO_{i,j}(u) is the contribution value after vertex u is transferred, JO_{i,i}(u) is the contribution value when vertex u is not transferred, and λ is the preset transfer threshold with value range 1.2–1.4;
sort all active vertices of the compute node in descending order of contribution value to obtain the transfer ranking table;
(4.4) Judge whether T is less than or equal to 0 and the transfer ranking table contains no element with contribution value greater than 0; if so, send the transferred vertex count and the load information of this node to the master node in the communication phase, the load transfer ends, and step (5) is carried out; otherwise carry out sub-step (4.5);
(4.5) Take the highest-ranked active vertex u from the transfer ranking table; compute the load reduction W_i(u) after its transfer, W_i(u) = 1 + |AE_{i,j}(u)|, where |AE_{i,j}(u)| denotes the size of the active edge set connected to the active vertex u;
(4.6) Judge whether W_i(u) is less than or equal to T; if so, carry out sub-step (4.7), otherwise carry out sub-step (4.4);
(4.7) Transfer the active vertex u from compute node i to the target compute node j, assign T - W_i(u) to T, and go to sub-step (4.4);
(5) Dynamic repartitioning step: the master node computes the number of vertices transferred by all compute nodes in the previous iteration and judges whether the transferred vertex count is below the preset transfer threshold: if so, the master node sends a stop-repartitioning instruction to each compute node; on receiving it, each compute node stops load calculation and load transfer; otherwise the master node sends an instruction to continue the load optimization, and execution goes to step (3).
2. The method according to claim 1, characterized in that in the load discrimination of step (3), a compute node judges whether it is overloaded through the following sub-steps:
(1) compute the system average load from the load information received from each compute node: system average load = Σ(loads of all compute nodes) / total number of compute nodes;
(2) compare this node's load with the system average load; if the load exceeds 110%–130% of the average load, the node is overloaded; otherwise it is not overloaded.
3. The method according to claim 1, characterized in that in sub-step (4.3), if vertex u has previously been transferred, its contribution value is set to 0.
4. The method according to claim 1, characterized in that the transferred-vertex-count threshold in step (5) is 0.5%–2% of the total number of graph vertices.
CN201510181554.0A 2015-04-17 2015-04-17 A dynamic load optimization method for a master-slave distributed graph processing system Active CN104780213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510181554.0A CN104780213B (en) 2015-04-17 2015-04-17 A dynamic load optimization method for a master-slave distributed graph processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510181554.0A CN104780213B (en) 2015-04-17 2015-04-17 A dynamic load optimization method for a master-slave distributed graph processing system

Publications (2)

Publication Number Publication Date
CN104780213A true CN104780213A (en) 2015-07-15
CN104780213B CN104780213B (en) 2018-02-23

Family

ID=53621447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510181554.0A Active CN104780213B (en) 2015-04-17 2015-04-17 A dynamic load optimization method for a master-slave distributed graph processing system

Country Status (1)

Country Link
CN (1) CN104780213B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653204A (en) * 2015-12-24 2016-06-08 华中科技大学 Distributed graph calculation method based on disk
CN107122248A (en) * 2017-05-02 2017-09-01 华中科技大学 A kind of distributed figure processing method of storage optimization
CN109104334A (en) * 2018-08-23 2018-12-28 郑州云海信息技术有限公司 The management method and device of monitoring system interior joint
CN109388733A (en) * 2018-08-13 2019-02-26 国网浙江省电力有限公司 A kind of optimization method towards diagram data processing engine
WO2020019314A1 (en) * 2018-07-27 2020-01-30 浙江天猫技术有限公司 Graph data storage method and system and electronic device
CN111309976A (en) * 2020-02-24 2020-06-19 北京工业大学 GraphX data caching method for convergence graph application
WO2020181455A1 (en) * 2019-03-11 2020-09-17 深圳大学 Geographically distributed graph processing method and system
CN111859027A (en) * 2019-04-24 2020-10-30 华为技术有限公司 Graph calculation method and device
CN113093682A (en) * 2021-04-09 2021-07-09 天津商业大学 Non-centralized recursive dynamic load balancing calculation framework
CN113326125A (en) * 2021-05-20 2021-08-31 清华大学 Large-scale distributed graph calculation end-to-end acceleration method and device
CN115587222A (en) * 2022-12-12 2023-01-10 阿里巴巴(中国)有限公司 Distributed graph calculation method, system and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207891A (en) * 2011-06-10 2011-10-05 浙江大学 Method for achieving dynamic partitioning and load balancing of data-partitioning distributed environment
CN104298564A (en) * 2014-10-15 2015-01-21 中国人民解放军国防科学技术大学 Dynamic equilibrium heterogeneous system loading computing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102207891A (en) * 2011-06-10 2011-10-05 浙江大学 Method for achieving dynamic partitioning and load balancing of data-partitioning distributed environment
CN104298564A (en) * 2014-10-15 2015-01-21 中国人民解放军国防科学技术大学 Dynamic equilibrium heterogeneous system loading computing method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KIRK SCHLOEGEL, GEORGE KARYPIS, VIPIN KUMAR: "Load Balancing of Dynamic and Adaptive Mesh-Based Computations", IEEE *
KIRK SCHLOEGEL et al.: "Dynamic Repartitioning of Adaptively Refined Meshes", IEEE *
QINGFENG ZHUGE, EDWIN HSING-MEAN SHA, BIN XIAO et al.: "Efficient Variable Partitioning and Scheduling for DSP Processors With Multiple Memory Modules", IEEE *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653204B (en) * 2015-12-24 2018-12-07 华中科技大学 A kind of distributed figure calculation method based on disk
CN105653204A (en) * 2015-12-24 2016-06-08 华中科技大学 Distributed graph calculation method based on disk
CN107122248A (en) * 2017-05-02 2017-09-01 华中科技大学 A kind of distributed figure processing method of storage optimization
CN107122248B (en) * 2017-05-02 2020-01-21 华中科技大学 Storage optimization distributed graph processing method
WO2020019314A1 (en) * 2018-07-27 2020-01-30 浙江天猫技术有限公司 Graph data storage method and system and electronic device
CN109388733A (en) * 2018-08-13 2019-02-26 国网浙江省电力有限公司 A kind of optimization method towards diagram data processing engine
CN109388733B (en) * 2018-08-13 2022-01-07 国网浙江省电力有限公司 Optimization method for graph-oriented data processing engine
CN109104334B (en) * 2018-08-23 2021-04-02 郑州云海信息技术有限公司 Management method and device for nodes in monitoring system
CN109104334A (en) * 2018-08-23 2018-12-28 郑州云海信息技术有限公司 The management method and device of monitoring system interior joint
WO2020181455A1 (en) * 2019-03-11 2020-09-17 深圳大学 Geographically distributed graph processing method and system
CN111859027A (en) * 2019-04-24 2020-10-30 华为技术有限公司 Graph calculation method and device
CN111309976B (en) * 2020-02-24 2021-06-25 北京工业大学 GraphX data caching method for convergence graph application
CN111309976A (en) * 2020-02-24 2020-06-19 北京工业大学 GraphX data caching method for convergence graph application
CN113093682A (en) * 2021-04-09 2021-07-09 天津商业大学 Non-centralized recursive dynamic load balancing calculation framework
CN113326125A (en) * 2021-05-20 2021-08-31 清华大学 Large-scale distributed graph calculation end-to-end acceleration method and device
CN115587222A (en) * 2022-12-12 2023-01-10 阿里巴巴(中国)有限公司 Distributed graph calculation method, system and equipment

Also Published As

Publication number Publication date
CN104780213B (en) 2018-02-23

Similar Documents

Publication Publication Date Title
CN104780213A (en) Load dynamic optimization method for principal and subordinate distributed graph manipulation system
CN103401939B (en) Load balancing method adopting mixing scheduling strategy
CN108965014B (en) QoS-aware service chain backup method and system
CN108089918B (en) Graph computation load balancing method for heterogeneous server structure
CN103369042A (en) Data processing method and data processing device
CN103412794A (en) Dynamic dispatching distribution method for stream computing
CN104461673B (en) A kind of virtual machine (vm) migration determination method and device
CN105740199A (en) Time sequence power estimation device and method of network on chip
CN109408452A (en) Mimicry industry control processor and data processing method
US9749219B2 (en) Method of optimizing routing in a cluster comprising static communication links and computer program implementing that method
CN110221909A (en) A kind of Hadoop calculating task supposition execution method based on load estimation
CN103455375A (en) Load-monitoring-based hybrid scheduling method under Hadoop cloud platform
CN105262534A (en) Route method and route device applicable to satellite communication network
CN112433853A (en) Heterogeneous sensing data partitioning method for parallel application of supercomputer data
CN109587072A (en) Distributed system overall situation speed limiting system and method
CN108259195A (en) The determining method and system of the coverage of anomalous event
CN108123891A (en) The dynamic load balancing method realized in SDN network using distributed domain controller
WO2017016590A1 (en) Scheduling heterogenous processors
CN111858458A (en) Method, device, system, equipment and medium for adjusting interconnection channel
CN108090027A (en) Data analysing method and data-analyzing machine
Polezhaev et al. Network resource control system for HPC based on SDN
Mao et al. A fine-grained and dynamic MapReduce task scheduling scheme for the heterogeneous cloud environment
CN106899392B (en) Method for carrying out fault tolerance on instantaneous fault in EtherCAT message transmission process
CN106874215B (en) Serialized storage optimization method based on Spark operator
CN105224381A (en) A kind of method, Apparatus and system moving virtual machine

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant