CN104166593A - Method for computing asynchronous and concurrent scheduling of multiple application functions


Publication number
CN104166593A
CN104166593A (application CN201410401606.6A)
Authority
CN
China
Prior art keywords
application function
calculation task
computing
calculation
computational tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410401606.6A
Other languages
Chinese (zh)
Inventor
王智
都政
刘建文
井革新
李健来
熊超超
徐颖俊
周志平
靳绍巍
罗文龙
陈远磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cloud Computing Center Co Ltd
NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Original Assignee
Shenzhen Cloud Computing Center Co Ltd
NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cloud Computing Center Co Ltd and NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN (SHENZHEN CLOUD COMPUTING CENTER)
Priority to CN201410401606.6A
Publication of CN104166593A
Legal status: Pending

Abstract

The invention discloses a method for asynchronous and concurrent scheduling of computations across multiple application functions. The method combines the time-consumption characteristics and task counts of each application function with the size and performance of the computer cluster's nodes to set an appropriate job-scheduling granularity for each application function individually. Each function's tasks are thereby grouped into computing jobs and added to the scheduling sequence of a distributed computing management platform, realizing asynchronous, concurrent submission of each function's tasks, unified scheduling of the resulting computing jobs, and asynchronous recovery of each function's results. The method supports asynchronous concurrency among the tasks of multiple application functions, makes sensible use of idle computing nodes to execute tasks, and improves the computing efficiency of the whole system.

Description

Method for asynchronous and concurrent scheduling of computations across multiple application functions
Technical field
The present invention relates to the field of computing applications, and in particular to a method for asynchronous and concurrent scheduling of computations across multiple application functions.
Background art
In cloud computing, and especially in fields that require large amounts of computation, a computing management platform is built over a network to make full use of the parallel processing capability of many computers. Within a prescribed computation period, static, transient, and dynamic security and stability analyses are computed to realize real-time monitoring, analysis, and control of power-grid security and stability. However, among the computing management platforms realized so far, some support asynchronous concurrency only among the many tasks inside a single application function, while others support only synchronous concurrency across multiple application functions. Within one computation period, a node that has finished computing cannot immediately trigger a new dispatch while tasks are still waiting to be scheduled, so computing resources sit idle and wasted and the computation period of the whole system is prolonged.
Summary of the invention
The technical problem to be solved by the present invention is that platforms supporting asynchronous concurrency only among the tasks of a single application function, or only synchronous concurrency across multiple application functions, cannot make sensible use of computing nodes and therefore waste idle computing resources. To address this, a method for asynchronous and concurrent scheduling of computations across multiple application functions is provided.
The technical solution adopted by the present invention is to construct such a method on a computing management platform comprising a management node and multiple computing nodes. The method comprises the following steps:
S1. The management node receives the computation data and starts the application functions that meet their start conditions. If a started application function needs to submit computing tasks to the platform, go to step S2; otherwise, start the subsequent application functions.
S2. The platform receives the submitted tasks and computes a task-scheduling granularity T_g for each application function according to formulas (1), (2), and (3), where m is the number of tasks submitted by the function, CT_i (1 ≤ i ≤ m) is the expected time of each task, n is the number of computing nodes on the platform, each node can run α_j (1 ≤ j ≤ n) tasks simultaneously, T_e is the scheduling overhead per computing job, and λ_0 is a preset threshold on the platform's scheduling-overhead performance factor. Formula (1) defines the objective function that minimizes the function's computing time E_t; formula (2) constrains T_g to be at least the function's minimum expected task time; formula (3) constrains the ratio of computing time to scheduling overhead.
$$\min\big(E_t(CT_i, T_g, m, n, T_e)\big) \approx \min\left(\left\lceil \frac{\sum_{i=1}^{m} CT_i / T_g}{\sum_{j=1}^{n} \alpha_j} \right\rceil (T_g + T_e)\right) \tag{1}$$
$$T_g \ge \min_{1 \le i \le m} \{CT_i\} \tag{2}$$
$$T_g / T_e \ge \lambda_0 \tag{3}$$
S3. According to the expected task times of each application function, the platform forms a task sequence for each function.
S4. According to each application function's preset computing priority, the platform adds each function's task sequence to the platform's task-scheduling queue.
S5. According to each function's scheduling granularity and the expected times of its tasks, the tasks in the scheduling queue are grouped into computing jobs assigned to the corresponding computing nodes; the jobs are added to the computing-job scheduling queue and dispatched in turn to the computing nodes of the platform that are in the idle state.
S6. When a computing node finishes computing, it sends its result to the management node. On receipt, the management node recovers and merges the corresponding result, marks the returning node idle, and triggers a new dispatch for the idle node; if tasks remain uncomputed, go to step S5, until the tasks of all application functions have been scheduled. For each job whose result has been recovered, check whether all task results of its application function have been returned; if so, report completion to that function and, according to the system's computation flow, check whether a new application function now meets its start condition; if one does, start it and go to step S2. If no application function meets its start condition and all other eligible functions have finished computing, the flow ends.
In the method of the present invention, step S5 comprises the following sub-steps:
S51. Find, in the task-scheduling queue, the first task that has not yet been computed or needs to be recomputed, and record the identifier a of its application function. Suppose function a has already generated J computing jobs, each containing N_task(l) (1 ≤ l ≤ J) tasks; the idle computing node awaiting dispatch is numbered c, and the newly generated job is numbered J+1.
S52. Take from the scheduling queue, one by one, the tasks of function a that have not yet been computed or need recomputation. If function a has a task k (1 ≤ k ≤ m) to compute, first check whether constraint (4) holds; if it does, add task k directly to job J+1 and continue traversing the subsequent tasks; otherwise, go to step S53 to check the job's expected-time constraint. If all of function a's tasks have already been added to jobs, go to step S54 to issue the job. Formula (4) is the task-count condition for adding task k directly to job J+1: after the addition, the number of tasks in job J+1 must not exceed the number of parallel computing processes α_c (1 ≤ c ≤ n) on the assigned node.
$$N_{\mathrm{task}}(J+1) + 1 \le \alpha_c \tag{4}$$
S53. For task k of function a, check whether, after it joins job J+1, the job's expected computing time still satisfies the granularity constraint of formula (5). If it does, add task k to job J+1 and go to step S52 to continue traversing function a's remaining tasks. If it does not, check whether the scheduling queue holds other tasks of function a still to be scheduled; if it does, go to step S51 to continue the traversal; if not, go to step S54. Formula (5) is the expected-time condition for adding task k to job J+1: the sum of the expected times of the tasks in job J+1 must not exceed α_c times the scheduling granularity T_g:
$$\sum_{i=1}^{N_{\mathrm{task}}(J+1)} CT_i + CT_k \le \alpha_c \times T_g \tag{5}$$
S54. Take the job numbered J+1 from the platform's computing-job queue and dispatch it to computing node c for computation.
In the method of the present invention, the computing priority of step S4 comprises the application function's run trigger condition, run priority, and run resource-consumption weight.
Implementing the method of the present invention has the following beneficial effects: asynchronous concurrency is supported among the tasks of multiple application functions, idle computing nodes are used sensibly to execute tasks, and the computing efficiency of the whole system is improved.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of asynchronous and concurrent scheduling of multiple application functions on the computing management platform;
Fig. 2 is a flowchart of the asynchronous and concurrent scheduling method;
Fig. 3 is a schematic diagram of forming computing jobs from application-function tasks.
Detailed description of the embodiments
For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments are now described in detail with reference to the drawings.
The object of the invention is to provide a method for asynchronous and concurrent scheduling of computations across multiple application functions. While remaining compatible with the traditional serial scheduling of a single application function's tasks, the invention lets multiple application functions submit tasks concurrently to the computing management platform for scheduled computation. This makes full use of the nodes' computing resources and effectively avoids the situation in which, because one function's job is still computing on one node, other nodes go unused by other functions. The invention also provides a method for setting each application function's job granularity: from the function's number of tasks, the expected time of a single task, the number of computing nodes, and the platform's scheduling overhead, a job-scheduling granularity is chosen for each function that shortens the computation period of the whole system while keeping the platform's overhead as low as possible, improving the computing efficiency of the whole system.
Fig. 1 schematically shows asynchronous and concurrent scheduling of multiple application functions on the computing management platform: i application functions submit tasks concurrently and, after the platform perceives them, their scheduling sequences are formed and an exemplary flow of asynchronous concurrent scheduling is carried out.
Fig. 2 is a flowchart of the method. Referring to Fig. 2, the method provided by the invention comprises the following steps:
S1. Step 1 in Fig. 2 describes how, after the platform's management node receives the computation data, it starts, according to the system's computation flow, the A application functions that meet their start conditions: application function 1, application function 2, and so on through application function A. If a started function needs to submit tasks to the platform, proceed to step 2) for job preparation; otherwise, return to 1) and continue starting subsequent functions.
S2. Step 2 in Fig. 2 describes how each application function submits its task information to the platform. Suppose application function a (1 ≤ a ≤ A) has m tasks, the expected time of each task is CT_i (1 ≤ i ≤ m), the system has n computing nodes, each node j can run α_j (1 ≤ j ≤ n) tasks (computing processes) simultaneously, and the platform's scheduling overhead per job is T_e. The chosen task-scheduling granularity T_g satisfies formula (1):
$$\min\big(E_t(CT_i, T_g, m, n, T_e)\big) \approx \min\left(\left\lceil \frac{\sum_{i=1}^{m} CT_i / T_g}{\sum_{j=1}^{n} \alpha_j} \right\rceil (T_g + T_e)\right) \tag{1}$$
The scheduling granularity must also be at least the minimum expected execution time among the tasks submitted by the function, satisfying constraint (2):
$$T_g \ge \min_{1 \le i \le m} \{CT_i\} \tag{2}$$
To improve parallel computing efficiency, the platform's scheduling overhead should take as small a share of the computation period as possible. Let λ_0 be the platform's scheduling-overhead performance factor; the granularity must then satisfy formula (3):
$$T_g / T_e \ge \lambda_0 \tag{3}$$
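To make the interaction of formulas (1)-(3) concrete, the sketch below estimates E_t for candidate granularities and keeps the smallest. This is an illustrative assumption rather than part of the patent: the candidate set (multiples of the smallest task time, up to 100 of them), the name `choose_granularity`, and the search strategy are all hypothetical; the patent fixes only the objective (1) and constraints (2)-(3).

```python
import math

def choose_granularity(ct, alphas, te, lam0):
    """Pick a scheduling granularity T_g that approximately minimizes E_t
    of formula (1), subject to constraints (2) and (3).

    ct     -- expected times CT_i of the application function's tasks
    alphas -- per-node parallel task capacities alpha_j
    te     -- per-job scheduling overhead T_e
    lam0   -- overhead-ratio threshold lambda_0
    """
    total_ct = sum(ct)
    capacity = sum(alphas)
    best_tg, best_et = None, float("inf")
    base = min(ct)
    # Candidate granularities: multiples of the smallest task time
    # (an illustrative search strategy, not prescribed by the patent).
    for k in range(1, 101):
        tg = k * base
        if tg < min(ct):       # constraint (2): T_g >= min CT_i
            continue
        if tg / te < lam0:     # constraint (3): T_g / T_e >= lambda_0
            continue
        jobs = math.ceil((total_ct / tg) / capacity)  # ceiling in (1)
        et = jobs * (tg + te)  # E_t estimate per formula (1)
        if et < best_et:
            best_tg, best_et = tg, et
    return best_tg, best_et
```

For four tasks of expected time 2 on two nodes of capacity 2, with T_e = 1 and λ_0 = 2, the search returns T_g = 2 with an estimated E_t of 3: a coarser granularity would only lengthen each job without reducing the job count.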
S3. Step 3 in Fig. 2 describes how the platform, according to the expected task times of each application function, forms the task sequence of each function.
S4. Step 4 in Fig. 2 describes how the platform, according to each application function's computing priority, adds each function's task sequence to the platform's task-scheduling queue.
S5. Step 5 in Fig. 2 describes how, following the principle that an idle computing node preferentially triggers scheduling, the tasks in the scheduling queue are organized into computing jobs according to each function's granularity and the tasks' expected times; the jobs formed are added to the computing-job scheduling queue and dispatched in turn to the computing nodes of the cluster that are in the idle state. Job formation and dispatch proceed as follows:
S51. Find, in the task-scheduling queue, the first task that has not yet been computed or needs to be recomputed, and record the identifier a of its application function. Suppose function a has already generated J computing jobs (finished or still computing), each containing N_task(l) (1 ≤ l ≤ J) tasks; the idle computing node awaiting dispatch is numbered c, and the newly generated job is numbered J+1.
S52. Take from the scheduling queue, one by one, the tasks of function a that have not yet been computed or need recomputation. If function a has a task k (1 ≤ k ≤ m) to compute, first check whether constraint (4) holds; if it does, add task k directly to job J+1 (case 1 of Fig. 3) and continue traversing the subsequent tasks; otherwise, go to S53 to check the job's expected-time constraint. If all of function a's tasks have already been added to jobs, go to S54 to issue the job.
$$N_{\mathrm{task}}(J+1) + 1 \le \alpha_c \tag{4}$$
S53. For task k of function a, check whether, after it joins job J+1, the job's expected computing time still satisfies the granularity constraint of formula (5). If it does, add task k to job J+1 (case 2 of Fig. 3) and go to S52 to continue traversing function a's remaining tasks. If it does not, check whether the scheduling queue holds other tasks of function a still to be scheduled; if it does, go to S51 to continue the traversal; if not, go to S54.
$$\sum_{i=1}^{N_{\mathrm{task}}(J+1)} CT_i + CT_k \le \alpha_c \times T_g \tag{5}$$
S54. Take the job numbered J+1 from the platform's computing-job queue and dispatch it to computing node c for computation.
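Sub-steps S51-S53 amount to greedily packing one function's pending tasks into a new job for node c under conditions (4) and (5). A minimal sketch follows, assuming a task is admitted when either condition holds (as the text of S52-S53 reads) and using illustrative names (`form_job`, `pending`) that are not from the patent:

```python
def form_job(pending, alpha_c, tg):
    """Pack pending tasks of one application function into a new computing
    job for idle node c, per sub-steps S51-S53.

    pending -- expected times CT_k of the function's tasks awaiting scheduling
    alpha_c -- parallel task capacity of node c
    tg      -- the function's scheduling granularity T_g
    Returns (job, remaining): packed task times and the leftover tasks.
    """
    job, remaining = [], []
    for ct_k in pending:
        fits_count = len(job) + 1 <= alpha_c         # constraint (4)
        fits_time = sum(job) + ct_k <= alpha_c * tg  # constraint (5)
        if fits_count or fits_time:
            job.append(ct_k)                         # admit task into job J+1
        else:
            remaining.append(ct_k)                   # left for a later job
    return job, remaining
```

With alpha_c = 2 and T_g = 2, three tasks of expected time 3 yield a job of two tasks (admitted by the count condition) and one task left over, since the third satisfies neither (4) nor (5).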
S6. Step 6 in Fig. 2 describes how, after a computing node finishes computing, it sends its result to the management node. On receipt, the management node recovers and merges the corresponding result, marks the returning node idle, and triggers a new dispatch for the idle node; if tasks remain uncomputed, go to S5, until the tasks of all application functions have been scheduled. For each job whose result has been recovered, check whether all task results of its application function have been returned; if so, report completion to that function and, according to the system's computation flow, check whether a new application function now meets its start condition; if one does, start it and go to S2. If no application function meets its start condition and all other eligible functions have finished computing, the flow ends.
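The overall S5-S6 cycle, in which a node returning its result is set idle and immediately triggers the next dispatch, can be simulated as below. Real computation is replaced by each job's total expected time, and every name (`run_schedule`, the node labels) is an illustrative assumption rather than part of the patent:

```python
from collections import deque

def run_schedule(jobs, nodes):
    """Simulate the S5/S6 loop: jobs go to idle nodes as soon as a node
    returns a result ("idle triggers scheduling"), and results are
    recovered asynchronously in completion order.
    """
    queue = deque(jobs)    # computing-job scheduling queue
    idle = list(nodes)     # nodes currently in the idle state
    running = []           # (finish_time, node, job) for busy nodes
    results = []
    clock = 0
    while queue or running:
        while queue and idle:               # dispatch to every idle node (S5)
            node = idle.pop()
            job = queue.popleft()
            running.append((clock + sum(job), node, job))
        running.sort(key=lambda r: r[0])    # earliest completion returns first
        finish, node, job = running.pop(0)  # node sends back its result (S6)
        clock = finish
        results.append((node, job))         # recover and merge the result
        idle.append(node)                   # node set idle; next loop iteration
                                            # is the newly triggered dispatch
    return results, clock
```

Three single-task jobs of expected times 2, 3, and 1 on two nodes finish at simulated time 3: the node that finishes its first job at time 2 is immediately re-dispatched instead of sitting idle, which is the behavior the background section says existing platforms lack.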
The embodiments of the invention have been described above with reference to the drawings, but the invention is not limited to them. The embodiments are illustrative rather than restrictive; under the teaching of the invention, those of ordinary skill in the art may derive many further forms without departing from the purpose of the invention and the scope protected by the claims, and all such forms fall within the protection of the invention.

Claims (3)

1. A method for asynchronous and concurrent scheduling of computations across multiple application functions, providing a computing management platform that comprises a management node and multiple computing nodes, characterized in that the method comprises the following steps:
S1. The management node receives the computation data and starts the application functions that meet their start conditions; if a started application function needs to submit computing tasks to the platform, go to step S2; otherwise, start the subsequent application functions.
S2. The platform receives the submitted tasks and computes a task-scheduling granularity T_g for each application function according to formulas (1), (2), and (3), where m is the number of tasks submitted by the function, CT_i (1 ≤ i ≤ m) is the expected time of each task, n is the number of computing nodes on the platform, each node can run α_j (1 ≤ j ≤ n) tasks simultaneously, T_e is the scheduling overhead per computing job, and λ_0 is a preset threshold on the platform's scheduling-overhead performance factor; formula (1) defines the objective function that minimizes the function's computing time E_t; formula (2) constrains T_g to be at least the function's minimum expected task time; formula (3) constrains the ratio of computing time to scheduling overhead.
$$\min\big(E_t(CT_i, T_g, m, n, T_e)\big) \approx \min\left(\left\lceil \frac{\sum_{i=1}^{m} CT_i / T_g}{\sum_{j=1}^{n} \alpha_j} \right\rceil (T_g + T_e)\right) \tag{1}$$
$$T_g \ge \min_{1 \le i \le m} \{CT_i\} \tag{2}$$
$$T_g / T_e \ge \lambda_0 \tag{3}$$
S3. According to the expected task times of each application function, the platform forms a task sequence for each function.
S4. According to each application function's preset computing priority, the platform adds each function's task sequence to the platform's task-scheduling queue.
S5. According to each function's scheduling granularity and the expected times of its tasks, the tasks in the scheduling queue are grouped into computing jobs assigned to the corresponding computing nodes; the jobs are added to the computing-job scheduling queue and dispatched in turn to the computing nodes of the platform that are in the idle state.
S6. When a computing node finishes computing, it sends its result to the management node. On receipt, the management node recovers and merges the corresponding result, marks the returning node idle, and triggers a new dispatch for the idle node; if tasks remain uncomputed, go to step S5, until the tasks of all application functions have been scheduled. For each job whose result has been recovered, check whether all task results of its application function have been returned; if so, report completion to that function and, according to the system's computation flow, check whether a new application function now meets its start condition; if one does, start it and go to step S2. If no application function meets its start condition and all other eligible functions have finished computing, the flow ends.
2. The method for asynchronous and concurrent scheduling of computations across multiple application functions according to claim 1, characterized in that step S5 comprises the following sub-steps:
S51. Find, in the task-scheduling queue, the first task that has not yet been computed or needs to be recomputed, and record the identifier a of its application function. Suppose function a has already generated J computing jobs, each containing N_task(l) (1 ≤ l ≤ J) tasks; the idle computing node awaiting dispatch is numbered c, and the newly generated job is numbered J+1.
S52. Take from the scheduling queue, one by one, the tasks of function a that have not yet been computed or need recomputation. If function a has a task k (1 ≤ k ≤ m) to compute, first check whether constraint (4) holds; if it does, add task k directly to job J+1 and continue traversing the subsequent tasks; otherwise, go to step S53 to check the job's expected-time constraint. If all of function a's tasks have already been added to jobs, go to step S54 to issue the job. Formula (4) is the task-count condition for adding task k directly to job J+1: after the addition, the number of tasks in job J+1 must not exceed the number of parallel computing processes α_c (1 ≤ c ≤ n) on the assigned node.
$$N_{\mathrm{task}}(J+1) + 1 \le \alpha_c \tag{4}$$
S53. For task k of function a, check whether, after it joins job J+1, the job's expected computing time still satisfies the granularity constraint of formula (5). If it does, add task k to job J+1 and go to step S52 to continue traversing function a's remaining tasks. If it does not, check whether the scheduling queue holds other tasks of function a still to be scheduled; if it does, go to step S51 to continue the traversal; if not, go to step S54. Formula (5) is the expected-time condition for adding task k to job J+1: the sum of the expected times of the tasks in job J+1 must not exceed α_c times the scheduling granularity T_g:
$$\sum_{i=1}^{N_{\mathrm{task}}(J+1)} CT_i + CT_k \le \alpha_c \times T_g \tag{5}$$
S54. Take the job numbered J+1 from the platform's computing-job queue and dispatch it to computing node c for computation.
3. The method for asynchronous and concurrent scheduling of computations across multiple application functions according to claim 1, characterized in that in step S4, the computing priority comprises the application function's run trigger condition, run priority, and run resource-consumption weight.
CN201410401606.6A (priority date 2014-08-14, filing date 2014-08-14): Method for computing asynchronous and concurrent scheduling of multiple application functions. Status: Pending, published as CN104166593A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410401606.6A CN104166593A (en) 2014-08-14 2014-08-14 Method for computing asynchronous and concurrent scheduling of multiple application functions


Publications (1)

Publication Number Publication Date
CN104166593A 2014-11-26

Family

ID=51910425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410401606.6A Pending CN104166593A (en) 2014-08-14 2014-08-14 Method for computing asynchronous and concurrent scheduling of multiple application functions

Country Status (1)

Country Link
CN (1) CN104166593A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063336A (en) * 2011-01-12 2011-05-18 国网电力科学研究院 Distributed computing multiple application function asynchronous concurrent scheduling method
CN102541959A (en) * 2010-12-31 2012-07-04 中国移动通信集团安徽有限公司 Method, device and system for scheduling ETL (extract-transform-load) tasks
CN102591712A (en) * 2011-12-30 2012-07-18 大连理工大学 Decoupled parallel scheduling method for dependent tasks in cloud computing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102541959A (en) * 2010-12-31 2012-07-04 中国移动通信集团安徽有限公司 Method, device and system for extract-transform-load (ETL) scheduling
CN102063336A (en) * 2011-01-12 2011-05-18 国网电力科学研究院 Distributed computing multiple application function asynchronous concurrent scheduling method
CN102591712A (en) * 2011-12-30 2012-07-18 大连理工大学 Decoupling parallel scheduling method for dependent tasks in cloud computing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105718312A (en) * 2016-01-20 2016-06-29 华南理工大学 Multi-queue backfilling job scheduling method for biological gene sequencing computing tasks
CN105718312B (en) * 2016-01-20 2018-10-30 华南理工大学 Multi-queue backfilling job scheduling method for biological gene sequencing computing tasks
CN106547857A (en) * 2016-10-20 2017-03-29 中国科学院声学研究所 Data mining method and device combining heartbeat and granularity
CN106547857B (en) * 2016-10-20 2019-09-27 中国科学院声学研究所 Data mining method and device combining heartbeat and granularity
CN111367631A (en) * 2019-07-12 2020-07-03 北京关键科技股份有限公司 High-throughput storage access device based on multi-node asynchronous concurrency

Similar Documents

Publication Publication Date Title
CN102063336B (en) Distributed computing multiple application function asynchronous concurrent scheduling method
CN105117286B (en) The dispatching method of task and streamlined perform method in MapReduce
CN102567080B Virtual machine placement selection system for load balancing in a cloud computing environment
CN102611622B Workload scheduling method for an elastic cloud computing platform
CN103607466B Wide-area multi-stage distributed parallel grid analysis method based on cloud computing
CN102521055B (en) Virtual machine resource allocating method and virtual machine resource allocating system
CN107168770B (en) Low-energy-consumption cloud data center workflow scheduling and resource supply method
CN105022670A (en) Heterogeneous distributed task processing system and processing method in cloud computing platform
Malik et al. An optimistic parallel simulation protocol for cloud computing environments
CN103401939A Load balancing method adopting a hybrid scheduling strategy
CN104536827A (en) Data dispatching method and device
CN103164190A Rapid parallelization method for a fully distributed watershed eco-hydrology model
CN103377032A Fine-grained scientific computation parallel processing device based on a heterogeneous multi-core chip
CN106293947B GPU-CPU (Graphics Processing Unit-Central Processing Unit) hybrid resource allocation system and method in a virtualized cloud environment
CN103473120A (en) Acceleration-factor-based multi-core real-time system task partitioning method
CN102637138A (en) Method for computing and scheduling virtual machine
CN103034534A (en) Electric power system analysis parallel computing method and system based on grid computation
CN103685492B (en) Dispatching method, dispatching device and application of Hadoop trunking system
CN101639788A (en) Multi-core parallel method for continuous system simulation based on TBB threading building blocks
CN104166593A (en) Method for computing asynchronous and concurrent scheduling of multiple application functions
Sharma et al. Dynamic load balancing algorithm for heterogeneous multi-core processors cluster
CN105117281A (en) Task scheduling method based on task application signal and execution cost value of processor core
Yang et al. Study on static task scheduling based on heterogeneous multi-core processor
CN104298564B Dynamically balanced heterogeneous system load computing method
Salmani et al. A fuzzy-based multi-criteria scheduler for uniform multiprocessor real-time systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20141126