CN105740059B - A particle swarm scheduling method for divisible tasks - Google Patents

A particle swarm scheduling method for divisible tasks

Info

Publication number
CN105740059B
CN105740059B (application CN201410768968.9A)
Authority
CN
China
Prior art keywords
task
subtask
particle
fitness
population
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410768968.9A
Other languages
Chinese (zh)
Other versions
CN105740059A (en)
Inventor
尤佳莉
乔楠楠
刘学
齐卫宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Acoustics CAS
Beijing Hili Technology Co Ltd
Shanghai 3Ntv Network Technology Co Ltd
Original Assignee
Institute of Acoustics CAS
Beijing Hili Technology Co Ltd
Shanghai 3Ntv Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS, Beijing Hili Technology Co Ltd, Shanghai 3Ntv Network Technology Co Ltd
Priority to CN201410768968.9A
Publication of CN105740059A publication Critical patent/CN105740059A/en
Application granted granted Critical
Publication of CN105740059B publication Critical patent/CN105740059B/en

Abstract

The present invention relates to a particle swarm scheduling method for divisible tasks, comprising: after a task to be scheduled is divided into subtasks, treating each randomly generated task allocation plan as a particle, taking the time performance of the allocation plan as the particle's fitness, computing the speed at which particles move toward one another from the difference between their fitness values, performing multiple evolutions of the swarm, and selecting the particle with the best fitness from the results of the evolutions; finally, taking the overhead value into account, each subtask in the task allocation plan corresponding to the best-fitness particle is scheduled.

Description

A particle swarm scheduling method for divisible tasks
Technical field
The present invention relates to computer network technology, and in particular to a particle swarm scheduling method for divisible tasks.
Background art
There are many common task scheduling methods. Some of them treat a task as an indivisible whole when scheduling; such methods include the following:
The Min-Min algorithm first predicts the minimum completion time of every task in the current task queue on every processor, then assigns the task with the smallest minimum completion time to the corresponding processor, updates that processor's ready time, and removes the assigned task from the queue. The remaining tasks are assigned in the same way until the queue is empty. The Min-Min algorithm is prone to load imbalance (a minimal sketch of this rule is given below).
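The following is a minimal sketch of the Min-Min rule described above, assuming a predicted execution-time matrix and per-processor ready times; all names are illustrative and not taken from any cited work.

```python
import numpy as np

def min_min(etc, ready):
    """Min-Min scheduling over an expected-time-to-compute matrix.

    etc[i][k] -> predicted execution time of task i on processor k
    ready[k]  -> ready time of processor k
    Returns a list of (task, processor) assignments in scheduling order.
    """
    etc = np.asarray(etc, dtype=float)
    ready = list(ready)
    pending = set(range(etc.shape[0]))
    schedule = []
    while pending:
        # best processor (minimum completion time) for every pending task
        best = {i: min(range(len(ready)), key=lambda k: ready[k] + etc[i][k])
                for i in pending}
        # pick the task whose minimum completion time is smallest
        i = min(pending, key=lambda t: ready[best[t]] + etc[t][best[t]])
        k = best[i]
        ready[k] += etc[i][k]            # update the chosen processor's ready time
        schedule.append((i, k))
        pending.remove(i)
    return schedule
```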
The Max-Min algorithm differs from Min-Min in that, after determining the earliest finish time of each task on each processor, it assigns the task with the largest earliest finish time to the corresponding processor, updates that processor's ready time accordingly, and then processes the remaining tasks. Max-Min improves on Min-Min in terms of load balancing.
The Promethee algorithm ranks pending tasks at the task end by a user-defined criterion (for example task size, predicted execution time on the current processor, or cost; several indices may also be weighted into one composite performance index). At the processor end, machine state is monitored in real time: as soon as a machine becomes idle, the highest-priority task in the precomputed ranking is assigned to it. Simulations show that by properly adjusting the weights among the performance indices, the algorithm can be made optimal with respect to various performance measures.
Other methods propose dividing a task into multiple subtasks that are scheduled one by one, but the tasks they analyze are limited to specific cases and do not address situations in which large batches of tasks arrive simultaneously and several partitioning schemes coexist. Such methods include the following:
A genetic algorithm for parallel scheduling of timing-correlated subtasks first analyzes the timing requirements between subtasks and sorts all subtasks by their time depth values at execution. Several "subtask-node" allocation matrices are then generated at random, each matrix being one allocation plan. The idea of the algorithm is to randomly generate several allocation plans to form an initial population and to apply mutation and selection operations to the individuals so that they improve generation by generation, yielding new plans with shorter completion times. After many generations a stable, good solution can be obtained. However, the genetic algorithm has high complexity and causes large computation delays when the total number of tasks in the network is large.
The EDTS algorithm optimally schedules the N steps inside a single task. It first predicts the execution time and energy consumption of each subtask on every machine, then sets a total deadline for this sequence of subtasks and, combined with the existing precedence relations, finds the subtask assignment that is as energy-efficient as possible under the fixed total deadline. However, EDTS splits and schedules only one task, achieving the best performance of that task alone; when many medium-sized tasks appear in the network, the mutual waiting time between subtasks caused by timing constraints becomes long, and the local optimum of each task conflicts with the global optimum.
Summary of the invention
The object of the present invention is to overcome the defects of prior-art subtask scheduling methods, namely high complexity, large computation delay and high energy consumption, and to provide a subtask scheduling method that jointly weighs time and energy consumption.
To achieve the above object, the present invention provides a particle swarm scheduling method for divisible tasks, comprising:
after the task to be scheduled is divided into subtasks, each randomly generated task allocation plan is treated as a particle, the time performance of the allocation plan is used as the particle's fitness, and the speed at which particles move toward one another is computed from the difference between their fitness values; the swarm is evolved for multiple generations, and the particle with the best fitness is selected from the results of those evolutions; finally, taking the overhead value into account, each subtask in the task allocation plan corresponding to the best-fitness particle is scheduled.
In the above technical solution, the method specifically includes:
Step 1) collecting the instruction count of the task to be scheduled, the instruction counts of the subtasks obtained after splitting, the temporal dependency relations between the subtasks, and the file block size of each task;
Step 2) collecting the running speed MIPS of each server in the cluster, the per-unit-time electricity cost CPS, the current load, the earliest idle time EST, and the bandwidth information between servers;
Step 3) sorting the tasks to be scheduled in the queue in descending order of instruction count, and executing steps 4) to 10) in turn for the task at the head of the queue until all tasks to be scheduled in the queue have been processed;
Step 4) decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation plans for the task to form an initial population;
wherein the subtask structure graph reflects the precedence relations between the subtasks into which the task to be scheduled is decomposed;
the initial population contains N randomly generated particles P_1, P_2, ..., P_N, each particle representing one "task-server" allocation plan; if the total number of servers is M, each particle P_n is expressed as an L×M matrix S_n, where 1≤n≤N, the size of N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L denotes the number of subtasks obtained after decomposing the task to be scheduled;
Step 5) computing the time performance Makespan of the N particles to obtain the fitness of the N particles;
Step 6) computing the movement speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next-generation population;
Step 7) judging whether the current number of evolutions is less than a preset value; if so, re-executing step 5), otherwise executing step 8);
Step 8) selecting the particle P_best with the highest fitness from all previous evolution results;
Step 9) setting the weight P corresponding to the overhead performance, and performing cost-based task migration;
Step 10) outputting the task migration result of step 9) and scheduling the subtasks according to this result.
In the above technical solution, in step 5) the time performance Makespan is calculated according to the following principles:
a. a subtask may start only after the finish times of all its predecessor subtasks;
b. a subtask must receive the output files of all its predecessor subtasks as its input files;
c. a subtask can start only when the processor to which it is assigned is idle;
d. a subtask must start as early as possible while satisfying the above conditions a-c;
e. the execution time of subtask Task_j on Server_k is Task_MI_j / MIPS_k;
f. the time span Makespan of a completed task is the time span between the start time of its first subtask and the finish time of its last subtask.
In the above technical solution, in step 5) the fitness Fitness_n of the n-th particle is calculated according to the following formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} - Makespan_n    formula (2).
In the above technical solution, step 6) includes:
labeling the particle with the maximum fitness value as P_best and its corresponding allocation matrix as S_best;
defining the movement speed of particle P_n toward P_best as V_n,best, calculated by the following formula (3):
V_n,best = (fitness_best - fitness_n) / fitness_best    formula (3);
in one evolution of the swarm, each particle P_n moves one step toward P_best at speed V_n,best, and the N new particles obtained form the next-generation population.
In the above technical solution, step 9) includes:
after selecting the particle P_best with the best fitness in the current swarm, computing, for the current subtask, its time performance and overhead performance when executed on any other server, and computing the gains ΔC and ΔFT of that performance relative to P_best, where C denotes the cost of executing on a given server, FT denotes the finish time of executing on that server, and C_base and FT_base are the C and FT values computed before task migration;
over all servers, finding the server that maximizes the value computed by the following formula (4), and then assigning the subtask to that server:
P * ΔC / C_base + (1 - P) * ΔFT / FT_base    formula (4).
The present invention has the following advantages:
1. The method of the invention takes multiple performance factors into account: when evaluating a scheduling plan it considers not only the total time performance but also indices such as the overhead of task execution and the load balance within the cluster, which makes the task allocation more reasonable and efficient;
2. The method of the invention considers both the scenario in which batch task requests arrive simultaneously and the temporal dependencies between subtasks; the subtask splitting operation overcomes the limitation imposed by software-license resource constraints on the virtual machines in a cloud environment.
Brief description of the drawings
Fig. 1 is a schematic diagram of the network environment to which the method of the present invention is applied in one embodiment;
Fig. 2 is a schematic diagram of a serial precedence relation between subtasks;
Fig. 3 is a schematic diagram of a parallel precedence relation between subtasks;
Fig. 4 is a schematic diagram of a mixed precedence relation between subtasks;
Fig. 5 is a flow chart of the method of the present invention.
Specific embodiment
The invention will now be further described with reference to the accompanying drawings.
The method of the invention addresses batches of divisible task requests initiated by clients. It takes into account the granularity of the subtasks formed after task splitting and the precedence relations between the subtasks, and combines factors such as the computing capability and power of the different processors in the server cluster and the network bandwidth to schedule the batch tasks, so that the scheduling result performs well in both total time and charge cost.
Before describing the method of the invention in detail, the concepts involved in the method are first introduced.
The method of the invention requires the following three preconditions:
1. The scheduler of a region is responsible for scheduling all terminal tasks in that region, and tasks initiated by terminals queue for the scheduler's decision;
2. The batch tasks handled by the present invention are divisible tasks (each may be partitioned into multiple subtasks), and the subtasks have serial, parallel or mixed precedence relations. If a subtask has predecessor subtasks, it can be executed only after all of its predecessors have completed;
3. As stated in condition 2, satisfying the precedence relations between subtasks is a necessary condition for running a subtask. When a subtask and one of its predecessor subtasks are assigned to different servers for execution, the output file of the predecessor subtask must be transferred over the Internet or the local network to the server hosting the successor subtask, where it serves as an input file; if a subtask and its predecessor subtask are assigned to the same server, no file transfer is needed.
Figs. 2, 3 and 4 are schematic diagrams of, respectively, the serial, parallel and mixed precedence relations between subtasks described in condition 2.
The method of the invention pursues the following two performance indicators:
1. the overall completion time of the batch tasks currently pending;
2. the overhead of executing the batch tasks, including traffic charges and processor electricity charges.
The method of the invention mainly considers the following three factors in performance evaluation:
1. the influence of CPUs with different execution speeds on task completion time;
2. the influence of the network bandwidth determined by the network topology on I/O file transfer delay;
3. the influence of CPUs with different power ratings on power consumption and the corresponding electricity charges.
The symbols involved in the method of the present invention are defined as follows:
Job_i: the i-th task;
Task_j: the j-th subtask;
Server_k: the k-th server;
Job_MI_i: the instruction count of Job_i (millions);
Makespan_i: the actual delay experienced by Job_i from initiation to completion of execution;
Task_MI_j: the instruction count of Task_j (millions);
MIPS_k: the number of instructions Server_k executes per second (millions/second);
CPS_k: the cost per second of Server_k in operating mode;
ECT_k: the earliest idle time of Server_k.
In the embodiment shown in Fig. 1, the network includes a cluster of 3 servers; the cluster topology and bandwidth conditions are as shown in Fig. 1.
With reference to Fig. 5, the method of the present invention includes the following steps:
Step 1) collecting the instruction count of the task to be scheduled, the instruction counts of the subtasks obtained after splitting, the temporal dependency relations between the subtasks, and the file block size of each task;
Step 2) collecting the running speed MIPS of each server in the cluster, the per-unit-time electricity cost CPS, the current load, the earliest idle time EST, and the bandwidth information between servers;
Step 3) sorting the tasks to be scheduled in the queue in descending order of instruction count, and executing steps 4) to 10) in turn for the task at the head of the queue until all tasks to be scheduled in the queue have been processed;
Step 4) decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation plans for the task to form an initial population;
The subtask structure graph reflects the precedence relations, i.e. the predecessor-successor relations, between the subtasks into which the task to be scheduled is decomposed; it guarantees that, in the subsequent scheduling and execution process, any subtask starts only after all of its predecessor subtasks have completed.
In the initial population, N particles P_1, P_2, ..., P_N are generated at random; each particle represents one "task-server" allocation plan. If the total number of servers is M, each particle P_n (1≤n≤N) is represented by an L×M matrix S_n, where the size of N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L denotes the number of subtasks obtained after decomposing the task to be scheduled. A minimal sketch of this initialization is given below.
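The following is a minimal sketch of how such an initial population could be generated, assuming each L×M matrix is a 0/1 matrix with exactly one 1 per row (the patent does not specify the entries of S_n); the names and the NumPy representation are illustrative.

```python
import numpy as np

def init_population(num_particles, num_subtasks, num_servers, seed=None):
    """Randomly generate a swarm of "subtask-server" allocation matrices.

    Each particle is an L x M 0/1 matrix S_n with exactly one 1 per row,
    i.e. row j marks the server to which subtask j is assigned.
    """
    rng = np.random.default_rng(seed)
    population = []
    for _ in range(num_particles):
        s = np.zeros((num_subtasks, num_servers), dtype=int)
        # pick a server uniformly at random for every subtask
        chosen = rng.integers(0, num_servers, size=num_subtasks)
        s[np.arange(num_subtasks), chosen] = 1
        population.append(s)
    return population

# example: N = 10 particles, L = 5 subtasks, M = 3 servers
swarm = init_population(10, 5, 3, seed=42)
```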
Step 5) computing the Makespan of the N particles to obtain the fitness of the N particles;
In this step, the time performance Makespan is calculated for each particle (i.e. for each task allocation plan) according to the following principles; a minimal sketch follows the list:
a. a subtask may start only after the finish times of all its predecessor subtasks;
b. a subtask must receive the output files of all its predecessor subtasks as its input files;
c. a subtask can start only when the processor to which it is assigned is idle;
d. a subtask must start as early as possible while satisfying the above conditions a-c;
e. the execution time of subtask Task_j on Server_k is Task_MI_j / MIPS_k;
f. the time span Makespan of a completed task is the time span between the start time of its first subtask and the finish time of its last subtask.
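Below is a minimal sketch of the Makespan evaluation under principles a-f. It assumes subtasks are visited in a topological order of the subtask structure graph and that the time to transfer a predecessor's output file is its size divided by the link bandwidth (the patent does not spell out the transfer-time formula); all names are illustrative.

```python
def makespan(alloc, task_mi, mips, est, preds, out_size, bandwidth):
    """Estimate the Makespan of one allocation plan under principles a-f.

    alloc[j]        -> index of the server assigned to subtask j
    task_mi[j]      -> instruction count of subtask j (millions)
    mips[k]         -> speed of server k (million instructions per second)
    est[k]          -> earliest idle time of server k
    preds[j]        -> list of predecessor subtasks of subtask j
    out_size[p]     -> output file size of subtask p
    bandwidth[a][b] -> bandwidth between servers a and b
    Subtasks are assumed to be indexed in topological order.
    """
    est = list(est)                        # per-server ready times, updated as we assign
    starts = [0.0] * len(alloc)
    finish = [0.0] * len(alloc)
    for j, k in enumerate(alloc):
        # a/b: wait for all predecessors and for their output files to arrive
        ready = 0.0
        for p in preds[j]:
            transfer = 0.0 if alloc[p] == k else out_size[p] / bandwidth[alloc[p]][k]
            ready = max(ready, finish[p] + transfer)
        # c/d: start as soon as both the data and the processor are available
        starts[j] = max(ready, est[k])
        # e: execution time is Task_MI_j / MIPS_k
        finish[j] = starts[j] + task_mi[j] / mips[k]
        est[k] = finish[j]
    # f: span from the start of the first subtask to the finish of the last one
    return max(finish) - min(starts)
```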
After the Makespan of the N particles has been obtained, the fitness Fitness_n of the n-th particle is calculated according to the following formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} - Makespan_n    formula (2)
The fitness of each of the N particles can be obtained from this formula; a short sketch is given below.
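A minimal sketch of formula (2), computing all N fitness values from the N Makespan values (function and variable names are illustrative):

```python
def fitness(makespans):
    """Formula (2): a particle's fitness is how much shorter its Makespan is
    than the worst Makespan in the current swarm (larger is better)."""
    worst = max(makespans)
    return [worst - m for m in makespans]

# example: three particles whose Makespans are 40 s, 55 s and 47 s
print(fitness([40.0, 55.0, 47.0]))   # -> [15.0, 0.0, 8.0]
```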
Step 6) computing the movement speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next-generation population;
After the fitness values of the N particles have been calculated in the previous step, the particle with the maximum fitness value is labeled P_best and its corresponding allocation matrix S_best; the movement speed of particle P_n toward P_best is defined as V_n,best, calculated by the following formula (3):
V_n,best = (fitness_best - fitness_n) / fitness_best    formula (3)
V_n,best can be regarded as the probability that an element of matrix S_n changes to the corresponding element of S_best. In one evolution of the swarm, each particle P_n moves one step toward P_best at speed V_n,best, and the N new particles obtained form the next-generation population. A minimal sketch of one evolution step is given below.
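The sketch below performs one evolution step under formula (3). It reads V_n,best as the probability that each row (subtask assignment) of S_n is replaced by the matching row of S_best; the patent only speaks of an element-variation probability, so this per-row reading, the zero-denominator guard, and all names are assumptions.

```python
import numpy as np

def evolve_once(population, fitness_values, seed=None):
    """One evolution of the swarm: every particle moves one step toward P_best."""
    rng = np.random.default_rng(seed)
    best_idx = int(np.argmax(fitness_values))
    s_best, f_best = population[best_idx], fitness_values[best_idx]
    next_gen = []
    for s_n, f_n in zip(population, fitness_values):
        # formula (3): V_n,best = (fitness_best - fitness_n) / fitness_best
        v = 0.0 if f_best == 0 else (f_best - f_n) / f_best
        child = s_n.copy()
        # each subtask's assignment flips to the best particle's choice with probability v
        flip = rng.random(child.shape[0]) < v
        child[flip] = s_best[flip]
        next_gen.append(child)
    return next_gen
```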
Step 7) judging whether the current number of evolutions is less than a preset value; if so, re-executing step 5), otherwise executing step 8). The preset value is determined by the number of subtasks and the number of servers: the more subtasks and servers there are, the larger the preset value is set.
Step 8) selecting the particle P_best with the highest fitness from all previous evolution results;
Step 9) setting the weight P corresponding to the overhead performance (an empirical value obtained from repeated tests), and performing cost-based task migration;
After the particle P_best with the best fitness has been selected from the current swarm, for the current subtask, its time performance and overhead performance when executed on any other server are computed, together with the gains ΔC and ΔFT of that performance relative to P_best, where C denotes the cost of executing on a given server (including the traffic charge and the processor electricity charge), FT denotes the finish time of executing on that server, and C_base and FT_base are the C and FT values computed before task migration.
Over all servers, the server that maximizes the value computed by the following formula (4) is found, and the subtask is then assigned to that server (a minimal sketch of this scoring is given below):
P * ΔC / C_base + (1 - P) * ΔFT / FT_base    formula (4)
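A minimal sketch of the migration step under formula (4). It assumes the gains are reductions relative to the pre-migration values (ΔC = C_base - C, ΔFT = FT_base - FT); the patent calls them gains without giving the sign convention, and cost_on / finish_on are hypothetical helpers that predict the cost and finish time of the current subtask on a candidate server.

```python
def best_migration_target(p_weight, c_base, ft_base, cost_on, finish_on, servers):
    """Pick the server that maximizes formula (4):
       P * dC / C_base + (1 - P) * dFT / FT_base
    """
    def score(k):
        d_c = c_base - cost_on(k)        # cost saved by moving to server k
        d_ft = ft_base - finish_on(k)    # finish time saved by moving to server k
        return p_weight * d_c / c_base + (1 - p_weight) * d_ft / ft_base
    return max(servers, key=score)
```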
Step 10) outputting the task migration result of step 9) and scheduling the subtasks according to this result.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention. Although the invention has been described in detail with reference to the embodiments, those skilled in the art should understand that modifications or equivalent substitutions of the technical solution of the invention that do not depart from its spirit and scope shall all fall within the scope of the claims of the present invention.

Claims (5)

1. A particle swarm scheduling method for divisible tasks, comprising:
after the task to be scheduled is divided into subtasks, treating each randomly generated task allocation plan as a particle, taking the time performance of the allocation plan as the particle's fitness, computing the speed at which particles move toward one another from the difference between their fitness values, performing multiple evolutions of the swarm, and selecting the particle with the best fitness from the results of the evolutions; finally, taking the overhead value into account, scheduling each subtask in the task allocation plan corresponding to the best-fitness particle;
wherein the method specifically includes:
Step 1) collecting the instruction count of the task to be scheduled, the instruction counts of the subtasks obtained after splitting, the temporal dependency relations between the subtasks, and the file block size of each task;
Step 2) collecting the running speed MIPS of each server in the cluster, the per-unit-time electricity cost CPS, the current load, the earliest idle time EST, and the bandwidth information between servers;
Step 3) sorting the tasks to be scheduled in the queue in descending order of instruction count, and executing steps 4) to 10) in turn for the task at the head of the queue until all tasks to be scheduled in the queue have been processed;
Step 4) decomposing the current task to be scheduled to obtain a subtask structure graph, and randomly generating N allocation plans for the task to form an initial population;
wherein the subtask structure graph reflects the precedence relations between the subtasks into which the task to be scheduled is decomposed;
the initial population contains N randomly generated particles P_1, P_2, ..., P_N, each particle representing one "task-server" allocation plan; if the total number of servers is M, each particle P_n is expressed as an L×M matrix S_n, where 1≤n≤N, the size of N is set according to the number of servers in the cluster and the number of subtasks contained in the task to be scheduled, and L denotes the number of subtasks obtained after decomposing the task to be scheduled;
Task_j denotes the j-th subtask and Server_k denotes the k-th server;
Step 5) computing the time span Makespan of the N particles to obtain the fitness of the N particles;
Step 6) computing the movement speed of each particle in the new round of iteration, evolving each particle once, and obtaining the next-generation population;
Step 7) judging whether the current number of evolutions is less than a preset value; if so, re-executing step 5), otherwise executing step 8);
Step 8) selecting the particle P_best with the highest fitness from all previous evolution results;
Step 9) setting the weight P corresponding to the overhead performance, and performing cost-based task migration;
Step 10) outputting the task migration result of step 9) and scheduling the subtasks according to this result.
2. The particle swarm scheduling method for divisible tasks according to claim 1, wherein in step 5) the time span Makespan is calculated according to the following principles:
a. a subtask may start only after the finish times of all its predecessor subtasks;
b. a subtask must receive the output files of all its predecessor subtasks as its input files;
c. a subtask can start only when the processor to which it is assigned is idle;
d. a subtask must start as early as possible while satisfying the above conditions a-c;
e. the execution time of subtask Task_j is Task_MI_j / MIPS_k, where Task_MI_j is the instruction count of the j-th subtask Task_j and MIPS_k is the number of instructions Server_k executes per second;
f. the time span Makespan of a completed task is the time span between the start time of its first subtask and the finish time of its last subtask.
3. The particle swarm scheduling method for divisible tasks according to claim 1, wherein in step 5) the fitness Fitness_n of the n-th particle is calculated according to the following formula (2):
Fitness_n = max{Makespan_1, Makespan_2, ..., Makespan_N} - Makespan_n    formula (2)
where Makespan_n is the time span of the n-th particle and 1≤n≤N.
4. The particle swarm scheduling method for divisible tasks according to claim 1, wherein step 6) includes:
labeling the particle with the maximum fitness value as P_best and its corresponding allocation matrix as S_best;
defining the movement speed of particle P_n toward P_best as V_n,best, calculated by the following formula (3):
V_n,best = (fitness_best - fitness_n) / fitness_best    formula (3)
where fitness_best is the fitness of the particle with the maximum fitness value;
in one evolution of the swarm, each particle P_n moves one step toward P_best at speed V_n,best, and the N new particles obtained form the next-generation population.
5. The particle swarm scheduling method for divisible tasks according to claim 1, wherein step 9) includes:
after selecting the particle P_best with the best fitness in the current swarm, computing, for the current subtask, its time performance and overhead performance when executed on any other server, and computing the gains ΔC and ΔFT of that performance relative to P_best, where C denotes the cost of executing on a given server, FT denotes the finish time of executing on that server, and C_base and FT_base are the C and FT values computed before task migration;
over all servers, finding the server that maximizes the value computed by the following formula (4), and then assigning the subtask to that server:
P * ΔC / C_base + (1 - P) * ΔFT / FT_base    formula (4).
CN201410768968.9A 2014-12-11 2014-12-11 A particle swarm scheduling method for divisible tasks Expired - Fee Related CN105740059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410768968.9A CN105740059B (en) 2014-12-11 2014-12-11 A particle swarm scheduling method for divisible tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410768968.9A CN105740059B (en) 2014-12-11 2014-12-11 A particle swarm scheduling method for divisible tasks

Publications (2)

Publication Number Publication Date
CN105740059A CN105740059A (en) 2016-07-06
CN105740059B (en) 2018-12-04

Family

ID=56240898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410768968.9A Expired - Fee Related CN105740059B (en) 2014-12-11 2014-12-11 A particle swarm scheduling method for divisible tasks

Country Status (1)

Country Link
CN (1) CN105740059B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106371908A (en) * 2016-08-31 2017-02-01 武汉鸿瑞达信息技术有限公司 Optimization method for image/video filter task distribution based on PSO (Particle Swarm Optimization)
CN107729130A (en) * 2017-09-20 2018-02-23 昆明理工大学 A kind of time point based on information physical system does not know task-dynamic dispatching method
CN107613025B (en) * 2017-10-31 2021-01-08 武汉光迅科技股份有限公司 Message queue sequence reply-based implementation method and device
CN110928651B (en) * 2019-10-12 2022-03-01 杭州电子科技大学 Service workflow fault-tolerant scheduling method under mobile edge environment
CN111414198B (en) * 2020-03-18 2023-05-02 北京字节跳动网络技术有限公司 Request processing method and device
TWI749992B (en) * 2021-01-06 2021-12-11 力晶積成電子製造股份有限公司 Wafer manufacturing management method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604258A (en) * 2009-07-10 2009-12-16 杭州电子科技大学 A kind of method for scheduling task of embedded heterogeneous multiprocessor system
CN102724220A (en) * 2011-03-29 2012-10-10 无锡物联网产业研究院 Method and apparatus for task cooperation, and system for internet of things
CN103019822A (en) * 2012-12-07 2013-04-03 北京邮电大学 Large-scale processing task scheduling method for income driving under cloud environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604258A (en) * 2009-07-10 2009-12-16 杭州电子科技大学 A kind of method for scheduling task of embedded heterogeneous multiprocessor system
CN102724220A (en) * 2011-03-29 2012-10-10 无锡物联网产业研究院 Method and apparatus for task cooperation, and system for internet of things
CN103019822A (en) * 2012-12-07 2013-04-03 北京邮电大学 Large-scale processing task scheduling method for income driving under cloud environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Self-Adaptive Learning PSO-Based Deadline Constrained Task Scheduling for Hybrid IaaS Cloud; Xingquan Zuo et al.; IEEE Transactions on Automation Science and Engineering; 2014-04-30; vol. 11, no. 2; 564-573 *
Cloud computing task scheduling algorithm based on an improved particle swarm algorithm; Zhang Tao et al.; Computer Engineering and Applications (计算机工程与应用); 2013-10-01; vol. 49, no. 19; 68-72 *
Particle swarm algorithm for project scheduling problems with divisible tasks; Deng Linyi et al.; Control and Decision (控制与决策); 2008-06-15; vol. 23, no. 6; 681-684 *

Also Published As

Publication number Publication date
CN105740059A (en) 2016-07-06

Similar Documents

Publication Publication Date Title
CN105740059B (en) A particle swarm scheduling method for divisible tasks
CN108829494B (en) Container cloud platform intelligent resource optimization method based on load prediction
CN103605567B (en) Cloud computing task scheduling method facing real-time demand change
US10031774B2 (en) Scheduling multi-phase computing jobs
CN105718479B (en) Execution strategy generation method and device under cross-IDC big data processing architecture
CN103729248B (en) A kind of method and apparatus of determination based on cache perception task to be migrated
CN104765640B (en) A kind of intelligent Service dispatching method
CN103593323A (en) Machine learning method for Map Reduce task resource allocation parameters
CN111274036B (en) Scheduling method of deep learning task based on speed prediction
CN103699433B (en) One kind dynamically adjusts number of tasks purpose method and system in Hadoop platform
CN112685153A (en) Micro-service scheduling method and device and electronic equipment
CN110347602A (en) Multitask script execution and device, electronic equipment and readable storage medium storing program for executing
Quang-Hung et al. Epobf: energy efficient allocation of virtual machines in high performance computing cloud
CN115220898A (en) Task scheduling method, system, device and medium based on deep reinforcement learning
Fathalla et al. Best-KFF: a multi-objective preemptive resource allocation policy for cloud computing systems
CN102184124B (en) Task scheduling method and system
Iglesias et al. A methodology for online consolidation of tasks through more accurate resource estimations
CN111367632A (en) Container cloud scheduling method based on periodic characteristics
Yakubu et al. Priority based delay time scheduling for quality of service in cloud computing networks
CN111506407B (en) Resource management and job scheduling method and system combining Pull mode and Push mode
Haque et al. A priority-based process scheduling algorithm in cloud computing
CN106648866A (en) KVM platform meeting task time limit requirements-based resource scheduling method
Chen et al. Pickyman: A preemptive scheduler for deep learning jobs on gpu clusters
Bhutto et al. Analysis of Energy and Network Cost Effectiveness of Scheduling Strategies in Datacentre
Pan Heterogeneous High-Performance System Algorithm Based on Computer Big Data Technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181204

CF01 Termination of patent right due to non-payment of annual fee