CN103729246B - Method and device for dispatching tasks - Google Patents

Method and device for dispatching tasks

Info

Publication number
CN103729246B
CN103729246B (application CN201310750655.6A)
Authority
CN
China
Prior art keywords
task
user
demand
cpu resource
memory source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310750655.6A
Other languages
Chinese (zh)
Other versions
CN103729246A (en)
Inventor
刘璧怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201310750655.6A priority Critical patent/CN103729246B/en
Publication of CN103729246A publication Critical patent/CN103729246A/en
Application granted granted Critical
Publication of CN103729246B publication Critical patent/CN103729246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • General Factory Administration (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a method for dispatching tasks. The method comprises the steps of: setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the task's completion time; predicting the number of tasks the user can run concurrently, so that the dominant share ratios of all users on a YARN platform are equal; calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks; and changing the configured CPU and memory resource demands within preset ranges and repeating the above steps until the calculated total job completion time is minimal. The method reduces the subjective bias in a user's setting of CPU and memory resource demands when submitting a job on the YARN platform, shortens the job completion time, and improves the overall operating efficiency of the YARN platform.

Description

Task scheduling method and device
Technical field
The present invention relates to job optimization techniques on the second-generation Hadoop YARN platform, and in particular to a task scheduling method and device.
Background technology
Hadoop is currently the most popular big-data processing tool. It implements a distributed storage and computing system particularly suited to processing data at the terabyte (TB) and petabyte (PB) scale, and, by means of the MapReduce framework, lets users easily write distributed programs and migrate traditional business onto a distributed platform.
Most commercial products today are based on first-generation Hadoop, which in use suffers from bottlenecks, inflexible resource allocation, and a single programming framework. Second-generation Hadoop was developed to overcome these drawbacks: it introduces a new underlying platform, YARN, responsible for cluster resource allocation and task scheduling, and decouples the MapReduce framework into an independent optional component no longer tied to the platform. The scheduling algorithm adopted on the new resource-scheduling platform YARN is Dominant Resource Fairness (DRF). A program submitted by a user is called a job, and each job is split into a considerable number of tasks to run. When submitting a job, the user must specify the CPU resources and memory resources each task will occupy at run time. The scheduler on the YARN platform computes, for each task, the ratio of its CPU demand to the total CPU resources and the ratio of its memory demand to the total memory resources, and takes the larger of the two as the task's "dominant share". When multiple users submit multiple jobs simultaneously, the scheduler selectively starts tasks of each kind so as to keep every user's dominant share ratio equal.
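The dominant-share computation described above can be sketched as follows. This is an illustrative sketch of the DRF share calculation, not code from the patent; the cluster sizes and per-task demands are assumed example values.

```python
def dominant_share(cpu_demand, mem_demand, total_cpu, total_mem):
    """Return the larger of the task's CPU and memory demands expressed as
    fractions of the cluster totals -- the 'dominant share' used by DRF."""
    return max(cpu_demand / total_cpu, mem_demand / total_mem)

# Illustrative 100-core, 400 GB cluster (assumed values):
# a CPU-heavy task, 4 cores / 8 GB  -> dominated by CPU:    4/100 = 0.04
# a memory-heavy task, 1 core / 16 GB -> dominated by memory: 16/400 = 0.04
print(dominant_share(4, 8, 100, 400))   # 0.04
print(dominant_share(1, 16, 100, 400))  # 0.04
```

With these numbers the two tasks have equal dominant shares, so a DRF scheduler would treat the two users as equally served even though one consumes cores and the other memory.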
In the above scheduling algorithm, the user must set the CPU and memory resource demands when submitting a job, and the configured demands directly affect the job completion time. The larger the configured CPU or memory demand, the shorter each task's run time, but, constrained by the platform scheduler, the fewer tasks can run concurrently; the smaller the configured demand, the longer each task's run time, but the more tasks can run concurrently. Resource demand is therefore not linearly related to task completion time or to the number of concurrent tasks on the platform. At present, however, setting these parameters depends on user experience; for users lacking rich experience there is no guarantee of a reasonable setting, so the job run time can be significantly longer than the theoretical optimum.
Summary of the invention
In order to solve the above technical problem, the present invention proposes a task scheduling method and device that can eliminate the subjectivity of setting CPU resource demands and memory resource demands, thereby shortening the job completion time.
To achieve the above object, the present invention proposes a task scheduling method, the method including:
setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the task's completion time;
predicting the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal;
calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks;
within preset CPU and memory resource demand ranges, changing the configured CPU and memory resource demands and repeating the above steps until the calculated total job completion time is minimal.
Preferably, predicting the task's completion time includes:
generating a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands;
searching the case library for examples whose CPU and memory resource demands both match those set for the task;
computing the similarity between the feature values of each found example and those of the task, and taking the run time of the most similar example as the task's completion time.
Preferably, when no example whose CPU and memory resource demands both match the task's is found in the case library, the method further includes:
searching the case library for examples whose CPU resource demand or memory resource demand matches the task's.
Preferably, the feature values include Map/Reduce, task type, data volume, and complexity.
Preferably, when no example whose CPU resource demand or memory resource demand matches the task's is found in the case library, the method further includes:
taking the run time of a default example in the case library, chosen according to the task's task type, as the task's completion time.
Preferably, the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), where the theoretically fair CPU resource amount = global total CPU resources / number of users, the theoretically fair memory resource amount = global total memory resources / number of users, and min denotes taking the minimum.
Preferably, predicting the number of tasks the user can run concurrently from the resource allocation ratio includes:
when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource allocation ratio) / the task's CPU resource demand;
when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource allocation ratio) / the task's memory resource demand.
Preferably, the total job completion time T is:
T = tN/k
where N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
The present invention also proposes a task scheduling device, comprising at least:
a configuration module, for setting a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, for predicting the task's completion time from the CPU and memory resource demands set for the task, and for predicting the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal;
a calculation module, for calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, for changing the configured CPU and memory resource demands within preset CPU and memory resource demand ranges and sending the changed demands to the prediction module, until the calculated total job completion time is minimal.
Preferably, the device further includes:
a generation module, for generating a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands;
the prediction module being specifically configured to:
search the case library for examples whose CPU and memory resource demands both match those set for the task; compute the similarity between the feature values of each found example and those of the task; and take the run time of the most similar example as the task's completion time.
Preferably, the prediction module is specifically configured to:
search the case library for examples whose CPU resource demand or memory resource demand matches the task's.
Preferably, the prediction module is specifically configured to:
take the run time of a default example in the case library, chosen according to the task type in the task's feature values, as the task's completion time.
Preferably, the calculation module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
where T is the total job completion time, N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
Compared with the prior art, the present invention comprises: setting a corresponding CPU resource demand and memory resource demand for any task of a user and predicting the task's completion time; predicting the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal; calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks; and changing the configured CPU and memory resource demands and repeating the above steps until the calculated total job completion time is minimal. By this technical scheme, the subjective bias in a user's setting of CPU and memory resource demands when submitting a job on the YARN platform can be reduced, thereby shortening the job completion time and improving the overall operating efficiency of the YARN platform.
Description of the drawings
The accompanying drawings in the embodiments of the present invention are described below; together with the description, they serve to further explain the present invention and do not limit its scope.
Fig. 1 is a flowchart of the task scheduling method proposed by the present invention;
Fig. 2 is a flowchart of the method for predicting a task's completion time proposed by the present invention;
Fig. 3 is a structural diagram of the task scheduling device proposed by the present invention.
Detailed description of embodiments
To facilitate understanding by those skilled in the art, the present invention is further described below with reference to the accompanying drawings; this description does not limit the scope of the invention.
Referring to Fig. 1, the present invention proposes a task scheduling method, including:
Step 100: set corresponding CPU and memory resources for any task of a user, and predict the task's completion time.
In this step, the user uploads a job to the YARN platform, and the YARN platform splits the job into N identical tasks, N being a natural number greater than or equal to 1. The splitting rules YARN applies to a job are prior art and fall outside the scope of the present invention.
Referring to Fig. 2, in this step the method for predicting a task's completion time includes:
Step 200: generate a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands.
In this step, the feature values include:
(1) Map/Reduce.
This parameter divides task resource demands into two classes, Map and Reduce: Map indicates that the task's memory resource demand exceeds its CPU resource demand, and Reduce indicates that the task's CPU resource demand exceeds its memory resource demand.
(2) Task type.
This parameter indicates the task's specific activity category, such as database retrieval or file sorting.
(3) Data volume.
This parameter indicates the scale of the data the task needs to process.
(4) Complexity.
This parameter is set by the user according to an estimate of the task's complexity.
In this step, the feature values of a task or example may be described in numeric form; exactly how they are described is not limited by the present invention. For example, one bit may describe Map/Reduce: when the bit is 0 the example is of the Map type, and when it is 1 the example is of the Reduce type.
Step 201: search the case library for examples whose CPU and memory resource demands both match those set for the task.
In this step, when no example whose CPU and memory resource demands both match the task's is found in the case library, the case library is searched for examples whose CPU resource demand or memory resource demand matches the task's.
When no example whose CPU resource demand or memory resource demand matches the task's is found either, the run time of a default example in the case library, chosen according to the task's task type, is taken as the task's completion time.
Step 202: compute the similarity between the feature values of each found example and those of the task, and take the run time of the most similar example as the task's completion time.
In this step, how to compute the similarity between the feature values of a found example and those of the task is prior art and falls outside the scope of the present invention.
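The lookup and fallback logic of steps 201 and 202 can be sketched as follows. The dict layout, the numeric feature encoding, and the similarity metric (Euclidean distance over the feature vector) are our own assumptions — the patent deliberately leaves the similarity computation to existing techniques:

```python
import math

def predict_completion_time(features, cpu, mem, case_library, default_by_type=None):
    """features: numeric vector (Map/Reduce bit, task type, data volume, complexity).
    case_library: list of dicts with keys 'features', 'cpu', 'mem', 'runtime'."""
    # Step 201: examples whose CPU and memory demands both match.
    found = [c for c in case_library if c["cpu"] == cpu and c["mem"] == mem]
    if not found:
        # Fallback: examples matching either the CPU demand or the memory demand.
        found = [c for c in case_library if c["cpu"] == cpu or c["mem"] == mem]
    if not found:
        # Final fallback: default example chosen by task type (features[1]).
        return default_by_type[features[1]] if default_by_type else None
    # Step 202: smallest feature distance == highest similarity.
    best = min(found, key=lambda c: math.dist(c["features"], features))
    return best["runtime"]
```

Any similarity measure over the numeric feature vectors (cosine similarity, weighted distance, etc.) could replace `math.dist` without changing the surrounding lookup structure.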
Step 101: predict, from the dominant share ratio, the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal.
In this step, ideally each user's dominant share ratio equals the theoretically fair resource amount, the theoretically fair resource amount being the ratio of total resource utilization to the number of users. In a real environment, however, because of scheduling losses and other factors, the dominant share ratio can only be brought as close as possible to the theoretically fair resource amount, so that every user can use system resources fairly. The dominant share ratio is therefore used to measure a user's resource usage and to predict the number of tasks the user can run concurrently.
In this step, the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), where the theoretically fair CPU (or memory) resource amount = global total CPU (or memory) resources / number of users. Then, when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource allocation ratio) / the task's CPU resource demand; when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource allocation ratio) / the task's memory resource demand; min denotes taking the minimum.
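The prediction formula of step 101 can be sketched as below; the same expression applies to whichever resource (CPU or memory) dominates the share. Variable names and the example numbers are our own illustrative assumptions:

```python
def predict_concurrent_tasks(total_demand, usage, alloc_ratio, per_task_demand):
    """k = (user's total demand - user's current usage)
           * (1 - resource allocation ratio) / per-task demand,
    evaluated for the user's dominant resource (CPU or memory)."""
    return int((total_demand - usage) * (1 - alloc_ratio) / per_task_demand)

# Illustrative CPU-dominated case: total CPU demand of 100 cores, 20 in use,
# allocation ratio 0.2, 4 cores per task -> (100 - 20) * 0.8 / 4 = 16 tasks.
print(predict_concurrent_tasks(100, 20, 0.2, 4))  # 16
```

The truncation to an integer reflects that only whole tasks can be scheduled; the patent states the formula without specifying rounding, so that choice is an assumption.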
Step 102: calculate the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks.
In this step, the total job completion time T is:
T = tN/k
where N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
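A minimal worked example of the step 102 formula (the numbers are illustrative, not from the patent):

```python
def total_job_time(t, N, k):
    """T = t*N/k: per-task time t, N total tasks, k tasks running concurrently."""
    return t * N / k

# 1000 tasks of 30 s each with 50 running at a time -> 30 * 1000 / 50 = 600 s.
print(total_job_time(30, 1000, 50))  # 600.0
```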
Step 103: within the preset CPU and memory resource demand ranges, change the configured CPU and memory resource demands and repeat the above steps until the calculated total job completion time is minimal.
In this step, the preset CPU and memory resource demand ranges are determined empirically by the user.
In this step, because the total job completion time bears a parabola-like relation to the configured CPU or memory resource demand, the CPU and memory demand settings that minimize the total job completion time can be found by searching forwards or backwards. After repeated iterative calculation, when the calculated job completion time differs from the previous result by less than 0.01%, the minimal job completion time is considered to have been obtained and this step ends.
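Step 103 amounts to a search over the preset demand ranges. The sketch below simply evaluates every (CPU, memory) setting on the grid and keeps the minimum of T = t·N/k; the two prediction functions are placeholders for the case-library and dominant-share predictions described above, and for brevity an exhaustive scan replaces the 0.01% convergence rule — all of which are our simplifying assumptions:

```python
def minimize_job_time(cpu_range, mem_range, N, predict_t, predict_k):
    """Return (T, cpu_demand, mem_demand) minimizing T = t*N/k over the grid."""
    best = None
    for cpu in cpu_range:
        for mem in mem_range:
            t = predict_t(cpu, mem)   # predicted per-task completion time
            k = predict_k(cpu, mem)   # predicted concurrent task count
            T = t * N / k
            if best is None or T < best[0]:
                best = (T, cpu, mem)
    return best

# Toy stand-ins: raising the demand shortens each task but lowers concurrency,
# so T has an interior minimum (the parabola-like relation the text notes).
t_of = lambda c, m: 100 / (c * m)
k_of = lambda c, m: max(1, 32 - c * m)
T, cpu, mem = minimize_job_time(range(1, 6), range(1, 6), 1000, t_of, k_of)
print(T, cpu, mem)  # 390.625 4 4
```

Because the relation is unimodal, a forward/backward search with the 0.01% stopping rule, as the text describes, would reach the same minimum while evaluating fewer grid points.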
Referring to Fig. 3, the present invention also proposes a task scheduling device, comprising at least:
a configuration module, for setting a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, for predicting the task's completion time from the configured CPU and memory resource demands, and for predicting the number of tasks the user can run concurrently from the resource allocation ratio, such that the dominant share ratios of all users on the YARN platform are equal;
a calculation module, for calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, for changing the configured CPU and memory resource demands within preset CPU and memory resource demand ranges and sending the changed demands to the prediction module, until the calculated total job completion time is minimal.
The above task scheduling device further includes:
a generation module, for generating a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands.
In the above task scheduling device, the prediction module is specifically configured to:
search the case library for examples whose CPU and memory resource demands both match those set for the task; compute the similarity between the feature values of each found example and those of the task; and take the run time of the most similar example as the task's completion time.
In the above task scheduling device, the prediction module is specifically configured to:
search the case library for examples whose CPU resource demand or memory resource demand matches the task's.
In the above task scheduling device, the prediction module is specifically configured to:
take the run time of a default example in the case library, chosen according to the task type in the task's feature values, as the task's completion time.
In the above task scheduling device, the calculation module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
where T is the total job completion time, N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
The present invention can reduce the subjective bias in a user's setting of CPU and memory resource demands when submitting a job on the YARN platform, thereby shortening the job completion time and improving the overall operating efficiency of the YARN platform.
It should be noted that the embodiments described above serve only to facilitate understanding by those skilled in the art and do not limit the scope of the invention; without departing from the inventive concept of the present invention, any obvious replacements and improvements made to the present invention by those skilled in the art fall within its scope of protection.

Claims (11)

1. A task scheduling method, characterized in that the method comprises:
setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the task's completion time;
predicting the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal;
calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks;
within preset CPU and memory resource demand ranges, changing the configured CPU and memory resource demands and repeating the above steps until the calculated total job completion time is minimal;
wherein predicting the task's completion time comprises:
generating a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands;
searching the case library for examples whose CPU and memory resource demands both match those set for the task;
computing the similarity between the feature values of each found example and those of the task, and taking the run time of the most similar example as the task's completion time.
2. The task scheduling method according to claim 1, wherein, when no example whose CPU and memory resource demands both match the task's is found in the case library, the method further comprises:
searching the case library for examples whose CPU resource demand or memory resource demand matches the task's.
3. The task scheduling method according to claim 1, wherein the feature values comprise Map/Reduce, task type, data volume, and complexity.
4. The task scheduling method according to claim 3, wherein, when no example whose CPU resource demand or memory resource demand matches the task's is found in the case library, the method further comprises:
taking the run time of a default example in the case library, chosen according to the task's task type, as the task's completion time.
5. The task scheduling method according to claim 1, wherein the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), wherein the theoretically fair CPU resource amount = global total CPU resources / number of users, the theoretically fair memory resource amount = global total memory resources / number of users, and min denotes taking the minimum.
6. The task scheduling method according to claim 5, wherein predicting the number of tasks the user can run concurrently from the resource allocation ratio comprises:
when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretically fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource allocation ratio) / the task's CPU resource demand;
when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretically fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource allocation ratio) / the task's memory resource demand.
7. The task scheduling method according to claim 1, wherein the total job completion time T is:
T = tN/k
wherein N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
8. A task scheduling device, characterized in that it comprises at least:
a configuration module, for setting a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, for predicting the task's completion time from the CPU and memory resource demands set for the task, and for predicting the number of tasks the user can run concurrently, such that the dominant share ratios of all users on the YARN platform are equal;
a calculation module, for calculating the total job completion time from the user's total number of tasks, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, for changing the configured CPU and memory resource demands within preset CPU and memory resource demand ranges and sending the changed demands to the prediction module, until the calculated total job completion time is minimal;
the device further comprising:
a generation module, for generating a case library that stores, for each example, its feature values and its run times under different CPU and memory resource demands;
the prediction module being specifically configured to:
search the case library for examples whose CPU and memory resource demands both match those set for the task; compute the similarity between the feature values of each found example and those of the task; and take the run time of the most similar example as the task's completion time.
9. The task scheduling device according to claim 8, characterized in that the prediction module is specifically configured to:
search the case library for examples whose CPU resource demand or memory resource demand matches the task's.
10. The task scheduling device according to claim 8, characterized in that the prediction module is specifically configured to:
take the run time of a default example in the case library, chosen according to the task type in the task's feature values, as the task's completion time.
11. The task scheduling device according to claim 8, characterized in that the calculation module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
wherein T is the total job completion time, N is the user's total number of tasks, t is the predicted task completion time, and k is the predicted number of tasks the user can run concurrently.
CN201310750655.6A 2013-12-31 2013-12-31 Method and device for dispatching tasks Active CN103729246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310750655.6A CN103729246B (en) 2013-12-31 2013-12-31 Method and device for dispatching tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310750655.6A CN103729246B (en) 2013-12-31 2013-12-31 Method and device for dispatching tasks

Publications (2)

Publication Number Publication Date
CN103729246A CN103729246A (en) 2014-04-16
CN103729246B true CN103729246B (en) 2017-05-03

Family

ID=50453329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310750655.6A Active CN103729246B (en) 2013-12-31 2013-12-31 Method and device for dispatching tasks

Country Status (1)

Country Link
CN (1) CN103729246B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293890B * 2015-06-09 2019-11-05 阿里巴巴集团控股有限公司 Complexity-based service processing method and device
CN105786992A * 2016-02-17 2016-07-20 中国建设银行股份有限公司 Data query method and device for online transactions
CN105843687A * 2016-03-31 2016-08-10 乐视控股(北京)有限公司 Method and device for quantifying task resources
CN108268317B * 2016-12-30 2020-07-28 华为技术有限公司 Resource allocation method and device
CN108270603A * 2016-12-31 2018-07-10 中国移动通信集团陕西有限公司 Scheduling method and management system
CN106790636A * 2017-01-09 2017-05-31 上海承蓝科技股份有限公司 Load balancing system and method for a cloud computing server cluster
CN108132840B * 2017-11-16 2021-12-03 浙江工商大学 Resource scheduling method and device in a distributed system
CN108173905B * 2017-12-07 2021-06-18 北京奇艺世纪科技有限公司 Resource allocation method and device, and electronic equipment
CN108536528A * 2018-03-23 2018-09-14 湖南大学 Application-aware large-scale network job scheduling method
CN110389816B * 2018-04-20 2023-05-23 伊姆西Ip控股有限责任公司 Method, apparatus and computer readable medium for resource scheduling
CN109271251A * 2018-08-09 2019-01-25 深圳市瑞云科技有限公司 Method for scheduling node machines through constraints
TWI724531B * 2019-09-05 2021-04-11 財團法人資訊工業策進會 Equipment and method for assigning services
CN111338791A * 2020-02-12 2020-06-26 平安科技(深圳)有限公司 Method, device and equipment for scheduling cluster queue resources, and storage medium
CN111625352A * 2020-05-18 2020-09-04 杭州数澜科技有限公司 Scheduling method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019855A * 2012-11-21 2013-04-03 北京航空航天大学 Method for forecasting the execution time of MapReduce jobs
CN103064664A * 2012-11-28 2013-04-24 华中科技大学 Hadoop parameter automatic optimization method and system based on performance pre-evaluation
CN103279390A * 2012-08-21 2013-09-04 中国科学院信息工程研究所 Parallel processing system for small-job optimization

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9178935B2 (en) * 2009-03-05 2015-11-03 Paypal, Inc. Distributed steam processing

Also Published As

Publication number Publication date
CN103729246A (en) 2014-04-16

Similar Documents

Publication Publication Date Title
CN103729246B (en) Method and device for dispatching tasks
CN110737529B (en) Short-time multi-variable-size data job cluster scheduling adaptive configuration method
CN105656973B Task scheduling method and system in a distributed node group
CN104331321B Cloud computing task scheduling method based on tabu search and load balancing
CN105159762B Heuristic cloud computing task scheduling method based on a greedy strategy
Jung et al. Synchronous parallel processing of big-data analytics services to optimize performance in federated clouds
CN106991006B Cloud workflow task clustering method supporting dependency and time balance
Kumar et al. Maximizing business value by optimal assignment of jobs to resources in grid computing
Nguyen et al. A hybrid scheduling algorithm for data intensive workloads in a mapreduce environment
Wieder et al. Brief announcement: modelling mapreduce for optimal execution in the cloud
CN108182109A Workflow scheduling and data distribution method in a cloud environment
CN107329815A Cloud task load-balancing scheduling method based on BP tabu search
CN103677990B Scheduling method and device for virtual machine real-time tasks, and virtual machine
CN103593323A Machine learning method for MapReduce task resource allocation parameters
CN106371924B Task scheduling method for minimizing MapReduce cluster energy consumption
CN105373426B Hadoop-based memory-aware real-time job scheduling method for the Internet of Vehicles
CN108427602B Distributed computing task cooperative scheduling method and device
CN114996018A Resource scheduling method, node, system, device and medium for heterogeneous computing
CN105389204A Multiple-resource partial-order scheduling policy
CN104917839A Load balancing method for use in a cloud computing environment
CN109298920A Mixed-criticality task scheduling method based on a quasi-partitioning idea
CN110502334A Bandwidth-aware task stealing method, system and chip based on a hybrid memory architecture
Chen et al. Research on workflow scheduling algorithms in the cloud
CN110958192B Virtual data center resource allocation system and method based on virtual switches
CN108958919A Fairness evaluation model for the scheduling overhead of multiple constrained DAG tasks in cloud computing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant