CN103729246A - Method and device for dispatching tasks - Google Patents
- Publication number
- CN103729246A CN103729246A CN201310750655.6A CN201310750655A CN103729246A CN 103729246 A CN103729246 A CN 103729246A CN 201310750655 A CN201310750655 A CN 201310750655A CN 103729246 A CN103729246 A CN 103729246A
- Authority
- CN
- China
- Prior art keywords
- task
- user
- cpu resource
- demand
- memory resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides a task scheduling method. The method comprises: setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the completion time of the task; predicting the number of tasks the user can run concurrently, so that the dominant share ratios of all users on the YARN platform are identical; calculating the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks; and changing the set CPU and memory resource demands within preset CPU and memory resource demand ranges, repeating the above steps until the calculated total job completion time is minimal. The method reduces the subjective bias introduced when users set CPU and memory resource demands at job submission on the YARN platform, shortens job completion time, and improves the overall operating efficiency of the YARN platform.
Description
Technical field
The present invention relates to job optimization technology for the YARN platform in second-generation Hadoop, and in particular to a task scheduling method and device.
Background art
Hadoop is currently the most popular big-data processing tool. It implements a distributed storage and computing system, is particularly suited to processing data at the terabyte (TB) and petabyte (PB) scale, and, by means of the MapReduce framework, lets users easily write distributed programs and migrate traditional services onto a distributed platform.
Most current commercial products are based on first-generation Hadoop technology and in use suffer from bottlenecks, inflexible resource allocation, a single programming framework, and similar problems. Second-generation Hadoop, developed to overcome these shortcomings, builds a new underlying platform, YARN, which is responsible for cluster resource allocation and task scheduling; MapReduce is stripped out as an independent, optional component no longer coupled to the platform. The scheduling algorithm adopted on the new resource-scheduling platform YARN is Dominant Resource Fairness (DRF). A program submitted by a user is called a job, and each job is split into many tasks for execution. When submitting a job, the user must specify the CPU resources and memory resources each task will occupy at run time. The scheduler on the YARN platform computes, for each task, the ratio of its CPU demand to the total CPU resources and the ratio of its memory demand to the total memory resources, and takes the larger of the two as that type of task's dominant share. When multiple users submit multiple jobs simultaneously, the scheduler selectively starts tasks of each type so as to keep every user's total dominant share ratio identical.
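For concreteness, the dominant-share computation described above can be sketched as follows. This is a minimal illustration, not YARN's actual scheduler code; the cluster sizes and demands are made-up values:

```python
def dominant_share(cpu_demand, mem_demand, total_cpu, total_mem):
    """Dominant share of a task under DRF: the larger of its
    CPU fraction and memory fraction of the cluster totals."""
    cpu_ratio = cpu_demand / total_cpu
    mem_ratio = mem_demand / total_mem
    return max(cpu_ratio, mem_ratio)

# Example: a task asking for 4 cores and 8 GB on a 100-core / 400 GB cluster
# is CPU-dominated: max(4/100, 8/400) = 0.04
share = dominant_share(4, 8, 100, 400)
```

The scheduler then equalizes these shares across users, which is why a larger per-task demand reduces how many tasks a user may run at once.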
In the above scheduling algorithm, the user must set the CPU resource demand and memory resource demand when submitting a job, and these settings directly affect the job completion time. The larger the CPU or memory demand, the shorter each task's run time, but, constrained by the platform scheduler, the fewer tasks can run concurrently; the smaller the CPU or memory demand, the longer each task's run time, but the more tasks can run concurrently. The relationship between resource demand on the one hand and task completion time and platform task concurrency on the other is therefore nonlinear. At present, however, these parameters are set mainly on the basis of user experience; inexperienced users cannot be relied on to choose reasonable settings, so the job run time is often significantly longer than the theoretical optimum.
Summary of the invention
To solve the above technical problems, the present invention proposes a task scheduling method and device that eliminate the subjectivity in setting CPU and memory resource demands, thereby shortening job completion time.
To achieve the above object, the present invention proposes a task scheduling method, comprising:
setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the completion time of the task;
predicting the number of tasks the user can run concurrently, so that the dominant share ratios of all users on the YARN platform are identical;
calculating the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks;
changing the set CPU and memory resource demands within preset CPU resource demand and memory resource demand ranges, and repeating the above steps until the calculated total job completion time is minimal.
Preferably, predicting the completion time of the task comprises:
generating a case library that stores, for each case, its feature values and its run times under different CPU resource demands and memory resource demands;
searching the case library for cases whose CPU resource demand and memory resource demand both match those set for the task;
computing the similarity between the feature values of each found case and those of the task, and taking the run time of the most similar case as the completion time of the task.
Preferably, when no case matching both the CPU resource demand and the memory resource demand set for the task is found in the case library, the method further comprises:
searching the case library for cases matching either the CPU resource demand or the memory resource demand set for the task.
Preferably, the feature values comprise the Map/Reduce flag, the task type, the data volume, and the complexity.
Preferably, when no case matching either the CPU resource demand or the memory resource demand set for the task is found in the case library, the method further comprises:
taking, according to the task type of the task, the run time of a default case in the case library as the completion time of the task.
Preferably, the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), where the theoretical fair CPU resource amount = the total CPU resources / the number of users, the theoretical fair memory resource amount = the total memory resources / the number of users, and min denotes taking the minimum.
Preferably, predicting, according to the resource distribution ratio, the number of tasks the user can run concurrently comprises:
when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource distribution ratio) / the CPU resource demand of the task;
when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource distribution ratio) / the memory resource demand of the task.
Preferably, the total job completion time T is:
T = tN/k
where N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
The invention also proposes a task scheduling apparatus, comprising at least:
a configuration module, configured to set a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, configured to predict the completion time of the task according to the set CPU resource demand and memory resource demand, and to predict the number of tasks the user can run concurrently so that the dominant share ratios of all users on the YARN platform are identical;
a computing module, configured to calculate the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, configured to change the set CPU resource demand and memory resource demand within the preset CPU resource demand and memory resource demand ranges and send the changed demands to the prediction module, until the calculated total job completion time is minimal.
Preferably, the apparatus further comprises:
a generation module, configured to generate a case library that stores, for each case, its feature values and its run times under different CPU resource demands and memory resource demands;
the prediction module being specifically configured to:
search the case library for cases whose CPU resource demand and memory resource demand both match those set for the task; and compute the similarity between the feature values of each found case and those of the task, taking the run time of the most similar case as the completion time of the task.
Preferably, the prediction module is specifically configured to:
search the case library for cases matching either the CPU resource demand or the memory resource demand set for the task.
Preferably, the prediction module is specifically configured to:
take, according to the task type in the feature values of the task, the run time of a default case in the case library as the completion time of the task.
Preferably, the computing module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
where T is the total job completion time, N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
Compared with the prior art, the present invention comprises: setting a corresponding CPU resource demand and memory resource demand for any task of a user and predicting the completion time of the task; predicting the number of tasks the user can run concurrently so that the dominant share ratios of all users on the YARN platform are identical; calculating the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks; and changing the set CPU and memory resource demands and repeating the above steps until the calculated total job completion time is minimal. The technical scheme of the present invention reduces the subjective bias introduced when users set CPU and memory resource demands at job submission on the YARN platform, thereby shortening job completion time and improving the overall operating efficiency of the YARN platform.
Brief description of the drawings
The accompanying drawings in the embodiments of the present invention are described below; the drawings serve a further understanding of the present invention and, together with the description, explain the present invention without limiting its scope.
Fig. 1 is a flowchart of the task scheduling method proposed by the present invention;
Fig. 2 is a flowchart of the method for predicting task completion time proposed by the present invention;
Fig. 3 is a structural diagram of the task scheduling apparatus proposed by the present invention.
Embodiment
To facilitate understanding by those skilled in the art, the invention is further described below in conjunction with the accompanying drawings; the description is not intended to limit the scope of the invention.
Referring to Fig. 1, the present invention proposes a task scheduling method, comprising:
In this step, the user uploads a job to the YARN platform, and the YARN platform splits the job into N identical tasks, where N is a natural number greater than or equal to 1. The rules by which YARN splits a job are prior art and fall outside the protection scope of the present invention.
Referring to Fig. 2, in this step the method for predicting the completion time of a task comprises:
In this step, the feature values comprise:
(1) Map/Reduce flag.
This parameter divides tasks into Map and Reduce classes by their resource demand: Map indicates that the task's memory resource demand exceeds its CPU resource demand, and Reduce indicates that the task's CPU resource demand exceeds its memory resource demand.
(2) Task type.
This parameter represents the concrete activity class of the task, such as database retrieval or file sorting.
(3) Data volume.
This parameter represents the scale of the data the task needs to process.
(4) Complexity.
This parameter is determined by the user's estimate of the task's complexity.
In this step, the feature values of a task or case may be described in numeric form; the specific representation is not limited by the present invention. For example, one bit may describe the Map/Reduce flag: when the bit is 0, the case is of Map type; when the bit is 1, the case is of Reduce type.
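One possible numeric encoding of a case's four feature values might look like the following sketch; the field names and value conventions (beyond the one-bit Map/Reduce flag mentioned above) are illustrative assumptions, since the patent leaves the concrete representation open:

```python
from dataclasses import dataclass

@dataclass
class TaskFeatures:
    """Hypothetical numeric encoding of the four feature values."""
    is_reduce: int      # 1-bit flag: 0 = Map type, 1 = Reduce type
    task_type: int      # activity class, e.g. 0 = database retrieval, 1 = file sorting
    data_volume: float  # scale of data to process, e.g. in GB (unit assumed)
    complexity: float   # user-estimated complexity score (scale assumed)

# A Map-type file-sorting task over ~12.5 GB with medium complexity
example = TaskFeatures(is_reduce=0, task_type=1, data_volume=12.5, complexity=3.0)
```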
In this step, when no case matching both the CPU resource demand and the memory resource demand set for the task is found in the case library, the case library is searched for cases matching either the CPU resource demand or the memory resource demand set for the task.
When no case matching either the CPU resource demand or the memory resource demand set for the task is found in the case library, the run time of a default case in the case library, chosen according to the task's type, is taken as the completion time of the task.
Step 202: compute the similarity between the feature values of each found case and those of the task, and take the run time of the most similar case as the completion time of the task.
In this step, how to compute the similarity between the feature values of the found cases and those of the task is prior art and falls outside the protection scope of the present invention.
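Although the similarity computation itself is treated here as prior art, a common choice for such case-based prediction is a weighted distance converted into a similarity score. A sketch under that assumption, with made-up feature vectors and weights:

```python
def similarity(a, b, weights):
    """Similarity of two equal-length feature vectors as
    1 / (1 + weighted Euclidean distance). The weighting scheme
    is an assumption, not specified by the patent."""
    d = sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5
    return 1.0 / (1.0 + d)

# Case library entries: (feature vector, observed run time in seconds)
cases = [([0, 1, 10.0, 3.0], 120.0),
         ([0, 1, 12.0, 3.0], 150.0)]
task = [0, 1, 12.5, 3.0]          # features of the task to predict
w = [1.0, 1.0, 0.1, 1.0]          # down-weight raw data volume (assumed)

# Reuse the run time of the most similar case as the predicted completion time
best_runtime = max(cases, key=lambda c: similarity(c[0], task, w))[1]  # 150.0
```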
In this step, ideally each user's dominant share ratio equals the theoretical fair resource amount, where the theoretical fair resource amount is the ratio of the total available resources to the number of users. In a real environment, however, scheduling losses and other factors mean the dominant share ratio can only approach the theoretical fair amount as closely as possible; to let every user use system resources fairly, the dominant share ratio is therefore used to measure each user's resource usage and to predict the number of tasks the user can run concurrently.
In this step, the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), where the theoretical fair CPU (or memory) resource amount = the total CPU (or memory) resources / the number of users. Accordingly, when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource distribution ratio) / the CPU resource demand of the task; when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource distribution ratio) / the memory resource demand of the task; min denotes taking the minimum.
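The concurrent-task formula above can be sketched as follows. This is a minimal illustration of the formula as given in the text, applied to whichever resource (CPU or memory) determines the user's dominant share; the function name and the numbers are illustrative:

```python
def concurrent_tasks(total_demand, usage, distribution_ratio, per_task_demand):
    """Predicted number of tasks a user can run at once, following the
    formula in the text: (total demand - current usage) *
    (1 - resource distribution ratio) / per-task demand."""
    return (total_demand - usage) * (1 - distribution_ratio) / per_task_demand

# Hypothetical CPU-dominated user: 40 cores of total demand, 10 in use,
# a 25% resource distribution ratio, 2 cores per task:
# (40 - 10) * 0.75 / 2 = 11.25
k = concurrent_tasks(40, 10, 0.25, 2)
```

In practice the result would be rounded down to a whole number of tasks; the text does not specify the rounding, so none is applied here.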
Step 102: calculate the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks.
In this step, the total job completion time T is:
T = tN/k
where N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
In this step, the preset CPU resource demand range and memory resource demand range are determined empirically by the user.
In this step, since the total job completion time bears a roughly parabolic relationship to the configured CPU or memory resource demand, a forward or backward search yields the CPU and memory resource demands at which the total job completion time is minimal. After repeated iterations, when the difference between the newly calculated job completion time and the previous result is less than 0.01%, the minimum job completion time is considered found and this step ends.
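The search described in this step can be sketched as a sweep over the preset demand ranges with the 0.01% stopping criterion. The two predictor functions are stand-ins for the case-library run-time prediction and the concurrency prediction described earlier, and the candidate demand lists are assumed inputs:

```python
def minimize_completion_time(cpu_options, mem_options, predict_runtime,
                             predict_k, n_tasks, tol=1e-4):
    """Try demand settings within the preset ranges and keep the one with
    the smallest predicted job completion time T = t*N/k. A plain sweep
    stands in for the forward/backward search in the text; tol = 0.01%
    is the stopping threshold mentioned there."""
    best = None  # (T, cpu_demand, mem_demand)
    for cpu in cpu_options:
        for mem in mem_options:
            t = predict_runtime(cpu, mem)   # predicted per-task run time
            k = predict_k(cpu, mem)         # predicted concurrency
            total = t * n_tasks / k         # T = t*N/k
            if best is None:
                best = (total, cpu, mem)
            elif total < best[0]:
                # stop once successive improvements fall below 0.01%
                if (best[0] - total) / best[0] < tol:
                    return (total, cpu, mem)
                best = (total, cpu, mem)
    return best
```

With toy predictors `predict_runtime = lambda c, m: 16 / c + c` and a constant concurrency of 2, sweeping CPU demands [1, 2, 4] for a 10-task job returns the 4-core setting, matching the parabolic behavior the text describes.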
Referring to Fig. 3, the present invention also proposes a task scheduling apparatus, comprising at least:
a configuration module, configured to set a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, configured to predict the completion time of the task according to the configured CPU resource demand and memory resource demand, and to predict, according to the resource distribution ratio, the number of tasks the user can run concurrently so that the dominant share ratios of all users on the YARN platform are identical;
a computing module, configured to calculate the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, configured to change the configured CPU resource demand and memory resource demand within the preset CPU resource demand and memory resource demand ranges and send the changed demands to the prediction module, until the calculated total job completion time is minimal.
The above task scheduling apparatus further comprises:
a generation module, configured to generate a case library that stores, for each case, its feature values and its run times under different CPU resource demands and memory resource demands;
the prediction module being specifically configured to:
search the case library for cases whose CPU resource demand and memory resource demand both match those configured for the task; and compute the similarity between the feature values of each found case and those of the task, taking the run time of the most similar case as the completion time of the task.
In the above task scheduling apparatus, the prediction module is further specifically configured to:
search the case library for cases matching either the CPU resource demand or the memory resource demand configured for the task;
and to:
take, according to the task type in the feature values of the task, the run time of a default case in the case library as the completion time of the task.
In the above task scheduling apparatus, the computing module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
where T is the total job completion time, N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
The present invention reduces the subjective bias introduced when users set CPU and memory resource demands at job submission on the YARN platform, thereby shortening job completion time and improving the overall operating efficiency of the YARN platform.
It should be noted that the above embodiments are intended to facilitate understanding by those skilled in the art and do not limit the protection scope of the present invention; without departing from the inventive concept of the present invention, any obvious substitutions and improvements made by those skilled in the art fall within the protection scope of the present invention.
Claims (13)
1. A task scheduling method, characterized in that the method comprises:
setting a corresponding CPU resource demand and memory resource demand for any task of a user, and predicting the completion time of the task;
predicting the number of tasks the user can run concurrently, so that the dominant share ratios of all users on the YARN platform are identical;
calculating the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks;
changing the set CPU resource demand and memory resource demand within preset CPU resource demand and memory resource demand ranges, and repeating the above steps until the calculated total job completion time is minimal.
2. The task scheduling method according to claim 1, wherein predicting the completion time of the task comprises:
generating a case library that stores, for each case, its feature values and its run times under different CPU resource demands and memory resource demands;
searching the case library for cases whose CPU resource demand and memory resource demand both match those set for the task;
computing the similarity between the feature values of each found case and those of the task, and taking the run time of the most similar case as the completion time of the task.
3. The task scheduling method according to claim 2, wherein, when no case matching both the CPU resource demand and the memory resource demand set for the task is found in the case library, the method further comprises:
searching the case library for cases matching either the CPU resource demand or the memory resource demand set for the task.
4. The task scheduling method according to claim 2, wherein the feature values comprise the Map/Reduce flag, the task type, the data volume, and the complexity.
5. The task scheduling method according to claim 4, wherein, when no case matching either the CPU resource demand or the memory resource demand set for the task is found in the case library, the method further comprises:
taking, according to the task type of the task, the run time of a default case in the case library as the completion time of the task.
6. The task scheduling method according to claim 1, wherein the dominant share ratio is the maximum of the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount) and the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), where the theoretical fair CPU resource amount = the total CPU resources / the number of users, the theoretical fair memory resource amount = the total memory resources / the number of users, and min denotes taking the minimum.
7. The task scheduling method according to claim 6, wherein predicting, according to the resource distribution ratio, the number of tasks the user can run concurrently comprises:
when the dominant share ratio is the user's CPU resource usage / min(the user's total CPU resource demand, the theoretical fair CPU resource amount), the number of tasks the user can run concurrently = (the user's total CPU resource demand − the user's CPU resource usage) × (1 − the CPU resource distribution ratio) / the CPU resource demand of the task;
when the dominant share ratio is the user's memory resource usage / min(the user's total memory resource demand, the theoretical fair memory resource amount), the number of tasks the user can run concurrently = (the user's total memory resource demand − the user's memory resource usage) × (1 − the memory resource distribution ratio) / the memory resource demand of the task.
8. The task scheduling method according to claim 1, wherein the total job completion time T is:
T = tN/k
where N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
9. A task scheduling apparatus, characterized in that it comprises at least:
a configuration module, configured to set a corresponding CPU resource demand and memory resource demand for any task of a user;
a prediction module, configured to predict the completion time of the task according to the set CPU resource demand and memory resource demand, and to predict the number of tasks the user can run concurrently so that the dominant share ratios of all users on the YARN platform are identical;
a computing module, configured to calculate the total job completion time from the user's total task count, the predicted task completion time, and the predicted number of concurrently runnable tasks;
an iteration module, configured to change the set CPU resource demand and memory resource demand within the preset CPU resource demand and memory resource demand ranges and send the changed demands to the prediction module, until the calculated total job completion time is minimal.
10. The task scheduling apparatus according to claim 9, characterized in that it further comprises:
a generation module, configured to generate a case library that stores, for each case, its feature values and its run times under different CPU resource demands and memory resource demands;
the prediction module being specifically configured to:
search the case library for cases whose CPU resource demand and memory resource demand both match those set for the task; and compute the similarity between the feature values of each found case and those of the task, taking the run time of the most similar case as the completion time of the task.
11. The task scheduling apparatus according to claim 10, characterized in that the prediction module is specifically configured to:
search the case library for cases matching either the CPU resource demand or the memory resource demand set for the task.
12. The task scheduling apparatus according to claim 10, characterized in that the prediction module is specifically configured to:
take, according to the task type in the feature values of the task, the run time of a default case in the case library as the completion time of the task.
13. The task scheduling apparatus according to claim 9, characterized in that the computing module is specifically configured to:
calculate the total job completion time according to the formula T = tN/k;
where T is the total job completion time, N is the user's total task count, t is the predicted completion time of a task, and k is the predicted number of tasks the user can run concurrently.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310750655.6A CN103729246B (en) | 2013-12-31 | 2013-12-31 | Method and device for dispatching tasks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310750655.6A CN103729246B (en) | 2013-12-31 | 2013-12-31 | Method and device for dispatching tasks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103729246A true CN103729246A (en) | 2014-04-16 |
CN103729246B CN103729246B (en) | 2017-05-03 |
Family
ID=50453329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310750655.6A Active CN103729246B (en) | 2013-12-31 | 2013-12-31 | Method and device for dispatching tasks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103729246B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105786992A (en) * | 2016-02-17 | 2016-07-20 | 中国建设银行股份有限公司 | Data query method and device used for online transaction |
CN105843687A (en) * | 2016-03-31 | 2016-08-10 | 乐视控股(北京)有限公司 | Method and device for quantifying task resource |
CN106293890A (en) * | 2015-06-09 | 2017-01-04 | 阿里巴巴集团控股有限公司 | A kind of method and device for business processing based on complexity |
CN106790636A (en) * | 2017-01-09 | 2017-05-31 | 上海承蓝科技股份有限公司 | A kind of equally loaded system and method for cloud computing server cluster |
CN108132840A (en) * | 2017-11-16 | 2018-06-08 | 浙江工商大学 | Resource regulating method and device in a kind of distributed system |
CN108173905A (en) * | 2017-12-07 | 2018-06-15 | 北京奇艺世纪科技有限公司 | A kind of resource allocation method, device and electronic equipment |
CN108268317A (en) * | 2016-12-30 | 2018-07-10 | 华为技术有限公司 | A kind of resource allocation methods and device |
CN108270603A (en) * | 2016-12-31 | 2018-07-10 | 中国移动通信集团陕西有限公司 | A kind of dispatching method and management system |
CN108536528A (en) * | 2018-03-23 | 2018-09-14 | 湖南大学 | Using the extensive network job scheduling method of perception |
CN109271251A (en) * | 2018-08-09 | 2019-01-25 | 深圳市瑞云科技有限公司 | A method of by constraining come scheduling node machine |
CN110389816A (en) * | 2018-04-20 | 2019-10-29 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for scheduling of resource |
CN111625352A (en) * | 2020-05-18 | 2020-09-04 | 杭州数澜科技有限公司 | Scheduling method, device and storage medium |
CN112445589A (en) * | 2019-09-05 | 2021-03-05 | 财团法人资讯工业策进会 | Service dispatching equipment and method |
WO2021159638A1 (en) * | 2020-02-12 | 2021-08-19 | 平安科技(深圳)有限公司 | Method, apparatus and device for scheduling cluster queue resources, and storage medium |
- 2013-12-31: application CN201310750655.6A filed in China (CN); granted as patent CN103729246B/en, status: Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100306286A1 (en) * | 2009-03-05 | 2010-12-02 | Chi-Hsien Chiu | Distributed steam processing |
CN103279390A (en) * | 2012-08-21 | 2013-09-04 | Institute of Information Engineering, Chinese Academy of Sciences | Parallel processing system for small-job optimization |
CN103019855A (en) * | 2012-11-21 | 2013-04-03 | Beihang University | Method for predicting the execution time of a MapReduce job |
CN103064664A (en) * | 2012-11-28 | 2013-04-24 | Huazhong University of Science and Technology | Hadoop parameter automatic optimization method and system based on performance pre-evaluation |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106293890B (en) * | 2015-06-09 | 2019-11-05 | Alibaba Group Holding Limited | Complexity-based service processing method and device |
CN106293890A (en) * | 2015-06-09 | 2017-01-04 | Alibaba Group Holding Limited | Complexity-based service processing method and device |
CN105786992A (en) * | 2016-02-17 | 2016-07-20 | China Construction Bank Corporation | Data query method and device for online transactions |
CN105843687A (en) * | 2016-03-31 | 2016-08-10 | Le Holdings (Beijing) Co., Ltd. | Method and device for quantifying task resources |
CN108268317B (en) * | 2016-12-30 | 2020-07-28 | Huawei Technologies Co., Ltd. | Resource allocation method and device |
CN108268317A (en) * | 2016-12-30 | 2018-07-10 | Huawei Technologies Co., Ltd. | Resource allocation method and device |
CN108270603A (en) * | 2016-12-31 | 2018-07-10 | China Mobile Group Shaanxi Co., Ltd. | Scheduling method and management system |
CN106790636A (en) * | 2017-01-09 | 2017-05-31 | Shanghai Chenglan Technology Co., Ltd. | Load-balancing system and method for a cloud computing server cluster |
CN108132840A (en) * | 2017-11-16 | 2018-06-08 | Zhejiang Gongshang University | Resource scheduling method and device in a distributed system |
CN108173905A (en) * | 2017-12-07 | 2018-06-15 | Beijing QIYI Century Science & Technology Co., Ltd. | Resource allocation method, device, and electronic device |
CN108536528A (en) * | 2018-03-23 | 2018-09-14 | Hunan University | Application-aware large-scale network job scheduling method |
CN110389816A (en) * | 2018-04-20 | 2019-10-29 | EMC IP Holding Company LLC | Method, apparatus and computer program product for resource scheduling |
CN109271251A (en) * | 2018-08-09 | 2019-01-25 | Shenzhen Ruiyun Technology Co., Ltd. | Method for scheduling node machines through constraints |
CN112445589A (en) * | 2019-09-05 | 2021-03-05 | Institute for Information Industry | Service dispatching device and method |
TWI724531B (en) * | 2019-09-05 | 2021-04-11 | Institute for Information Industry | Equipment and method for assigning services |
WO2021159638A1 (en) * | 2020-02-12 | 2021-08-19 | Ping An Technology (Shenzhen) Co., Ltd. | Method, apparatus and device for scheduling cluster queue resources, and storage medium |
CN111625352A (en) * | 2020-05-18 | 2020-09-04 | Hangzhou Shulan Technology Co., Ltd. | Scheduling method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103729246B (en) | 2017-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103729246A (en) | Method and device for dispatching tasks | |
CN107659433B (en) | Cloud resource scheduling method and equipment | |
Samadi et al. | E-HEFT: enhancement heterogeneous earliest finish time algorithm for task scheduling based on load balancing in cloud computing | |
CN104331321B (en) | Cloud computing task scheduling method based on tabu search and load balancing | |
CN110737529A (en) | Cluster scheduling adaptive configuration method for short-time multiple variable-size data jobs | |
CN102521056B (en) | Task allocation device and task allocation method | |
CN102508639B (en) | Distributed parallel processing method based on satellite remote sensing data characteristics | |
CN103593323A (en) | Machine learning method for MapReduce task resource allocation parameters | |
CN110231986B (en) | Multi-FPGA-based dynamically reconfigurable multi-task scheduling and placing method | |
CN106991006B (en) | Cloud workflow task clustering method supporting dependency and time balancing | |
CN111861412B (en) | Completion time optimization-oriented scientific workflow scheduling method and system | |
CN101710286A (en) | Parallel programming model system of DAG oriented data driving type application and realization method | |
CN101833439B (en) | Parallel computing hardware structure based on separation and combination thought | |
CN103685492B (en) | Scheduling method, scheduling device, and application for a Hadoop cluster system | |
EP4128056A1 (en) | Partitioning for an execution pipeline | |
CN106934539A (en) | Workflow scheduling method with deadline and cost constraints | |
CN108108242B (en) | Storage layer intelligent distribution control method based on big data | |
CN103338246B (en) | Virtual machine system of selection in a kind of infrastructure cloud resource allocation process and system | |
Brinkmann et al. | Scheduling shared continuous resources on many-cores | |
Herrmann et al. | Memory-aware list scheduling for hybrid platforms | |
CN104915250A (en) | Method for achieving MapReduce data locality in jobs | |
CN109783189A (en) | Static job-flow scheduling method and device | |
CN108958919A (en) | Cost-fairness assessment model for deadline-constrained multi-DAG task scheduling in cloud computing | |
CN103679404A (en) | Multi-project resource balancing and optimizing method | |
CN115033374A (en) | Task-to-thread matching method of multi-core programmable controller |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||