CN103455375A - Load-monitoring-based hybrid scheduling method under Hadoop cloud platform - Google Patents

Load-monitoring-based hybrid scheduling method under Hadoop cloud platform

Info

Publication number
CN103455375A
CN103455375A, CN2013100387467A, CN201310038746A
Authority
CN
China
Prior art keywords
resource
time
load
tcir
cloud platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100387467A
Other languages
Chinese (zh)
Other versions
CN103455375B (en)
Inventor
Li Qianmu (李千目)
Lu Lu (陆路)
Hou Jun (侯君)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Golden number information technology (Suzhou) Co., Ltd.
Original Assignee
LIANYUNGANG RESEARCH INSTITUTE OF NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LIANYUNGANG RESEARCH INSTITUTE OF NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY filed Critical LIANYUNGANG RESEARCH INSTITUTE OF NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY
Priority to CN201310038746.7A priority Critical patent/CN103455375B/en
Publication of CN103455375A publication Critical patent/CN103455375A/en
Application granted granted Critical
Publication of CN103455375B publication Critical patent/CN103455375B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a load-monitoring-based hybrid scheduling method under a Hadoop cloud platform. By analyzing the scheduling behaviour and typical application scenarios of the Max-D algorithm, the FIFO algorithm and the fair scheduling algorithm, a load-monitoring-based hybrid scheduling scheme is proposed: the system load is monitored in real time and the algorithm best suited to the current load condition is selected from the three. Compared with applying any single scheduling algorithm alone, the scheme has significant advantages and adapts to load changes in a Hadoop system, so the system retains good performance.

Description

Load-monitoring-based hybrid scheduling method under a Hadoop cloud platform
Technical field
The present invention relates to job scheduling methods for cloud platforms, and in particular to a load-monitoring-based hybrid job scheduling method under a Hadoop cloud platform.
Technical background
Wireless sensor network technology has emerged gradually in recent years. It integrates sensor technology, embedded computing, distributed information processing and communication technology, and constitutes a brand-new information acquisition technology. It differs from traditional networks in its conceptual design, practical applications and direction of development, so the theoretical foundations of sensor networks and their systems differ greatly from those of traditional network systems. Hadoop is a highly reliable distributed system with good scalability; it can run applications on clusters built from large numbers of inexpensive hardware devices and provides applications with a set of stable and reliable interfaces.
The scheduling goals of the existing Hadoop scheduling methods are all rather narrow: each can only cope with a single load condition, and its performance varies greatly under different load conditions. This makes the job scheduling methods poorly adaptable and unable to handle the complexity and diversity of a cloud platform. In Hadoop, the widely used job scheduling methods are the FIFO method and the fair scheduling method. The FIFO method assigns jobs according to their submission time, giving priority to the job submitted first. It is simple to implement and its scheduling overhead is low, but when processing massive data, jobs that require heavy computation can occupy resources for a long time, so subsequent jobs may not get to run for a long while, which hurts system performance and the user experience. The fair scheduling method guarantees that jobs obtain resources fairly and get executed, but the average completion time of jobs is longer.
Summary of the invention
The purpose of the present invention is to provide an efficient hybrid scheduling method under a Hadoop cloud platform, so as to cope with the complexity of the cloud platform.
The technical solution that achieves the purpose of the invention is as follows:
1. Each resource (taskTracker) in the Hadoop system continuously sends real-time information about its own node to the central node (jobTracker), including whether it is idle, the time the current task has already executed, and the progress of its execution.
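For illustration only, the node status carried by these reports can be pictured as a small record. The following Python sketch is this description's own illustration; the field names are assumptions, not Hadoop's TaskTracker status API:

```python
from dataclasses import dataclass

@dataclass
class NodeStatus:
    """Real-time information a resource (taskTracker) reports to the
    central node (jobTracker): whether it is idle, how long the current
    task has already run, and how far the task has progressed."""
    node_id: str
    idle: bool              # True if the resource has a free task slot
    elapsed_seconds: float  # time the current task has already executed
    progress: float         # completed fraction of the current task, 0.0-1.0
```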
2. The system load is monitored in real time, and the load level of the system is computed from the real-time information sent by the resources. The system load refers to the amount of tasks the system is executing at a given moment:
Suppose there are m resources in the system, of which k are idle, and the number of pending tasks is numT. The degree of load of the system is defined as:
HL = numT / m
If 0 ≤ HL < 1, the system is under low load;
If 1 ≤ HL ≤ 2, the system is load-balanced;
If HL > 2, the system is overloaded.
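A minimal sketch of this load computation and classification, written here in Python purely for illustration (the thresholds 1 and 2 are those stated above):

```python
def load_level(num_pending_tasks: int, num_resources: int) -> str:
    """Classify the system load HL = numT / m into the three regimes
    defined above: low load, load balance, overload."""
    if num_resources <= 0:
        raise ValueError("the system must contain at least one resource")
    hl = num_pending_tasks / num_resources
    if hl < 1:
        return "low"        # 0 <= HL < 1: FIFO scheduling is used
    if hl <= 2:
        return "balanced"   # 1 <= HL <= 2: fair scheduling is used
    return "overloaded"     # HL > 2: Max-D scheduling is used
```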
3. When a resource requests a task from the central node, the scheduling scheme is selected according to the real-time load level: under low system load, the FIFO scheduling method is used to reduce scheduling overhead; in the load-balanced case, the fair scheduling method is used to improve the fairness of the system and guarantee that jobs get executed; under overload, the Max-D scheduling method is used to shorten the average completion time of jobs.
The Max-D scheduling method referred to in step 3 is realized as follows:
Step one: determine the set of all computational resources in the cloud environment and the set of idle resources.
Step two: queue the jobs to be assigned in the order in which they were submitted; newly submitted jobs are appended to the tail of this queue.
Step three: schedule the queued jobs, using the Max-D method to select a suitable resource for each of them.
The Max-D method of step three proceeds as follows:
Step 3.1: for every job to be assigned, compute the job's average estimated running time over all computational resources;
Step 3.2: for each job, compute the difference Di between its average estimated running time and its minimum running time on any single idle computational resource, and record that resource;
Step 3.3: among all jobs, find the job with the largest difference Di, and denote this Di by D;
Step 3.4: if D ≥ 0, assign the job to the recorded resource for processing and remove that resource from the idle-resource set; if D < 0, re-determine the assignable resources and the idle-resource set, adding resources that have finished their assigned jobs back into the idle-resource set, and return to step 3.1.
Step 3.5: repeat steps 3.2 to 3.4 until jobs have been assigned to all resources that requested them.
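The following Python sketch, written for this description only, implements steps 3.1 to 3.5 under the assumption that the estimated running times of formula (1) below are already available as a lookup table; the function and variable names are illustrative, not part of the patent:

```python
def max_d_schedule(jobs, resources, idle, tc):
    """One round of Max-D assignment (steps 3.1-3.5).

    jobs      -- list of unassigned job ids, in submission order
    resources -- list of all resource ids (the m resources)
    idle      -- iterable of currently idle resource ids (subset of resources)
    tc        -- tc[(job, resource)]: estimated running time of the job on
                 that resource, e.g. obtained from formula (1)
    Returns a dict {job: resource} with the assignments made in this round.
    """
    unassigned = list(jobs)
    idle = set(idle)
    assignment = {}
    while unassigned and idle:
        best_job, best_res, best_d = None, None, None
        for job in unassigned:
            # Step 3.1: average estimated running time over ALL resources.
            avg = sum(tc[(job, r)] for r in resources) / len(resources)
            # Step 3.2: minimum estimated running time over the idle resources.
            res = min(idle, key=lambda r: tc[(job, r)])
            d = avg - tc[(job, res)]
            # Step 3.3: keep the job with the largest difference D
            # (ties resolved by submission order, i.e. list order).
            if best_d is None or d > best_d:
                best_job, best_res, best_d = job, res, d
        if best_d < 0:
            # Step 3.4, D < 0: the patent re-evaluates the idle-resource set
            # (resources that finished their jobs are added back) and restarts;
            # in this static sketch we simply stop and let the remaining jobs
            # wait for the next round.
            break
        # Step 3.4, D >= 0: assign the job and occupy the chosen resource.
        assignment[best_job] = best_res
        unassigned.remove(best_job)
        idle.remove(best_res)
        # Step 3.5: repeat until all requesting resources have work.
    return assignment
```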
In step 3.1, the average estimated completion time on the computational resources is computed as follows:
Suppose the cloud environment consists of n unassigned jobs T = {t1, t2, ..., tn} and m resources R = {r1, r2, ..., rm}, and that each resource can process only one job at a time. The number of idle resources is k, and the idle resources are denoted R' = {r1', r2', ..., rk'}, where k < m. The estimated running time of job ti on resource rj is TCirj, and the average running time of job ti over all resources is
AvgTCi = (TCir1 + TCir2 + … + TCirm) / m
The completion time of job ti on resource rj is the sum of the remaining completion time of the job currently running on rj and the execution time of job ti on rj.
Assume that, for jobs of the same class in the cloud environment, the processing speed of a resource is proportional to the amount of data it processes. The estimated completion time of job i on resource r is then the sum of the remaining completion time of the job currently running on r and the execution time of job i on r:
TCirj(k+1) = RTCirj(k) × (1 − pro)/pro + [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/(M(k)·pro) ] × M(k+1),  rj ∈ R    (1)
Here TCirj(k+1) denotes the completion time required for resource rj to process job ti; TCirj(k) denotes the predicted completion time of the previous job on resource rj; M(k) is the ratio of the running time required by that job to the running time required by a unit job; RTCirj(k) denotes the actual running time of the previous job on rj; and pro (0 < pro ≤ 1) denotes the completed fraction of the previous job. If resource rj is idle, i.e. the previous job has finished, then pro = 1 and the above formula reduces to
TCirj(k+1) = [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/M(k) ] × M(k+1),  rj ∈ R'    (2)
From the estimated execution time TCirj(k) and the actual execution time RTCirj(k) of the previous job on a resource, formula (1) estimates the execution time of an as-yet-unscheduled job on that resource. At system start-up, however, no job has run on any resource yet, so the execution time cannot be estimated from a previous job. Therefore, when the system starts, we set, for all resources,
TCirj(0) = RTCirj(0) = 0    (3)
In this way a pending job will first select a resource that has not yet executed any job. Once a resource finishes its first job, the actual execution time RTCirj(1) of that job is known; TCirj(1) is set equal to RTCirj(1), and the running times of subsequent jobs can then be estimated from formula (1).
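A sketch of this running-time estimate, again in Python for illustration only; the weight ρ appears in formulas (1) and (2) but is not given a value in the text, so it is passed in as a parameter here:

```python
def estimate_runtime(tc_prev, rtc_prev, m_prev, m_next, pro, rho):
    """Formula (1): estimated completion time TCirj(k+1) of the next job on
    resource rj, from the previous job's predicted time tc_prev = TCirj(k),
    its actual running time rtc_prev = RTCirj(k), the job-size ratios
    m_prev = M(k) and m_next = M(k+1), the completed fraction pro
    (0 < pro <= 1) of the job currently on rj, and the weight rho."""
    if tc_prev == 0 and rtc_prev == 0:
        # Formula (3): at system start no job has run on this resource yet,
        # so there is nothing to extrapolate from.
        return 0.0
    remaining = rtc_prev * (1 - pro) / pro          # time left for the running job
    per_unit = ((1 - rho) * tc_prev / m_prev        # blend of predicted and
                + rho * rtc_prev / (m_prev * pro))  # observed unit-job time
    return remaining + per_unit * m_next            # formula (1); pro = 1 gives (2)
```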
In step 3.2, the difference D is computed as follows:
The minimum running time of job ti over all idle resources is denoted mUTCi = min{TCir1', TCir2', ..., TCirk'}; the idle resource rj' satisfying TCirj' = mUTCi is recorded and BRi = rj' is noted; the difference for job i is then Di = AvgTCi − mUTCi.
Compared with the prior art, the invention has notable advantages: 1. By monitoring the system load, the invention selects in real time the scheduling method best suited to the tasks to be assigned, so the system maintains efficient performance under all system states. 2. Job scheduling only assigns jobs to idle resources, which keeps the load balanced across the cloud environment and avoids situations in which some resources are overloaded while others sit idle. 3. The Max-D method selects the most suitable resource for each job, which reduces the average completion time of jobs and improves the throughput of the system.
Brief description of the drawing
The accompanying drawing is a schematic diagram of the load-based hybrid scheduling strategy of the present invention.
Embodiments
The present invention is further described below with reference to the accompanying drawing.
Suppose there are m resources in the system, of which k are idle, and the number of pending tasks is numT. The degree of load of the system is defined as:
HL = numT / m
If 0 ≤ HL < 1, the system is under low load;
If 1 ≤ HL ≤ 2, the system is load-balanced;
If HL > 2, the system is overloaded.
The hybrid scheduling strategy based on load monitoring comprises a load monitor and a scheduling selector. The load monitor is responsible for monitoring and computing the system load; the scheduling selector schedules tasks according to the load level supplied by the load monitor.
When an idle resource requests a task for execution, the scheduling selector obtains the real-time system load from the load monitor and then selects the corresponding job scheduling method according to the load state.
1. When the system is under low load, for example when it has just started and the number of idle resources in the system is greater than or equal to the number of pending tasks, the scheduling selector chooses the FIFO scheduling method. When a resource requests a task, the FIFO algorithm sorts all pending jobs by submission time, selects the first job, and assigns tasks from that job to the resource. When selecting a task within the job, the FIFO algorithm gives priority to tasks that failed and are waiting to be re-executed. Under these conditions the FIFO scheduling method reduces the scheduling overhead and the average completion time of jobs.
2. As the system runs, the load gradually increases and reaches the load-balanced state, though it has not yet reached peak load. The performance of the FIFO method, in both the average completion time of jobs and fairness, begins to decline, so the scheduler now selects the fair scheduling method in order to guarantee that users obtain resources fairly.
3. When the load increases further and reaches the overload level, the fair scheduling method causes the average completion time of jobs to grow rapidly. The scheduling selector then selects the Max-D scheduling method; although the fairness of resource allocation decreases, the average completion time of jobs is reduced significantly.
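The monitor/selector arrangement described above can be pictured with the following Python sketch; the class and method names (including assign_task) are placeholders for this illustration, not Hadoop or patent identifiers:

```python
class LoadMonitor:
    """Tracks the figures needed to compute the load level HL = numT / m."""
    def __init__(self, num_resources):
        self.num_resources = num_resources
        self.num_pending_tasks = 0

    def load_level(self):
        hl = self.num_pending_tasks / self.num_resources
        if hl < 1:
            return "low"
        if hl <= 2:
            return "balanced"
        return "overloaded"


class SchedulingSelector:
    """Picks the scheduling method for the current load, as described above:
    FIFO under low load, fair scheduling when balanced, Max-D when overloaded."""
    METHOD_BY_LOAD = {"low": "fifo", "balanced": "fair", "overloaded": "max-d"}

    def __init__(self, monitor, schedulers):
        self.monitor = monitor        # a LoadMonitor
        self.schedulers = schedulers  # maps "fifo"/"fair"/"max-d" to scheduler
                                      # objects exposing assign_task(resource)

    def on_task_request(self, resource):
        """Called when an idle resource asks the central node for a task."""
        method = self.METHOD_BY_LOAD[self.monitor.load_level()]
        return self.schedulers[method].assign_task(resource)
```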
The Max-D scheduling algorithm is embodied as follows:
Suppose the cloud environment consists of n unassigned jobs T = {t1, t2, ..., tn} and m resources R = {r1, r2, ..., rm}, and that each resource can process only one job at a time. The number of idle resources is k, and the idle resources are denoted R' = {r1', r2', ..., rk'}, where k < m. The estimated running time of job ti on resource rj is TCirj, and the average running time of job ti over all resources is
AvgTCi = (TCir1 + TCir2 + … + TCirm) / m
The minimum running time of job ti over all idle resources is denoted mUTCi = min{TCir1', TCir2', ..., TCirk'}; the idle resource rj' satisfying TCirj' = mUTCi is recorded and BRi = rj' is noted.
When the set of jobs to be scheduled is non-empty, the following operations are carried out:
Step 1: compute AvgTCi for every job in the job set T;
Step 2: for each job ti, find mUTCi and compute Di = AvgTCi − mUTCi;
Step 3: find the job ti with Di = max{D1, D2, ..., Dn}; if several jobs qualify, select among them in the order in which they arrived;
Step 4: if Di ≥ 0, assign job ti to resource BRi for processing and remove BRi from the idle-resource set R'; if Di < 0, re-evaluate the assignable resources and the idle-resource set, adding resources that have finished their assigned jobs back into the idle-resource set, and return to Step 1;
Step 5: repeat Steps 2 to 4 until jobs have been assigned to all resources that requested them.
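As a purely hypothetical illustration of these steps (the numbers are chosen for exposition and are not from the patent): suppose m = 3 resources r1, r2, r3, of which r1 and r2 are idle, and two unassigned jobs t1, t2 with estimated running times TC1r1 = 4, TC1r2 = 10, TC1r3 = 9 and TC2r1 = 6, TC2r2 = 7, TC2r3 = 14. Then AvgTC1 = 23/3 ≈ 7.67 and AvgTC2 = 9; both jobs are fastest on r1, so mUTC1 = 4 and mUTC2 = 6, giving D1 ≈ 3.67 and D2 = 3. Job t1 has the larger difference, so Step 4 assigns t1 to r1 and removes r1 from the idle set; in the next pass mUTC2 = 7 on the remaining idle resource r2, D2 = 9 − 7 = 2 ≥ 0, and t2 is assigned to r2. The job that would lose the most by missing its best resource is therefore scheduled first.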
The completion time of job ti on resource rj is the sum of the remaining completion time of the job currently running on rj and the execution time of job ti on rj.
The present invention assumes that, for jobs of the same class, the processing speed of a resource is proportional to the amount of data it processes. The estimated completion time of job i on resource r is then the sum of the remaining completion time of the job currently running on r and the execution time of job i on r:
TCirj(k+1) = RTCirj(k) × (1 − pro)/pro + [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/(M(k)·pro) ] × M(k+1),  rj ∈ R
Here TCirj(k+1) denotes the completion time required to process job ti once it is assigned to resource rj; TCirj(k) denotes the predicted completion time of the previous job on resource rj; M(k) is the ratio of the running time required by that job to the running time required by a unit job; RTCirj(k) denotes the actual running time of the previous job on rj; and pro (0 < pro ≤ 1) denotes the completed fraction of the previous job. If resource rj is idle, i.e. the previous job has finished, then pro = 1 and the above formula reduces to
TCirj(k+1) = [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/M(k) ] × M(k+1),  rj ∈ R'
According to formula (1), the execution time of a job to be scheduled on a given resource can be estimated from the estimated execution time TCirj(k) and the actual execution time RTCirj(k) of the previous job on that resource. At system start-up, however, no job has run on any resource yet, so the execution time cannot be estimated from a previous job. Therefore, when the system starts, we set, for all resources,
TCirj(0) = RTCirj(0) = 0
In this way a pending job will first select a resource that has not yet executed any job. Once a resource finishes its first job, the actual execution time RTCirj(1) of that job is known; TCirj(1) is set equal to RTCirj(1), and the running times of subsequent jobs can then be estimated from formula (1).

Claims (6)

1. A load-monitoring-based hybrid scheduling method under a Hadoop cloud platform, characterized in that the method is as follows:
(1) each resource (taskTracker) in the Hadoop system continuously sends real-time information about its own node to the central node (jobTracker); the real-time information comprises whether the node is idle, the time the current task has already executed, and the progress of its execution;
(2) the system load is monitored in real time, and the load level of the system is computed from the real-time information sent by the resources;
(3) when a resource requests a task from the central node, the scheduling scheme is selected according to the real-time load level: under low system load, the FIFO scheduling method is used to reduce scheduling overhead; in the load-balanced case, the fair scheduling method is used to improve the fairness of the system and guarantee that jobs get executed; under overload, the Max-D scheduling method is used to shorten the average completion time of jobs.
2. The load-monitoring-based hybrid scheduling method under a Hadoop cloud platform according to claim 1, characterized in that: in step (2), the system load refers to the amount of tasks the system is executing at a given moment; the system load is monitored in real time, and the load level is computed from the real-time information sent by the resources as follows: suppose there are m resources in the system, of which k are idle, and the number of pending tasks is numT; the degree of load of the system is defined as HL = numT / m;
If 0 ≤ HL < 1, the system is under low load;
If 1 ≤ HL ≤ 2, the system is load-balanced;
If HL > 2, the system is overloaded.
3. The load-monitoring-based hybrid scheduling method under a Hadoop cloud platform according to claim 1, characterized in that the Max-D scheduling method referred to in step (3) is realized as follows:
Step one: determine the set of all computational resources in the cloud environment and the set of idle resources;
Step two: queue the jobs to be assigned in the order in which they were submitted; newly submitted jobs are appended to the tail of this queue;
Step three: schedule the queued jobs, using the Max-D method to select a suitable resource for each of them.
4. The load-monitoring-based hybrid scheduling method under a Hadoop cloud platform according to claim 1 or 3, characterized in that the Max-D method of step three in step (3) proceeds as follows:
Step 3.1: for every job to be assigned, compute the job's average estimated running time over all computational resources;
Step 3.2: for each job, compute the difference Di between its average estimated running time and its minimum running time on any single idle computational resource, and record that resource;
Step 3.3: among all jobs, find the job with the largest difference Di, and denote this Di by D;
Step 3.4: if D ≥ 0, assign the job to the recorded resource for processing and remove that resource from the idle-resource set; if D < 0, re-determine the assignable resources and the idle-resource set, adding resources that have finished their assigned jobs back into the idle-resource set, and return to step 3.1;
Step 3.5: repeat steps 3.2 to 3.4 until jobs have been assigned to all resources that requested them.
5. The load-monitoring-based hybrid scheduling method under a Hadoop cloud platform according to claim 4, characterized in that in step 3.1 the average estimated completion time on the computational resources is computed as follows:
Suppose the cloud environment consists of n unassigned jobs T = {t1, t2, ..., tn} and m resources R = {r1, r2, ..., rm}, and that each resource can process only one job at a time; the number of idle resources is k, and the idle resources are denoted R' = {r1', r2', ..., rk'}, where k < m; the estimated running time of job ti on resource rj is TCirj, and the average running time of job ti over all resources is
AvgTCi = (TCir1 + TCir2 + … + TCirm) / m
The completion time of job ti on resource rj is the sum of the remaining completion time of the job currently running on rj and the execution time of job ti on rj;
Assume that, for jobs of the same class in the cloud environment, the processing speed of a resource is proportional to the amount of data it processes; the estimated completion time of job i on resource r is then the sum of the remaining completion time of the job currently running on r and the execution time of job i on r:
TCirj(k+1) = RTCirj(k) × (1 − pro)/pro + [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/(M(k)·pro) ] × M(k+1),  rj ∈ R    (1)
wherein TCirj(k+1) denotes the completion time required for resource rj to process job ti; TCirj(k) denotes the predicted completion time of the previous job on resource rj; M(k) is the ratio of the running time required by that job to the running time required by a unit job; RTCirj(k) denotes the actual running time of the previous job on rj; and pro (0 < pro ≤ 1) denotes the completed fraction of the previous job; if resource rj is idle, i.e. the previous job has finished, then pro = 1 and the above formula reduces to
TCirj(k+1) = [ (1 − ρ)·TCirj(k)/M(k) + ρ·RTCirj(k)/M(k) ] × M(k+1),  rj ∈ R'    (2)
from the estimated execution time TCirj(k) and the actual execution time RTCirj(k) of the previous job on a resource, formula (1) is used to estimate the execution time of an as-yet-unscheduled job on that resource;
when the system starts, for all resources, set
TCirj(0) = RTCirj(0) = 0    (3)
so that a pending job first selects a resource that has not yet executed any job; once a resource finishes its first job, the actual execution time RTCirj(1) of that job is known, TCirj(1) is set equal to RTCirj(1), and the running times of subsequent jobs are then estimated from formula (1).
6. The load-monitoring-based hybrid scheduling method under a Hadoop cloud platform according to claim 4, characterized in that in step 3.2 the difference D is computed as follows:
the minimum running time of job ti over all idle resources is denoted mUTCi = min{TCir1', TCir2', ..., TCirk'}; the idle resource rj' satisfying TCirj' = mUTCi is recorded and BRi = rj' is noted; the difference for job i is then Di = AvgTCi − mUTCi.
CN201310038746.7A 2013-01-31 2013-01-31 Load-monitoring-based hybrid scheduling method under Hadoop cloud platform Active CN103455375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310038746.7A CN103455375B (en) 2013-01-31 2013-01-31 Load-monitoring-based hybrid scheduling method under Hadoop cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310038746.7A CN103455375B (en) 2013-01-31 2013-01-31 Load-monitoring-based hybrid scheduling method under Hadoop cloud platform

Publications (2)

Publication Number Publication Date
CN103455375A true CN103455375A (en) 2013-12-18
CN103455375B CN103455375B (en) 2017-02-08

Family

ID=49737781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310038746.7A Active CN103455375B (en) 2013-01-31 2013-01-31 Load-monitoring-based hybrid scheduling method under Hadoop cloud platform

Country Status (1)

Country Link
CN (1) CN103455375B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094982A (en) * 2014-09-23 2015-11-25 航天恒星科技有限公司 Multi-satellite remote sensing data processing system
CN105373429A (en) * 2014-08-20 2016-03-02 腾讯科技(深圳)有限公司 Task scheduling method, device and system
CN105592160A (en) * 2015-12-30 2016-05-18 南京邮电大学 Service-consumer-oriented resource configuration method in cloud computing environment
CN105868070A (en) * 2015-12-25 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and apparatus for determining resources consumed by tasks
CN105955816A (en) * 2016-04-15 2016-09-21 天脉聚源(北京)传媒科技有限公司 Event scheduling method and device
CN108449376A (en) * 2018-01-31 2018-08-24 合肥和钧正策信息技术有限公司 A kind of load-balancing method of big data calculate node that serving enterprise
US10067798B2 (en) 2015-10-27 2018-09-04 International Business Machines Corporation User interface and system supporting user decision making and readjustments in computer-executable job allocations in the cloud

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110173038A1 (en) * 2010-01-12 2011-07-14 Nec Laboratories America, Inc. Constraint-conscious optimal scheduling for cloud infrastructures
CN102629219A (en) * 2012-02-27 2012-08-08 北京大学 Self-adaptive load balancing method for Reduce ends in parallel computing framework
US20120271939A1 (en) * 2007-02-02 2012-10-25 The Mathworks, Inc. Scalable architecture
CN102799486A (en) * 2012-06-18 2012-11-28 北京大学 Data sampling and partitioning method for MapReduce system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120271939A1 (en) * 2007-02-02 2012-10-25 The Mathworks, Inc. Scalable architecture
US20110173038A1 (en) * 2010-01-12 2011-07-14 Nec Laboratories America, Inc. Constraint-conscious optimal scheduling for cloud infrastructures
CN102629219A (en) * 2012-02-27 2012-08-08 北京大学 Self-adaptive load balancing method for Reduce ends in parallel computing framework
CN102799486A (en) * 2012-06-18 2012-11-28 北京大学 Data sampling and partitioning method for MapReduce system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KC, K. et al.: "Scheduling Hadoop Jobs to Meet Deadlines", Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference on *
Rasooli, A. et al.: "A Hybrid Scheduling Approach for Scalable Heterogeneous Hadoop Systems", High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion *
Ren Xuanxuan: "Research on Job Scheduling Based on the Hadoop Platform", China Master's Theses Full-text Database, Information Science and Technology *
Chen Yanjin: "Research and Improvement of Job Scheduling Algorithms Implemented with the MapReduce Model on the Hadoop Platform", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373429A (en) * 2014-08-20 2016-03-02 腾讯科技(深圳)有限公司 Task scheduling method, device and system
CN105094982A (en) * 2014-09-23 2015-11-25 航天恒星科技有限公司 Multi-satellite remote sensing data processing system
US10067798B2 (en) 2015-10-27 2018-09-04 International Business Machines Corporation User interface and system supporting user decision making and readjustments in computer-executable job allocations in the cloud
US10552223B2 (en) 2015-10-27 2020-02-04 International Business Machines Corporation User interface and system supporting user decision making and readjustments in computer-executable job allocations in the cloud
US11030011B2 (en) 2015-10-27 2021-06-08 International Business Machines Corporation User interface and system supporting user decision making and readjustments in computer-executable job allocations in the cloud
CN105868070A (en) * 2015-12-25 2016-08-17 乐视网信息技术(北京)股份有限公司 Method and apparatus for determining resources consumed by tasks
CN105592160A (en) * 2015-12-30 2016-05-18 南京邮电大学 Service-consumer-oriented resource configuration method in cloud computing environment
CN105592160B (en) * 2015-12-30 2019-09-13 南京邮电大学 Resource allocation method towards service consumer under a kind of cloud computing environment
CN105955816A (en) * 2016-04-15 2016-09-21 天脉聚源(北京)传媒科技有限公司 Event scheduling method and device
CN108449376A (en) * 2018-01-31 2018-08-24 合肥和钧正策信息技术有限公司 A kind of load-balancing method of big data calculate node that serving enterprise

Also Published As

Publication number Publication date
CN103455375B (en) 2017-02-08

Similar Documents

Publication Publication Date Title
CN103455375A (en) Load-monitoring-based hybrid scheduling method under Hadoop cloud platform
US10474504B2 (en) Distributed node intra-group task scheduling method and system
CN109582448B (en) Criticality and timeliness oriented edge calculation task scheduling method
CN105900064A (en) Method and apparatus for scheduling data flow task
US20100125847A1 (en) Job managing device, job managing method and job managing program
CN106445675B (en) B2B platform distributed application scheduling and resource allocation method
CN103401939A (en) Load balancing method adopting mixing scheduling strategy
CN103401947A (en) Method and device for allocating tasks to multiple servers
CN101499019B (en) Carrier-grade Ethernet system and real-time task scheduling method used for the same
CN103338228A (en) Cloud calculating load balancing scheduling algorithm based on double-weighted least-connection algorithm
CN102902587A (en) Distribution type task scheduling method, distribution type task scheduling system and distribution type task scheduling device
CN102096599A (en) Multi-queue task scheduling method and related system and equipment
KR20080041047A (en) Apparatus and method for load balancing in multi core processor system
CN103257896B (en) A kind of Max-D job scheduling method under cloud environment
CN101145112A (en) Real-time system task scheduling method
CN111176840B (en) Distribution optimization method and device for distributed tasks, storage medium and electronic device
CN104461748A (en) Optimal localized task scheduling method based on MapReduce
CN107291544A (en) Method and device, the distributed task scheduling execution system of task scheduling
CN113535393B (en) Computing resource allocation method for unloading DAG task in heterogeneous edge computing
Muthuvelu et al. On-line task granularity adaptation for dynamic grid applications
CN113986497B (en) Queue scheduling method, device and system based on multi-tenant technology
CN115640113A (en) Multi-plane flexible scheduling method
CN110535894B (en) Dynamic allocation method and system for container resources based on load feedback
CN107797870A (en) A kind of cloud computing data resource dispatching method
Naik et al. Scheduling tasks on most suitable fault tolerant resource for execution in computational grid

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhou Honghai

Inventor before: Li Qianmu

Inventor before: Lu Lu

Inventor before: Hou Jun

CB03 Change of inventor or designer information
TR01 Transfer of patent right

Effective date of registration: 20171220

Address after: 215000 room No. 388 F building, Suzhou Industrial Park, Jiangsu Province, room 307

Patentee after: Golden number information technology (Suzhou) Co., Ltd.

Address before: 222000 No. 2 Chenguang Road, Sinpo District, Jiangsu, Lianyungang

Patentee before: Lianyungang Research Institute of Nanjing University of Science and Technology

TR01 Transfer of patent right