CN107992359B - Task scheduling method for cost perception in cloud environment - Google Patents

Info

Publication number
CN107992359B
CN107992359B CN201711205255.1A
Authority
CN
China
Prior art keywords
task
tasks
resources
scheduling
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711205255.1A
Other languages
Chinese (zh)
Other versions
CN107992359A (en)
Inventor
陈智也
孙杰
丁有伟
沈祥红
李鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Sea Level Data Technology Co ltd
Original Assignee
Jiangsu Sea Level Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Sea Level Data Technology Co ltd filed Critical Jiangsu Sea Level Data Technology Co ltd
Priority to CN201711205255.1A priority Critical patent/CN107992359B/en
Publication of CN107992359A publication Critical patent/CN107992359A/en
Application granted granted Critical
Publication of CN107992359B publication Critical patent/CN107992359B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022Mechanisms to release resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a cost-aware task scheduling method in a cloud environment. It relates to application fields such as distributed data management and cloud computing, and provides a universal task scheduling method that can handle both interactive tasks and analysis tasks while minimizing the cost of task processing. The method receives the tasks submitted by all users and adds them to a task queue; merges the waiting queue into the task queue and empties the waiting queue; calculates, according to task type, the resources required by each task in the task queue; sorts all tasks in the task queue in descending order of resource demand; and schedules the sorted tasks one by one. A unified scheduling algorithm is provided for interactive tasks and general analysis tasks, improving user satisfaction and task processing throughput.

Description

Task scheduling method for cost perception in cloud environment
Technical Field
The invention discloses a cost-aware task scheduling algorithm in a cloud environment. It relates to application fields such as distributed data management and cloud computing, and provides a universal task scheduling method that can handle both interactive tasks and analysis tasks while minimizing the cost of task processing.
Background
Big data and cloud computing have become focal points of the current IT industry: major IT enterprises have launched commercial cloud platforms and big data processing solutions, many research institutions are conducting theoretical research and application exploration on the core technologies of cloud computing and big data, and governments have written cloud computing and big data into national plans and invested heavily in them. Cloud computing is now applied in many fields, open-source platforms and commercial cloud services keep emerging, and besides private clouds there are large numbers of public, community and hybrid cloud services. In short, cloud computing keeps growing in scale while its services keep getting cheaper.
Task scheduling is one of the core technologies of cloud computing. It is responsible for distributing the tasks submitted by one or more users to the physical servers of the cloud environment for execution, ensuring that users quickly obtain correct results. Current task scheduling methods in cloud environments are generally strongly task-dependent, that is, different scheduling methods are designed for different types of tasks. By response time, user tasks can be divided into interactive tasks and general analysis tasks: for interactive tasks, scheduling methods are designed to shorten task processing time, while for general analysis tasks, scheduling methods are designed to improve system throughput. By the relations between tasks, tasks are divided into simple independent tasks and complex tasks with dependency relations: various task scheduling methods have been designed for simple independent tasks based on classical solutions to this NP-hard problem, while scheduling strategies for complex tasks are designed on the basis of DAG (directed acyclic graph) models. By the resources they request, tasks can be divided into computation-intensive, data-intensive and network-intensive tasks: computation-intensive task scheduling adjusts the state of the processors, data-intensive task scheduling mainly exploits the replica strategy of the cloud platform, and network-intensive task scheduling mainly aims to reduce network data transmission.
Simple tasks can be regarded as special cases of complex tasks, so a unified scheduling method for both is easy to design. Computation-intensive, data-intensive and network-intensive tasks place different resource requirements on the nodes, and by representing the available resources of a node as a multidimensional vector during scheduling, a unified scheduling method can also be realized for them. Interactive tasks and analysis tasks, however, require the same kinds of resources and have no inclusion relationship between them; what differs is their optimization objectives, and as a result few existing scheduling methods can handle interactive tasks and analysis tasks at the same time.
In summary, although current task scheduling methods in cloud environments mainly take task processing time as the optimization target, they lack a clear measure of user satisfaction. Separate scheduling algorithms are designed for interactive tasks and analysis tasks, and concepts such as task priorities are introduced, which increases system complexity while bringing no obvious reduction in task scheduling cost.
Disclosure of Invention
To address these defects, the invention provides a cost-aware task scheduling method in a cloud environment. It offers a unified scheduling framework for interactive tasks and analysis tasks, takes user satisfaction as the optimization target, and at the same time improves task processing throughput, thereby solving the problem of cost-aware scheduling of interactive tasks and general analysis tasks in a cloud environment. The cost in this application refers to the time cost a user spends waiting for the task processing result after submitting a task: the higher the cost, the lower the user satisfaction, so improving user satisfaction amounts to reducing the cost of task processing.
The invention is realized by adopting the following technical scheme:
a task scheduling algorithm with cost perception in a cloud environment comprises the following steps:
1) receiving tasks submitted by all users, and adding the tasks into a task queue;
2) merging the waiting queue into a task queue, and emptying the waiting queue;
3) respectively calculating resources required by each task in the task queue according to the task type;
4) sorting all tasks in the task queue in descending order of resource demand;
5) scheduling the tasks sorted in step 4) one by one, wherein the scheduling process is as follows:
5-1) if the resource requirement of a certain task is greater than the maximum resource of the node, directly terminating the task; otherwise, searching a node which can execute the task currently, if the node is found, executing the task on the node, otherwise, adding the task into a waiting queue;
5-2) if a certain task is completed, releasing the resources occupied by the task and updating the available resources of the node where the task is located;
5-3) judging whether the task queue is empty, if not, returning to the step 5) to enter the scheduling of the next task; and if the task queue is empty, returning to the step 1) and continuing the next scheduling period.
The merging of queues in step 2) combines the tasks newly submitted by users with the previously submitted waiting tasks that have not yet been scheduled, and all the merged tasks become the scheduling objects of the current scheduling period.
When calculating the resources required by each task in step 3), the required resources are the minimum resources that ensure the task is completed within its specified time. If the task is an analysis-class task, the required resources are the ratio of the task's computation amount to the time difference between the current time and the task's deadline, so for the same analysis task, the longer it waits, the more resources it requires. If the task is an interactive task, the required resources are the ratio of the task's computation amount to the time difference between the current time and the task's time threshold, which acts as its deadline.
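As an illustration of the calculation in step 3), the following Python sketch (not taken from the patent; the function name min_resource_demand and the parameters now and beta are assumptions for illustration) computes a task's minimum resource demand as the ratio of its computation amount to the time remaining before its deadline or time threshold.

def min_resource_demand(r, s, now, d=None, beta=None):
    """Minimum resources needed for a task with computation amount r, submitted at
    time s, to finish by its deadline (analysis task, d given) or by its time
    threshold beta (interactive task), evaluated at the current time `now`."""
    limit = s + d if d is not None else s + beta   # latest acceptable finish time
    remaining = limit - now
    if remaining <= 0:
        return float("inf")                        # can no longer meet its threshold
    return r / remaining

# The demand grows as the task waits: the same analysis task needs more resources later.
assert min_resource_demand(r=100.0, s=0.0, now=2.0, d=10.0) == 12.5
assert min_resource_demand(r=100.0, s=0.0, now=6.0, d=10.0) == 25.0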
In step 5), the task's demand on the computing resources of the cloud platform must be determined before the task is scheduled and executed, and only a server with enough computing resources may execute the corresponding task, so that interruption of the task due to insufficient resources during execution is avoided as much as possible. In the task allocation process, tasks are allocated in descending order of minimum resource demand, so that as few tasks as possible fail and the task processing cost is reduced.
In step 5), when a server releases part of its resources, the available resources of that server are updated immediately so that more resources are available in the next task allocation; if no task to be scheduled can be allocated to the server for execution, the newly released resources are used for the tasks currently being processed, so that the computing resources of the cloud platform are fully utilized, task execution is accelerated, and more computing resources are released sooner.
In the task scheduling process, as the waiting time of a waiting task increases, its required resources gradually increase, so the minimum resource demand of each waiting task must be recalculated at allocation time, as in step 3), to ensure that the task can still be completed within its time threshold; if the minimum resource demand of a waiting task exceeds the maximum resources of every server in the cloud platform, the task is terminated and the user is informed, as in step 5-1).
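The recalculation-and-termination rule above can be sketched as follows; the sketch assumes a single scalar resource per server, and all names (refresh_waiting_queue and the tuple layout) are hypothetical illustrations rather than the patent's notation.

import math

def refresh_waiting_queue(waiting, now, max_server_resources):
    """waiting maps task name -> (computation amount r, submission time s, time limit
    offset, i.e. deadline d for analysis tasks or threshold beta for interactive ones).
    Recompute each waiting task's minimum demand at time `now` and split the queue into
    tasks that may still be placed and tasks that must be terminated (step 5-1))."""
    still_waiting, terminated = {}, []
    for name, (r, s, offset) in waiting.items():
        remaining = s + offset - now
        demand = math.inf if remaining <= 0 else r / remaining
        if demand > max_server_resources:
            terminated.append(name)   # cannot finish on any server: notify the user
        else:
            still_waiting[name] = demand
    return still_waiting, terminated

# Example: with servers of 10 units, a task needing 50 units of work and 4 s of slack
# now demands 12.5 units and is terminated, while a lighter task keeps waiting.
alive, dropped = refresh_waiting_queue(
    {"JA2": (50.0, 0.0, 6.0), "JA3": (8.0, 0.0, 6.0)}, now=2.0, max_server_resources=10.0)
assert dropped == ["JA2"] and alive == {"JA3": 2.0}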
The invention mainly solves the problem of cost-aware scheduling of interactive tasks and general analysis tasks in a cloud environment. A unified scheduling algorithm is provided for both kinds of tasks, user satisfaction and task processing throughput are taken as the optimization targets, and a cost-aware task scheduling method in a cloud environment is designed. Compared with traditional task scheduling methods in cloud environments, the method has the following advantages: firstly, based on human-computer interaction ideas, the user's satisfaction with the task processing efficiency of the cloud platform is measured by time cost, which is close to practical application; secondly, the method is simultaneously applicable to scheduling interactive tasks and general analysis tasks; and thirdly, system resources are fully utilized, resource idleness is reduced, and tasks that cannot be completed on time are terminated in advance.
Drawings
The invention will be further explained with reference to the drawings, in which:
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flow chart of the algorithm of the present invention;
fig. 3 is a schematic diagram of a specific embodiment of the algorithm of the present invention.
Detailed Description
In a cloud environment, users submit tasks of various types, including interactive tasks with strict response time requirements and analysis-type tasks with loose response time requirements, and how these tasks are scheduled strongly affects both users and cloud service providers. For a user, waiting for a task processing result incurs a time cost, and when this cost is high the user may give up waiting for the task or even stop using the cloud service. For a cloud service provider, an efficient task scheduling method saves resources and attracts more customers, improving economic benefits.
The invention aims to ensure that when users submit multiple interactive tasks or analysis tasks to a cloud platform, the tasks can be scheduled in real time, interactive tasks and analysis tasks are handled uniformly, task throughput is improved, task processing cost is reduced, and the requirements of different users are met.
The implementation of the technical scheme is as follows:
Assume that the tasks submitted by users in a cloud environment fall into two categories: first, interactive tasks, for which the cloud platform needs to return the processing result as soon as possible so that the user can examine it and perform the next operation; second, analysis-class tasks, for which the cloud platform only needs to return the processing result within the deadline given by the user. Suppose that at a certain moment t the users submit n interactive tasks JI = {JI_1, JI_2, ..., JI_n} and m analysis-class tasks JA = {JA_1, JA_2, ..., JA_m}. Each interactive task is denoted JI_i = (s_i, r_i) and each analysis-class task is denoted JA_j = (s_j, r_j, d_j), where 1 ≤ i ≤ n and 1 ≤ j ≤ m; s_i and s_j denote the submission times of tasks JI_i and JA_j respectively, r_i and r_j denote their computation amounts, and d_j denotes the deadline of task JA_j.
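For readers who prefer code, the task model above can be written as a small data structure. The following Python sketch is illustrative only; the class name Task and the flag is_interactive are assumptions, not the patent's notation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    s: float                     # submission time s_i / s_j
    r: float                     # computation amount r_i / r_j
    d: Optional[float] = None    # deadline d_j; None marks an interactive task JI_i

    @property
    def is_interactive(self) -> bool:
        return self.d is None

# Two interactive tasks and one analysis-class task submitted at time t = 0
JI = [Task("JI1", s=0.0, r=40.0), Task("JI2", s=0.0, r=25.0)]
JA = [Task("JA1", s=0.0, r=300.0, d=60.0)]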
The cost of processing each task on the cloud platform is the time cost incurred by the user while waiting for the task to complete after submitting it, namely Cost(J_i) = α · t_i, where J_i denotes any task, interactive or analysis-class, α is the time cost factor, i.e. the cost the user incurs per second of waiting, and t_i is the processing time of task J_i, i.e. the difference between its completion time f_i and its submission time s_i: t_i = f_i − s_i. The processing time of task J_i consists of its waiting time w_i and its execution time e_i, i.e. t_i = w_i + e_i. According to human-computer interaction theory, the shorter the task processing time, the higher the user satisfaction; in particular, for an interactive task, the longer the processing time, the more likely the user is to abandon it, while for an analysis-class task the platform must ensure completion before the deadline, otherwise the user abandons the request and resubmits it. Therefore a time threshold β is given for interactive tasks: if the processing time of an interactive task exceeds β, the user abandons it. The time threshold of an analysis-class task is obviously its deadline. In general, the deadlines of analysis-class tasks are relatively large while the time threshold β of interactive tasks is relatively small, so interactive tasks are usually scheduled with priority during task scheduling.
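The cost model can be illustrated with a short sketch; the helper names and the numeric values below are assumptions for illustration only (alpha is the time cost factor, beta the interactive time threshold).

def processing_cost(alpha, waiting_time, execution_time):
    """Cost(J_i) = alpha * t_i, where t_i = w_i + e_i (waiting plus execution time)."""
    return alpha * (waiting_time + execution_time)

def within_threshold(is_interactive, waiting_time, execution_time, beta, deadline_offset=None):
    """True if the task finishes within its time threshold: beta for an interactive
    task, its deadline offset d_j for an analysis-class task."""
    limit = beta if is_interactive else deadline_offset
    return (waiting_time + execution_time) <= limit

# A task that waited 2 s and ran 3 s, with alpha = 0.5 cost units per second, costs 2.5
assert processing_cost(0.5, 2.0, 3.0) == 2.5
# An interactive task with beta = 4 s would be abandoned after 5 s of processing time
assert within_threshold(True, 2.0, 3.0, beta=4.0) is False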
In the cloud platform, tasks submitted by one or more users are received continuously, so a real-time scheduling strategy is adopted, namely, the tasks are scheduled immediately when some users send task requests. When the tasks are scheduled and executed, the demand of the tasks on computing resources on the cloud platform must be determined at first, and only the servers with enough computing resources can execute the corresponding tasks, so that the interruption of the tasks due to insufficient resources in the execution process is avoided as much as possible.
For an interactive task JI_i, the minimum resource requirement at time t_1 is R_i = r_i / (s_i + β − t_1), where s_i ≤ t_1 < s_i + β; for an analysis-class task JA_j, the minimum resource requirement at time t_2 is R_j = r_j / (s_j + d_j − t_2), where s_j ≤ t_2 < s_j + d_j. Any task J_i should be allocated to the server whose currently available resources are not less than the minimum resource demand of J_i and whose available resources are the smallest, so that the throughput of task processing on the cloud platform is improved as much as possible. If the minimum resource demand of task J_i is larger than the currently available resources of every server in the cloud platform, task J_i is placed in the waiting state. In the task allocation process, tasks are allocated in descending order of minimum resource demand, so that as few tasks as possible fail and the task processing cost is reduced.
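The best-fit placement described in the preceding paragraph can be sketched as follows; pick_server and available are illustrative names, and a single scalar resource per server is assumed.

from typing import Dict, Optional

def pick_server(demand: float, available: Dict[str, float]) -> Optional[str]:
    """Among the servers whose available resources are at least `demand`, return the
    one with the least available resources (best fit); return None if none fits, in
    which case the task enters the waiting state."""
    feasible = {srv: ar for srv, ar in available.items() if ar >= demand}
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

# A demand of 4 fits the servers with 5 and 8 free units; the tighter fit (5) is chosen
assert pick_server(4.0, {"s1": 3.0, "s2": 5.0, "s3": 8.0}) == "s2"
assert pick_server(9.0, {"s1": 3.0, "s2": 5.0, "s3": 8.0}) is None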
When a server releases part of its resources, the waiting task whose minimum resource demand does not exceed the currently available resources, and is the largest among such tasks, is allocated to that server for processing. If no waiting task can be allocated to the server, the newly released resources are used for the tasks currently being processed, so that the computing resources of the cloud platform are fully utilized, task execution is accelerated, and more computing resources are released sooner. As the waiting time increases, each waiting task requires more resources, so a task must be allocated at least its current minimum resource demand to ensure that it can be completed within its time threshold. If the minimum resource demand of a waiting task exceeds the maximum resources of every server in the cloud platform, the task is terminated and the user is informed.
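The resource-release rule above (allocate the freed resources to the waiting task with the largest minimum demand that still fits) can be sketched as follows; pick_waiting_task and the example figures are illustrative assumptions.

from typing import Dict, Optional

def pick_waiting_task(released_capacity: float,
                      waiting_demands: Dict[str, float]) -> Optional[str]:
    """When a server frees resources, choose the waiting task whose (recomputed)
    minimum demand fits the freed capacity and is the largest among those that fit;
    return None if nothing fits, so the freed resources speed up running tasks."""
    fitting = {name: r for name, r in waiting_demands.items() if r <= released_capacity}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

# With 6 units freed and recomputed demands {JA2: 5, JA3: 2}, JA2 is scheduled first
assert pick_waiting_task(6.0, {"JA2": 5.0, "JA3": 2.0}) == "JA2"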
In order to increase user satisfaction, once it is determined that a task J_i cannot be completed within its time threshold, task J_i is terminated immediately and a prompt of 'system busy, please try again later' is sent to the user, so as to reduce blind waiting by the user as much as possible. A task that was terminated because it could not be completed within the time threshold should be allocated preferentially when the user submits the request again, to prevent the task from being terminated repeatedly due to waiting.
In fig. 1, the user layer submits tasks, including both interactive tasks and analysis-type tasks; the task layer updates the resource requirements of the current task according to the requirements of the task, then sorts the tasks according to the resource requirements, the task with the largest resource requirement is scheduled preferentially, the task scheduling process allocates the tasks to all working nodes to be executed, each task is executed on only one node, and if the current node cannot meet the requirements of a certain task, the task is in a waiting state; and the successfully distributed tasks are executed on each physical node of the resource layer, and after the tasks are completed, the nodes release the resources and update the available resources. When the resources are released, the tasks in the waiting state are rescheduled, and the system resources are fully utilized.
Algorithm 1. CATSC (Cost-Aware Task Scheduling in Cloud)
Input: set of tasks JI = {JI_1, JI_2, ..., JI_n}, JA = {JA_1, JA_2, ..., JA_m}
Output: Schedule
While (JI ≠ Null || JA ≠ Null)
    Collect the newly arriving tasks into Candidate;
    Compute R_i of each task J_i in Candidate;
    Sort Candidate by decreasing R_i;
    Foreach task J_i in Candidate do
        Find the server S_j with minimal AR_j ≥ R_i;
        If such an S_j is found then
            Schedule.Insert(J_i, S_j, t);
        Else
            WaitingList.Insert(J_i);
        Candidate.Delete(J_i);
    If a server S_x releases resources then
        Compute R_i of each task J_i in WaitingList;
        Find the task J_k with maximal R_k ≤ AR_x;
        Schedule.Insert(J_k, S_x, t);
        WaitingList.Delete(J_k);
    If a task J_a has R_a > max{AR_b} then
        Warning('Timeout');
        WaitingList.Delete(J_a);
    Candidate = WaitingList;
Here JI and JA denote the set of interactive tasks and the set of analysis-class tasks respectively, Candidate denotes the queue of tasks to be allocated, WaitingList denotes the waiting task queue, J_i denotes the i-th task in the task queue, R_i denotes the minimum amount of resources required by task J_i, Schedule records the task assignments, Schedule.Insert(J_i, S_j, t) means that task J_i is assigned to server S_j for execution at time t, WaitingList.Delete(J_k) means that task J_k is removed from the waiting queue, and AR_k denotes the available resources of server S_k.
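For readers who prefer an executable form, the following Python sketch mirrors one scheduling period of Algorithm 1 under simplifying assumptions: a single scalar resource per node, no modelling of actual task execution, and illustrative names (schedule_period, Cluster) that do not come from the patent.

import math
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class Task:
    name: str
    s: float                     # submission time
    r: float                     # computation amount
    d: Optional[float] = None    # deadline offset for analysis tasks; None = interactive

@dataclass
class Cluster:
    available: Dict[str, float]  # node -> currently free resources (AR)
    capacity: Dict[str, float]   # node -> total resources

def min_demand(task: Task, now: float, beta: float) -> float:
    limit = task.s + (task.d if task.d is not None else beta)
    remaining = limit - now
    return math.inf if remaining <= 0 else task.r / remaining

def schedule_period(new_tasks: List[Task], waiting: List[Task], cluster: Cluster,
                    now: float, beta: float
                    ) -> Tuple[List[Tuple[str, str, float]], List[Task], List[Task]]:
    """One scheduling period: merge the waiting queue with new arrivals (step 2), compute
    each task's minimum demand (step 3), sort in descending order of demand (step 4),
    then place each task best-fit, queue it, or terminate it (step 5)."""
    schedule: List[Tuple[str, str, float]] = []   # (task, node, time) assignments
    next_waiting: List[Task] = []
    terminated: List[Task] = []

    candidates = waiting + new_tasks
    demands = {t.name: min_demand(t, now, beta) for t in candidates}
    candidates.sort(key=lambda t: demands[t.name], reverse=True)

    for task in candidates:
        demand = demands[task.name]
        if demand > max(cluster.capacity.values()):
            terminated.append(task)               # step 5-1): cannot finish in time
            continue
        feasible = {n: ar for n, ar in cluster.available.items() if ar >= demand}
        if feasible:
            node = min(feasible, key=feasible.get)    # best-fit node
            cluster.available[node] -= demand
            schedule.append((task.name, node, now))
        else:
            next_waiting.append(task)             # wait for a server to free resources

    return schedule, next_waiting, terminated

When a node later completes a task, the released resources would be added back to cluster.available and the waiting queue re-examined with the rule sketched after the resource-release paragraph above.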
The processing flow of the algorithm is shown in fig. 2. First, the tasks submitted by users are received, the resources required by each task are calculated, and the tasks are sorted in descending order of resource demand; the tasks are then scheduled one by one as follows: if the resource requirement of a task is greater than the maximum resources of the nodes, the task is terminated directly; otherwise a node that can currently execute the task is sought, and if such a node is found the task is executed on it, otherwise the task enters the waiting state. Whenever a task completes, the resources it occupied are released and the available resources of its node are updated.
The method serves as the core of a task scheduler and is deployed on the Master node in a cloud environment. The Master node collects the available resource information of each working node through the resource management function of the cloud environment, and the task scheduler allocates tasks to the nodes for execution according to the current task information and the nodes' available resources. A specific implementation is shown in the example of fig. 3. In fig. 3, user 1, user 2 and user 3 each submit an interactive task (JI1, JI2 and JI3) and an analysis-class task (JA1, JA2 and JA3) at time t0. After receiving the submitted tasks, the cloud platform calculates the minimum resource demand of every task and arranges the tasks into a task queue in descending order of minimum resource demand, with the task of largest minimum resource demand at the head of the queue and the task of smallest demand at the tail. The scheduler then, according to the current idle resources of each node of the cloud platform and the minimum resource demand of each task, finds all nodes that can satisfy a task's resource requirement and allocates the task to the node with the least available resources. Tasks that are not successfully scheduled enter the waiting queue. When server 1 finishes its current task and releases resources at time t1, the waiting task with the largest minimum resource demand (JA2) is selected from the waiting queue and allocated to server 1 for execution. When server 2 and server 3 finish their tasks and release resources at time t2, the waiting task JA3 is allocated to server 2 for execution. When the waiting queue is empty, no further task scheduling is performed and the system waits for all current tasks to complete.

Claims (4)

1. A cost-aware task scheduling method in a cloud environment, characterized by comprising the following steps:
1) receiving tasks submitted by all users, and adding the tasks into a task queue;
2) merging the waiting queue into a task queue, and emptying the waiting queue;
3) respectively calculating resources required by each task in the task queue according to the task type;
4) sorting all tasks in the task queue in descending order of resource demand;
5) scheduling the tasks sorted in step 4) one by one, wherein the scheduling process is as follows:
5-1) if the resource requirement of a certain task is greater than the maximum resource of the node, directly terminating the task; otherwise, searching a node which can execute the task currently, if the node is found, executing the task on the node, otherwise, adding the task into a waiting queue;
5-2) if a certain task is completed, releasing the resources occupied by the task and updating the available resources of the node where the task is located;
5-3) judging whether the task queue is empty, if not, returning to the step 5) to enter the scheduling of the next task; if the task queue is empty, returning to the step 1), and continuing the next scheduling cycle;
the merging of queues in step 2) combines the tasks newly submitted by users with the previously submitted waiting tasks that have not yet been scheduled, and all the merged tasks become the scheduling objects of the current scheduling period;
when calculating the resources required by each task in step 3), the required resources are the minimum resources that ensure the task is completed within its specified time: if the task is an analysis-class task, the required resources are the ratio of the task's computation amount to the time difference between the current time and the task's deadline, and for the same analysis task, the longer it waits, the more resources it requires; if the task is an interactive task, the required resources are the ratio of the task's computation amount to the time difference between the current time and the task's time threshold, which acts as its deadline.
2. The cost-aware task scheduling method in a cloud environment according to claim 1, wherein: in step 5), the task's demand on the computing resources of the cloud platform is determined before the task is scheduled and executed, and only a server with enough computing resources may execute the corresponding task, so that interruption of the task due to insufficient resources during execution is avoided as much as possible; in the task allocation process, tasks are allocated in descending order of minimum resource demand, so that as few tasks as possible fail and the task processing cost is reduced.
3. The cost-aware task scheduling method in a cloud environment according to claim 1, wherein: in step 5), when a server releases part of its resources, the available resources of that server are updated immediately so that more resources are available in the next task allocation; if no task to be scheduled can be allocated to the server for execution, the newly released resources are used for the tasks currently being processed, so that the computing resources of the cloud platform are fully utilized, task execution is accelerated, and more computing resources are released sooner.
4. The cost-aware task scheduling method in a cloud environment according to claim 1, wherein: in the task scheduling process, as the waiting time of a waiting task increases, its required resources gradually increase, so the minimum resource demand of each waiting task must be recalculated at allocation time, as in step 3), to ensure that the task can still be completed within its time threshold; if the minimum resource demand of a waiting task exceeds the maximum resources of every server in the cloud platform, the task is terminated and the user is informed, as in step 5-1).
CN201711205255.1A 2017-11-27 2017-11-27 Task scheduling method for cost perception in cloud environment Active CN107992359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711205255.1A CN107992359B (en) 2017-11-27 2017-11-27 Task scheduling method for cost perception in cloud environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711205255.1A CN107992359B (en) 2017-11-27 2017-11-27 Task scheduling method for cost perception in cloud environment

Publications (2)

Publication Number Publication Date
CN107992359A CN107992359A (en) 2018-05-04
CN107992359B (en) 2021-05-18

Family

ID=62033407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711205255.1A Active CN107992359B (en) 2017-11-27 2017-11-27 Task scheduling method for cost perception in cloud environment

Country Status (1)

Country Link
CN (1) CN107992359B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110554912B (en) * 2018-05-31 2022-05-20 杭州海康威视数字技术股份有限公司 Method and device for scheduling equipment to execute tasks
CN108874538B (en) * 2018-05-31 2020-10-13 合肥本源量子计算科技有限责任公司 Scheduling server, scheduling method and application method for scheduling quantum computer
CN109491798A (en) * 2018-11-27 2019-03-19 郑州云海信息技术有限公司 A kind of method and device of resource allocation
CN109710392B (en) * 2018-12-21 2023-08-01 万达信息股份有限公司 Heterogeneous resource scheduling method based on hybrid cloud
CN110297694A (en) * 2019-07-05 2019-10-01 桂林理工大学 A kind of heuristic cloud computing dispatching method based on resource classification and task priority
CN110750350B (en) * 2019-10-29 2022-08-16 广东浪潮大数据研究有限公司 Large resource scheduling method, system, device and readable storage medium
CN111090507B (en) * 2019-11-25 2023-06-09 南京航空航天大学 Task scheduling method and application based on cloud edge fusion server network architecture
CN111158879B (en) * 2019-12-31 2024-03-22 上海依图网络科技有限公司 Scheduling method, device, machine-readable medium and system for system resources
CN111160810A (en) * 2020-01-09 2020-05-15 中国地质大学(武汉) Workflow-based high-performance distributed spatial analysis task scheduling method and system
CN113742053A (en) * 2020-05-29 2021-12-03 中国电信股份有限公司 Container resource allocation method and device
CN111679900B (en) * 2020-06-15 2023-10-31 杭州海康威视数字技术股份有限公司 Task processing method and device
CN112346863B (en) * 2020-10-28 2024-06-07 河北冀联人力资源服务集团有限公司 Method and system for processing dynamic adjustment data of computing resources
CN112579268A (en) Efficient out-of-order unified shader array multi-task parallel processing method
CN114726869A (en) * 2022-04-02 2022-07-08 中国建设银行股份有限公司 Resource management method and device, storage medium and electronic equipment
CN117407178B (en) * 2023-12-14 2024-04-02 成都凯迪飞研科技有限责任公司 Acceleration sub-card management method and system for self-adaptive load distribution
CN117785487B (en) * 2024-02-27 2024-05-24 融科联创(天津)信息技术有限公司 Method, device, equipment and medium for scheduling computing power resources

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104852860A (en) * 2015-05-04 2015-08-19 四川大学 Queue-based multi-target scheduling strategy for heterogeneous resources
CN106874112A (en) * 2017-01-17 2017-06-20 华南理工大学 A kind of workflow earth-filling method of combination load balancing
CN107045456A (en) * 2016-02-05 2017-08-15 华为技术有限公司 A kind of resource allocation methods and explorer

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090077561A1 (en) * 2007-07-05 2009-03-19 International Business Machines Corporation Pipeline Processing Method and Apparatus in a Multi-processor Environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104852860A (en) * 2015-05-04 2015-08-19 四川大学 Queue-based multi-target scheduling strategy for heterogeneous resources
CN107045456A (en) * 2016-02-05 2017-08-15 华为技术有限公司 A kind of resource allocation methods and explorer
CN106874112A (en) * 2017-01-17 2017-06-20 华南理工大学 A kind of workflow earth-filling method of combination load balancing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Energy efficient scheduling of virtual machines in cloud with deadline; Youwei Ding et al.; Future Generation Computer Systems; 2015-09-30; full text *

Also Published As

Publication number Publication date
CN107992359A (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN107992359B (en) Task scheduling method for cost perception in cloud environment
WO2020181813A1 (en) Task scheduling method based on data processing and related device
CN111427679B (en) Computing task scheduling method, system and device for edge computing
US8185908B2 (en) Dynamic scheduling in a distributed environment
US7076781B2 (en) Resource reservation for large-scale job scheduling
CN111399989B (en) Container cloud-oriented task preemption and scheduling method and system
CN111381950A (en) Task scheduling method and system based on multiple copies for edge computing environment
US20100125847A1 (en) Job managing device, job managing method and job managing program
CN109857535B (en) Spark JDBC-oriented task priority control implementation method and device
WO2024021489A1 (en) Task scheduling method and apparatus, and kubernetes scheduler
CN105373426B (en) A kind of car networking memory aware real time job dispatching method based on Hadoop
CN111782627B (en) Task and data cooperative scheduling method for wide-area high-performance computing environment
CN116467076A (en) Multi-cluster scheduling method and system based on cluster available resources
CN115237568A (en) Mixed weight task scheduling method and system for edge heterogeneous equipment
CN109491775A (en) Task processing and dispatching method under a kind of environment for edge calculations
CN106407007B (en) Cloud resource configuration optimization method for elastic analysis process
CN110048966B (en) Coflow scheduling method for minimizing system overhead based on deadline
CN106201681A (en) Task scheduling algorithm based on pre-release the Resources list under Hadoop platform
CA2631255A1 (en) Scalable scheduling of tasks in heterogeneous systems
CN110084507A (en) Classification-aware scientific workflow scheduling optimization method in a cloud computing environment
CN111061565A (en) Two-stage pipeline task scheduling method and system in Spark environment
CN112306642B (en) Workflow scheduling method based on stable matching game theory
CN110262896A (en) A kind of data processing accelerated method towards Spark system
CN111930485B (en) Job scheduling method based on performance expression
CN106534312B (en) A kind of service request selection of facing mobile apparatus and dispatching method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant