CN115794323A - Task scheduling method, device, server and storage medium - Google Patents

Task scheduling method, device, server and storage medium

Info

Publication number
CN115794323A
Authority
CN
China
Prior art keywords
tasks
task
scheduling
energy consumption
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111064231.5A
Other languages
Chinese (zh)
Inventor
邓凌越
黄晨昱
罗志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datang Mobile Communications Equipment Co Ltd
Original Assignee
Datang Mobile Communications Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datang Mobile Communications Equipment Co Ltd filed Critical Datang Mobile Communications Equipment Co Ltd
Priority to CN202111064231.5A priority Critical patent/CN115794323A/en
Publication of CN115794323A publication Critical patent/CN115794323A/en
Pending legal-status Critical Current

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a task scheduling method, a task scheduling device, a server and a storage medium, and relates to the field of communications technology. The specific implementation scheme is as follows: a plurality of tasks to be executed are acquired, the tasks are sorted according to their priorities, the tasks are scheduled according to the sorting interval in which each task's rank falls, and the tasks within the same sorting interval are scheduled to different virtual machines. While the priorities of the tasks are preserved, tasks with similar priorities are spread across the virtual machines, which improves the load balance on the virtual machines and raises resource utilization; at the same time, task scheduling is designed comprehensively from multiple angles, including priority, achieving comprehensive optimization of task scheduling, reducing processing time and reducing energy consumption.

Description

Task scheduling method, device, server and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a task scheduling method, apparatus, server, and storage medium.
Background
With the continuous development of communication technology, requirements on communication quality keep rising. To reduce communication delay and realize integrated, converged networking, the mobile core network needs to be rebuilt with an edge computing architecture. When a Mobile Edge Computing (MEC) server has insufficient resources to process compute-intensive tasks, an effective task scheduling policy needs to be implemented to improve resource utilization and thereby reduce processing time.
Disclosure of Invention
The application provides a task scheduling method, a task scheduling device, a server and a storage medium.
According to a first aspect of the present application, a task scheduling method is provided, which is applied to a server, and the method includes:
acquiring a plurality of tasks to be executed;
sorting the plurality of tasks according to the priorities of the plurality of tasks;
and scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
Optionally, the scheduling the plurality of tasks according to the sorting interval in which the sorting order is located, and scheduling each task in the same sorting interval to a different virtual machine includes: generating constraint conditions according to each sequencing interval; determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay of each task when the tasks are executed on each virtual machine respectively, and the energy consumption is the energy consumption of terminal equipment for generating the tasks; under the constraint condition, solving the objective function to obtain a scheduling matrix; and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
Optionally, the determining an objective function according to the processing delay and the energy consumption includes: determining the total processing time delay of the virtual machine which completes the task finally according to the processing time delay of each task when being executed on each virtual machine; determining total energy consumption according to the energy consumption of the terminal equipment generating each task; and weighting and summing the total processing time delay and the total energy consumption to obtain the objective function.
Optionally, under the constraint condition, solving the objective function to obtain a scheduling matrix includes: solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting the constraint condition; wherein the scheduling matrix minimizes a function value of the objective function.
Optionally, before the sorting the plurality of tasks according to the priorities of the plurality of tasks, the method further includes: and calculating the priorities of the tasks according to at least one of the user authority category of each terminal device generating each task, the urgency degree of each task and the calculation amount of each task.
According to a second aspect of the present application, there is provided a server comprising a memory, a transceiver, and a processor;
a memory for storing a computer program; a transceiver for transceiving data under control of the processor; a processor for reading the computer program in the memory and performing the following operations:
acquiring a plurality of tasks to be executed;
sorting the plurality of tasks according to the priorities of the plurality of tasks;
and scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
Optionally, the scheduling the plurality of tasks according to the sorting interval in which the sorting order is located, and scheduling each task in the same sorting interval to a different virtual machine includes: generating constraint conditions according to each sequencing interval; determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay when each task is executed on each virtual machine, and the energy consumption is the energy consumption of terminal equipment for generating the tasks; under the constraint condition, solving the objective function to obtain a scheduling matrix; and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
Optionally, the determining an objective function according to the processing delay and the energy consumption includes: determining the total processing time delay of the virtual machine which completes the task finally according to the processing time delay of each task when being executed on each virtual machine; determining total energy consumption according to the energy consumption of the terminal equipment generating each task; and weighting and summing the total processing time delay and the total energy consumption to obtain the objective function.
Optionally, under the constraint condition, solving the objective function to obtain a scheduling matrix includes: solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting the constraint condition; wherein the scheduling matrix minimizes a function value of the objective function.
Optionally, before the sorting the plurality of tasks according to the priorities of the plurality of tasks, the method further includes: and calculating the priorities of the tasks according to at least one of the user authority category of each terminal device generating each task, the urgency degree of each task and the calculation amount of each task.
According to a third aspect of the present application, there is provided a task scheduling apparatus comprising:
an acquisition unit configured to acquire a plurality of tasks to be executed;
the sequencing unit is used for sequencing the tasks according to the priorities of the tasks;
and the scheduling unit is used for scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
According to a fourth aspect of the present application, there is provided a processor-readable storage medium having stored thereon a computer program for causing a processor to execute the task scheduling method of the first aspect.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the method and the system, the server obtains a plurality of tasks to be executed, the tasks are sorted according to the priorities of the tasks, the tasks are scheduled according to a sorting interval where a sorting sequence is located, and the tasks in the same sorting interval are scheduled to different virtual machines. Therefore, the method enables each task in the same sequencing interval to be scheduled to different virtual machines, enables the tasks with similar priorities to be dispersed to each virtual machine under the condition of ensuring the priorities of the tasks, improves the load balance degree on the virtual machines, improves the resource utilization rate, and meanwhile, the method comprehensively designs task scheduling from multiple angles such as the priorities, realizes comprehensive optimization of the task scheduling, reduces the processing time and reduces the energy consumption.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flowchart of a task scheduling method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another task scheduling method provided in an embodiment of the present application;
FIG. 3 is a diagram illustrating a process for solving an objective function under constraint conditions according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a task scheduling apparatus according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, the term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In the embodiments of the present application, the term "plurality" means two or more, and other terms are similar thereto.
In the related art, task scheduling research mainly aims to reduce the maximum completion time, improve resource utilization, reduce energy consumption, ensure load balance, and so on, while the priority requirements of tasks are often ignored. Different users and different tasks have different priority levels, and tasks with high priority should be processed preferentially.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the application provide a task scheduling method, a task scheduling device, a server and a storage medium, which improve the load balance of the virtual machines and realize comprehensive optimization of task scheduling.
Fig. 1 is a flowchart illustrating a task scheduling method according to an embodiment of the present application. It should be noted that the method is executed by a server.
As shown in fig. 1, a task scheduling method according to an embodiment of the present application mainly includes the following steps:
step S101, a plurality of tasks to be executed are acquired.
A task is sent to the server by a terminal device and requires the server to perform computation on it. Different tasks may be generated by different terminal devices or by different users of one terminal device. At a certain time or within a certain period of time, the server may receive a plurality of tasks sent from different terminal devices.
And S102, sequencing the tasks according to the priorities of the tasks.
The priority refers to the priority level of the task to be processed, and the task with high priority is processed preferentially. Different tasks may have different priorities due to their urgency, user privileges, time required to be processed, and the like.
Alternatively, the priority of a task may be calculated according to at least one of the user permission category of the terminal device that generates the task, the urgency of the task, and the amount of calculation of the task.
It can be understood that the factors used to calculate task priority can be selected according to the requirements of different scenarios, and different weights can be set for different factors to obtain a more accurate priority.
And step S103, scheduling a plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
A sorting interval is a block of a fixed number of consecutive ranks in the priority ordering of the tasks. For example, the first- to tenth-ranked tasks form the first sorting interval, the eleventh- to twentieth-ranked tasks form the second sorting interval, and so on.
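As an illustration of how a task's rank maps to a sorting interval, here is a minimal Python sketch; the interval size (taken here to be the number of virtual machines, as in the later embodiment) and the helper name are assumptions, not taken from the patent.

```python
def interval_index(rank: int, interval_size: int) -> int:
    """Map a 1-based priority rank to a 0-based sorting-interval index.

    With interval_size = 10, ranks 1-10 fall in interval 0,
    ranks 11-20 in interval 1, and so on.
    """
    return (rank - 1) // interval_size

# Example: the eleventh-ranked task belongs to the second sorting interval.
assert interval_index(11, 10) == 1
```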
As mentioned in step S101, the server may receive multiple tasks at a certain time or within a certain period of time. In this step, all tasks received at that time or within that period are sorted by priority and then scheduled according to their sorting intervals, so that each task in the same sorting interval is scheduled to a different virtual machine. This avoids scheduling tasks with similar priorities, among the tasks received at the same time or within the same period, onto the same virtual machine.
It should be noted that, in order to schedule each task in the same sorting interval to a different virtual machine, the number of tasks in the sorting interval is less than or equal to the number of virtual machines.
Optionally, in a possible implementation, the number of tasks within the ordering interval is equal to the number of virtual machines.
With the task scheduling method of the embodiment of the present application, a plurality of tasks to be executed are acquired, the tasks are sorted according to their priorities, the tasks are scheduled according to the sorting interval in which each task's rank falls, and the tasks within the same sorting interval are scheduled to different virtual machines. Tasks with similar priorities are thus spread across the virtual machines while their priorities are guaranteed, which improves the load balance on the virtual machines; at the same time, task scheduling is designed comprehensively from multiple angles, including priority, achieving comprehensive optimization of task scheduling.
Fig. 2 is a flowchart illustrating another task scheduling method according to an embodiment of the present application. It should be noted that the method is executed by a server.
As shown in fig. 2, another task scheduling method according to the embodiment of the present application mainly includes the following steps:
in step S201, a plurality of tasks to be executed are acquired.
Optionally, the server obtains n tasks to be executed, with the task set T = {T_1, T_2, ..., T_a, ..., T_n}, where T_a denotes a task to be executed, a = 1, 2, ..., n.
Optionally, each task T_a carries a set of parameters: T_a = {R_a, U_a, D_a}, where R_a denotes the user permission category of the terminal device that generates task T_a, with R_a = {k | k ∈ [1, 5], k ∈ Z}; U_a denotes the urgency of task T_a, with U_a = {k | k ∈ [1, 10], k ∈ Z}; and D_a denotes the computation amount of task T_a.
Step S202, according to the priorities of the tasks, the tasks are sorted according to the priorities.
As a possible implementation, the priority of task T_a is calculated according to the user permission category of the terminal device that generates task T_a, the urgency of task T_a, and the computation amount of task T_a:
$$P_a = \varepsilon_1 \times r_a + \varepsilon_2 \times u_a + \varepsilon_3 \times d_a$$
where r_a, u_a and d_a respectively denote the normalized values of the user permission category R_a of the terminal device generating the task, the urgency U_a of the task, and the computation amount D_a of the task; a ∈ [1, n]; ε_1, ε_2, ε_3 ∈ [0, 1] are weight factors; and ε_1 + ε_2 + ε_3 = 1.
The n tasks are then sorted by priority to obtain the sorted task set T' = {T_1, T_2, ..., T_i, ..., T_n}, where T_i denotes the task whose priority rank is i, i = 1, 2, ..., n.
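To make the weighted-sum priority concrete, the following Python sketch computes P_a and sorts the tasks. The Task container, the max-based normalization of each factor, and the example weights are assumptions for illustration; the patent only requires normalized factors and weights that sum to 1.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    permission: float  # user permission category R_a (e.g. 1..5)
    urgency: float     # urgency U_a (e.g. 1..10)
    workload: float    # computation amount D_a

def sort_by_priority(tasks, eps1=0.3, eps2=0.4, eps3=0.3):
    """Compute P_a = eps1*r_a + eps2*u_a + eps3*d_a and sort in descending order."""
    assert abs(eps1 + eps2 + eps3 - 1.0) < 1e-9
    r_max = max(t.permission for t in tasks)
    u_max = max(t.urgency for t in tasks)
    d_max = max(t.workload for t in tasks)

    def priority(t: Task) -> float:
        # One possible normalization: divide each factor by its maximum in the batch.
        return (eps1 * t.permission / r_max
                + eps2 * t.urgency / u_max
                + eps3 * t.workload / d_max)

    return sorted(tasks, key=priority, reverse=True)
```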
In step S203, constraint conditions are generated according to the respective sorting intervals.
Optionally, the server has m virtual machines to process the n tasks. Let V = {V_1, V_2, ..., V_j, ..., V_m} denote the set of virtual machines, where virtual machine V_j has computing power C_j. The n tasks are dispatched to the m virtual machines for processing, and the mapping between tasks and virtual machines is represented by a scheduling matrix s; that is, the virtual machine to which each task is scheduled can be determined from the scheduling matrix s. Each element s_ij of the scheduling matrix s is defined as:
$$s_{ij} = \begin{cases} 1, & \text{if task } T_i \text{ is scheduled to virtual machine } V_j \\ 0, & \text{otherwise} \end{cases}$$
as a possible implementation manner, the number of tasks in the sorting interval is the number of virtual machines.
Thus, the number of tasks in the sorting interval is m, i.e. the 1 st to mth sorting intervals are the first sorting intervals, the m +1 nd to 2 mth sorting intervals are the second sorting intervals, and so on.
Thus, the constraint conditions are generated:
$$\text{s.t.}\quad \sum_{j=1}^{m} s_{ij} = 1 \ (\forall i),\quad \sum_{i=1}^{m} s_{ij} = 1 \ (\forall j),\quad \sum_{i=m+1}^{2m} s_{ij} = 1 \ (\forall j),\ \dots,\ \sum_{i=n-r+1}^{n} s_{ij} \le 1 \ (\forall j)$$
where s.t. denotes the constraints, r = n mod m, and mod denotes the remainder operation, i.e. r is the remainder of n divided by m.
It should be noted that, among the above constraints,
$$\sum_{j=1}^{m} s_{ij} = 1 \ (\forall i)$$
means that each row of the scheduling matrix s contains exactly one 1; that is, each task is executed by one and only one virtual machine, so no task is scheduled to two virtual machines and no task is left unscheduled;
$$\sum_{i=1}^{m} s_{ij} = 1 \ (\forall j)$$
means that, within rows 1 to m of the scheduling matrix s, each column contains exactly one 1; that is, the 1st to m-th tasks are each scheduled to a different virtual machine, i.e. the tasks in the first sorting interval are scheduled to different virtual machines;
$$\sum_{i=m+1}^{2m} s_{ij} = 1 \ (\forall j)$$
means that, within rows m+1 to 2m of the scheduling matrix s, each column contains exactly one 1; that is, the (m+1)-th to 2m-th tasks are each scheduled to a different virtual machine, i.e. the tasks in the second sorting interval are scheduled to different virtual machines; and so on,
$$\sum_{i=n-r+1}^{n} s_{ij} \le 1 \ (\forall j)$$
means that the tasks in the last sorting interval (which contains r tasks when r > 0) are scheduled to different virtual machines.
It can be understood that the constraint conditions may ensure that, when scheduling is performed according to the scheduling matrix, each task in the same sequencing interval may be scheduled to a different virtual machine, that is, two tasks in the same sequencing interval may not be executed on one virtual machine.
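As a sanity check of what these constraints express, the sketch below verifies that a candidate 0/1 scheduling matrix assigns each task to exactly one virtual machine and never places two tasks from the same sorting interval on the same virtual machine; the function name and the list-of-lists representation are assumptions for illustration.

```python
def satisfies_constraints(s, m):
    """s: n x m 0/1 scheduling matrix (list of lists); m: number of virtual machines."""
    n = len(s)
    # Each row contains exactly one 1: every task runs on one and only one VM.
    if any(sum(row) != 1 for row in s):
        return False
    # Within each sorting interval of m consecutive tasks (the last interval
    # holds r = n mod m tasks when r > 0), each column contains at most one 1.
    for start in range(0, n, m):
        block = s[start:start + m]
        for j in range(m):
            if sum(row[j] for row in block) > 1:
                return False
    return True
```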
Step S204, according to the processing time delay of each task when being executed on each virtual machine, determining the total processing time delay of the virtual machine which finishes the task finally.
Optionally, the processing delay of the task when executed on the virtual machine includes a transmission delay and a computation delay.
The computation delay of task T_i on virtual machine V_j is:
$$t^{comp}_{ij} = \frac{D_i}{C_j}$$
where D_i denotes the computation amount of the task and C_j denotes the computing power of the virtual machine.
In a bandwidth-limited channel environment with noise interference, the rate at which task T_i is transmitted from the terminal device to the edge computing server over the communication channel is:
$$R_i = B \log_2\left(1 + \frac{P h}{\sigma}\right)$$
where B is the network bandwidth, σ is the noise power, h is the channel power gain, and P is the transmission power.
Thus, the transmission time of task T_i to the edge server is:
$$t^{trans}_{i} = \frac{D_i}{R_i}$$
Then the processing delay of task T_i when executed on virtual machine V_j is:
$$t_{ij} = t^{comp}_{ij} + t^{trans}_{i}$$
thus, the total processing latency of the virtual machine that results in the last completed task can be expressed as:
Figure BDA0003257715060000085
it can be understood that, because each virtual machine in the server executes each task simultaneously, the total processing latency of all tasks processed on the virtual machine in the server is the total processing latency of the virtual machine that completes the task last, and therefore, the above expression represents the latency of the virtual machine in the server completing all tasks.
Step S205, determining total energy consumption according to the energy consumption of the terminal equipment generating each task.
Optionally, the energy consumption of the terminal device generating the task includes transmission energy consumption and waiting energy consumption. The transmission energy consumption refers to energy consumption generated by the terminal device during task transmission, and the waiting energy consumption refers to energy consumption generated by the terminal device during the task processing and calculation by the virtual machine.
Thus, when task T_i is executed on virtual machine V_j, the energy consumption of the terminal device that generates task T_i is:
$$E_{ij} = P_{i,r} \, t^{trans}_{i} + P_{i,e} \, t^{comp}_{ij}$$
where P_{i,r} denotes the transmission power of the terminal device and P_{i,e} denotes the idle power of the terminal device.
Then the total energy consumption can be expressed as:
$$E_{total} = \sum_{i=1}^{n} \sum_{j=1}^{m} s_{ij} \, E_{ij}$$
and step S206, carrying out weighted summation on the total processing time delay and the total energy consumption to obtain an objective function.
Optionally, according to the total processing delay obtained in step S204 and the total energy consumption obtained in step S205, in order to optimize the scheduling, and with the goal of simultaneously reducing the total processing delay and the total energy consumption, an objective function is obtained as follows:
$$\min_{s} \; F = \lambda \, T_{total} + (1 - \lambda) \, E_{total}$$
where λ is the weight coefficient balancing processing delay and energy consumption. It can be understood that the value of λ may vary with the requirements of the scenario, and λ ∈ [0, 1].
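Combining the delay and energy terms gives the scalar objective, sketched below; the energy model (transmit power times transmission time plus idle power times computation time) follows the description above, and the function names and matrix representation are assumptions for illustration.

```python
def task_energy(P_tx, t_trans, P_idle, t_comp):
    """Terminal energy for one task: transmission energy + waiting energy."""
    return P_tx * t_trans + P_idle * t_comp

def objective(s, t, E, lam=0.5):
    """F = lam * total processing delay + (1 - lam) * total terminal energy.

    s: 0/1 scheduling matrix; t[i][j]: delay of task i on VM j;
    E[i][j]: terminal energy if task i runs on VM j; lam in [0, 1].
    """
    n, m = len(t), len(t[0])
    makespan = max(sum(s[i][j] * t[i][j] for i in range(n)) for j in range(m))
    total_energy = sum(s[i][j] * E[i][j] for i in range(n) for j in range(m))
    return lam * makespan + (1 - lam) * total_energy
```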
And step S207, solving the objective function under the constraint condition to obtain a scheduling matrix.
Optionally, under the constraint condition in step S203, the objective function in step S206 is solved.
As a possible implementation, the objective function is solved using the branch-and-bound method to obtain a scheduling matrix that meets the constraint conditions.
Optionally, in order to reduce the difficulty of solving the objective function, the objective function and the constraint condition are first converted into a mixed integer linear programming problem:
order to
Figure BDA0003257715060000093
The objective function is then:
Figure BDA0003257715060000094
the constraint conditions are as follows:
Figure BDA0003257715060000095
the mixed integer linear programming problem is then solved by branch-and-bound method. In order to reduce the complexity of the algorithm calculation solution, firstly, linear programming pretreatment is carried out on the problem, redundant constraint conditions are removed, the problem scale is reduced, then, the integer constraint of variables is not considered, and whether the integer solution is continuously solved is determined by judging whether the relaxation problem is feasible or not. The optimal solution is then systematically searched using branch-and-bound algorithms. For ease of understanding, the process of solving the objective function under the constraints is shown in FIG. 3.
Optionally, in order to reduce the computational complexity and improve the computational efficiency, the original problem is first transformed, and the process is as follows:
First, linear programming preprocessing is performed on the problem, and redundant constraints are removed.
Then, disregarding the integer constraints on the variables, it is first judged whether the preprocessed relaxed problem has a feasible solution. If it does, it is judged whether the problem has an integer solution; otherwise, the original problem has no feasible solution and the solving process ends.
If the problem has an integer solution, that integer solution is the optimal solution; if it has no integer solution, the optimal solution is searched for using the branch-and-bound method.
The process of searching for the optimal solution using the branch-and-bound method comprises the following steps:
a. Generate a root node.
b. Find a feasible solution of the original problem.
c. Judge whether the feasible solution is the optimal solution; if so, take it as the optimal solution found and end the solving process; if not, execute step d.
d. Judge whether the solution meets the convergence condition; if so, take it as the optimal solution and end the solving process; if not, execute step e.
e. Judge whether the preset computation time has been exceeded; if so, take the current solution as an approximate solution and end the solving process; if not, return to step b.
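The mixed integer linear program above can also be handed to an off-the-shelf solver; the sketch below uses the PuLP library, whose default CBC backend applies LP relaxation with branch and bound, purely as an illustration. The patent does not prescribe a particular solver, and the parameter names are assumptions.

```python
import pulp

def solve_schedule(t, E, lam, m):
    """t[i][j]: delay of task i on VM j; E[i][j]: terminal energy of task i on VM j;
    lam: weight in [0, 1]; m: number of virtual machines. Returns a 0/1 matrix."""
    n = len(t)
    prob = pulp.LpProblem("task_scheduling", pulp.LpMinimize)
    s = [[pulp.LpVariable(f"s_{i}_{j}", cat="Binary") for j in range(m)]
         for i in range(n)]
    W = pulp.LpVariable("W", lowBound=0)  # auxiliary variable for the max VM load

    # Objective: weighted sum of makespan and total terminal energy.
    prob += lam * W + (1 - lam) * pulp.lpSum(
        s[i][j] * E[i][j] for i in range(n) for j in range(m))

    # W bounds the summed delay on every virtual machine (linearized max).
    for j in range(m):
        prob += pulp.lpSum(s[i][j] * t[i][j] for i in range(n)) <= W

    # Each task is scheduled to exactly one virtual machine.
    for i in range(n):
        prob += pulp.lpSum(s[i][j] for j in range(m)) == 1

    # Tasks in the same sorting interval go to different virtual machines.
    for start in range(0, n, m):
        for j in range(m):
            prob += pulp.lpSum(
                s[i][j] for i in range(start, min(start + m, n))) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[int(round(pulp.value(s[i][j]))) for j in range(m)] for i in range(n)]
```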
And step S208, scheduling a plurality of tasks according to the scheduling matrix.
Optionally, after finding the optimal solution, scheduling the plurality of tasks according to the optimal scheduling matrix obtained by solving, and scheduling the tasks to the corresponding virtual machines for processing.
With the task scheduling method of this embodiment, a plurality of tasks to be executed are acquired; the tasks are sorted according to their priorities; constraint conditions are generated from the sorting intervals; the total processing delay of the virtual machine that finishes last is determined from the processing delay of each task on each virtual machine; the total energy consumption is determined from the energy consumption of the terminal devices generating the tasks; the total processing delay and the total energy consumption are weighted and summed to obtain the objective function; the objective function is solved under the constraints to obtain the scheduling matrix; and the tasks are scheduled according to the scheduling matrix. The method schedules tasks with similar priorities in the same sorting interval to different virtual machines, so that tasks with similar priorities are spread across the virtual machines while their priorities are guaranteed, improving the load balance on the virtual machines. At the same time, task scheduling is designed comprehensively from multiple angles, including delay, energy consumption, load balance and task priority, achieving comprehensive optimization of task scheduling, reducing the computational complexity of finding the optimal solution of the objective function, and improving processing efficiency.
In order to implement the foregoing embodiment, an embodiment of the present application further provides a server, and fig. 4 is a schematic structural diagram of a server provided in the embodiment of the present application.
As shown in fig. 4, the server includes: memory 401, transceiver 402, and processor 403.
A memory 401 for storing a computer program; a transceiver 402 for transceiving data under the control of the processor; a processor 403 for reading the computer program in the memory and performing the following operations:
acquiring a plurality of tasks to be executed;
sorting the plurality of tasks according to the priorities of the plurality of tasks;
and scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
As a possible implementation manner, scheduling a plurality of tasks according to a sorting interval in which a sorting order is located, and scheduling each task in the same sorting interval to different virtual machines includes: generating constraint conditions according to each sequencing interval; determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay when each task is executed on each virtual machine, and the energy consumption is the energy consumption of terminal equipment for generating the tasks; under the constraint condition, solving the objective function to obtain a scheduling matrix; and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
As a possible implementation, determining the objective function according to the processing delay and the energy consumption includes: determining the total processing time delay of the virtual machine which completes the task finally according to the processing time delay of each task when being executed on each virtual machine; determining total energy consumption according to the energy consumption of the terminal equipment generating each task; and carrying out weighted summation on the total processing time delay and the total energy consumption to obtain an objective function.
As a possible implementation, solving the objective function under the constraint condition to obtain a scheduling matrix includes: solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting constraint conditions; wherein the scheduling matrix minimizes the function value of the objective function.
As a possible implementation manner, before the sorting the plurality of tasks according to the priorities, the method further includes: and calculating the priorities of the tasks according to at least one of the user authority categories of the terminal devices generating the tasks, the urgency degree of the tasks and the calculation amount of the tasks.
As a possible implementation, the constraint is: a sub-matrix of the scheduling matrix is a permutation matrix; the sub-matrix is a matrix formed by tasks and virtual machines in any sequencing interval.
It should be noted that, the server provided in the embodiment of the present application can implement all the method steps implemented by the method embodiments in fig. 1 to fig. 2, and can achieve the same technical effect, and details of the same parts and beneficial effects as those of the method embodiments in this embodiment are not repeated herein.
In order to implement the foregoing embodiments, an embodiment of the present application further provides a task scheduling device. The task scheduling device is disposed in a server, and fig. 5 is a schematic structural diagram of a task scheduling device according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus includes: an acquisition unit 501, a sorting unit 502 and a scheduling unit 503.
The acquiring unit 501 is configured to acquire a plurality of tasks to be executed;
a sorting unit 502, configured to sort, according to priorities of the plurality of tasks, the plurality of tasks according to the priorities;
a scheduling unit 503, configured to schedule the multiple tasks according to the sorting interval in which the sorting order is located, and schedule each task in the same sorting interval to a different virtual machine.
As a possible implementation manner, the scheduling unit 503 is specifically configured to: generating constraint conditions according to each sequencing interval; determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay when each task is executed on each virtual machine, and the energy consumption is the energy consumption of terminal equipment for generating the tasks; under the constraint condition, solving the objective function to obtain a scheduling matrix; and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
As a possible implementation manner, the scheduling unit 503 is specifically configured to: determining the total processing time delay of the virtual machine which finishes the task finally according to the processing time delay of each task when being executed on each virtual machine; determining total energy consumption according to the energy consumption of the terminal equipment generating each task; and carrying out weighted summation on the total processing time delay and the total energy consumption to obtain an objective function.
As a possible implementation manner, the scheduling unit 503 is further specifically configured to: solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting constraint conditions; wherein the scheduling matrix minimizes the function value of the objective function.
As a possible implementation manner, before the sorting the plurality of tasks according to the priorities, the method further includes: and calculating the priorities of the tasks according to at least one of the user authority categories of the terminal devices generating the tasks, the urgency degree of the tasks and the calculation amount of the tasks.
As a possible implementation, the constraint is: the sub-matrix of the scheduling matrix is a permutation matrix; the submatrix is a matrix formed by tasks and virtual machines in any sequencing interval.
It should be noted that, the task scheduling apparatus provided in the embodiment of the present application can implement all the method steps implemented by the method embodiments of fig. 1 to fig. 2, and can achieve the same technical effect, and details of the same parts and beneficial effects as those of the method embodiments in this embodiment are not repeated herein.
It should be noted that, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a processor readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network-side device, etc.) or a processor (processor) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The present application further provides a processor-readable storage medium according to an embodiment of the present application.
The processor-readable storage medium stores a computer program for causing the processor to execute the task scheduling method according to the embodiment of fig. 1-2.
The processor-readable storage medium can be any available medium or data storage device that can be accessed by a processor, including but not limited to magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), solid State Disks (SSDs)), etc.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be stored in a processor-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the processor-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments are not intended to limit the scope of the present disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A task scheduling method is applied to a server, and comprises the following steps:
acquiring a plurality of tasks to be executed;
sorting the plurality of tasks according to the priorities of the plurality of tasks;
and scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
2. The method according to claim 1, wherein the scheduling the plurality of tasks according to the sorting interval in which the sorting order is located, and the scheduling each task in the same sorting interval to a different virtual machine includes:
generating constraint conditions according to each sequencing interval;
determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay when each task is executed on each virtual machine, and the energy consumption is the energy consumption of terminal equipment for generating the tasks;
under the constraint condition, solving the objective function to obtain a scheduling matrix;
and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
3. The method of claim 2, wherein determining the objective function according to processing delays of the tasks executed on the virtual machines respectively and according to energy consumption of terminal devices generating the tasks comprises:
determining the total processing time delay of the virtual machine which completes the task finally according to the processing time delay of each task when being executed on each virtual machine;
determining total energy consumption according to the energy consumption of the terminal equipment generating each task;
and weighting and summing the total processing time delay and the total energy consumption to obtain the objective function.
4. The method according to any one of claims 1-3, wherein solving the objective function under the constraint to obtain a scheduling matrix comprises:
solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting the constraint condition;
wherein the scheduling matrix minimizes a function value of the objective function.
5. The method of claim 1, wherein, before the sorting the plurality of tasks according to the priorities of the plurality of tasks, the method further comprises:
and calculating the priorities of the tasks according to at least one of the user authority category of each terminal device generating each task, the urgency degree of each task and the calculation amount of each task.
6. The method of claim 2, wherein the constraint is:
the sub-matrix of the scheduling matrix is a permutation matrix;
and the sub-matrix is a matrix formed by tasks in any sequencing interval and the virtual machine.
7. A server comprising a memory, a transceiver, and a processor;
a memory for storing a computer program; a transceiver for transceiving data under control of the processor; a processor for reading the computer program in the memory and performing the following operations:
acquiring a plurality of tasks to be executed;
sorting the plurality of tasks according to the priorities of the plurality of tasks;
and scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
8. The server according to claim 7, wherein the scheduling the plurality of tasks according to the sorting interval in which the sorting order is located, and the scheduling each task in the same sorting interval to a different virtual machine includes:
generating constraint conditions according to each sequencing interval;
determining an objective function according to processing time delay and energy consumption, wherein the processing time delay is the processing time delay when each task is executed on each virtual machine, and the energy consumption is the energy consumption of terminal equipment for generating the tasks;
under the constraint condition, solving the objective function to obtain a scheduling matrix;
and determining the virtual machine to which each task is scheduled according to the scheduling matrix.
9. The server according to claim 8, wherein the determining an objective function according to processing latency and energy consumption comprises:
determining the total processing time delay of the virtual machine which completes the task finally according to the processing time delay of each task when being executed on each virtual machine;
determining total energy consumption according to the energy consumption of the terminal equipment generating each task;
and weighting and summing the total processing time delay and the total energy consumption to obtain the objective function.
10. The server according to any one of claims 7-9, wherein solving the objective function under the constraint condition to obtain a scheduling matrix comprises:
solving the objective function by using a branch-and-bound method to obtain a scheduling matrix meeting the constraint condition;
wherein the scheduling matrix minimizes a function value of the objective function.
11. The server according to claim 7, wherein before the sorting the plurality of tasks according to the priorities of the plurality of tasks, further comprising:
and calculating the priorities of the tasks according to at least one of the user authority category of each terminal device generating each task, the urgency degree of each task and the calculation amount of each task.
12. The server according to claim 8, wherein the constraint is:
the sub-matrix of the scheduling matrix is a permutation matrix;
and the sub-matrix is a matrix formed by tasks in any sequencing interval and the virtual machine.
13. A task scheduling apparatus, comprising:
an acquisition unit configured to acquire a plurality of tasks to be executed;
the sequencing unit is used for sequencing the tasks according to the priorities of the tasks;
and the scheduling unit is used for scheduling the plurality of tasks according to the sorting interval in which the sorting sequence is positioned, and scheduling each task in the same sorting interval to different virtual machines.
14. A processor-readable storage medium, characterized in that the processor-readable storage medium stores a computer program for causing a processor to perform the method of any one of claims 1 to 6.
CN202111064231.5A 2021-09-10 2021-09-10 Task scheduling method, device, server and storage medium Pending CN115794323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111064231.5A CN115794323A (en) 2021-09-10 2021-09-10 Task scheduling method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111064231.5A CN115794323A (en) 2021-09-10 2021-09-10 Task scheduling method, device, server and storage medium

Publications (1)

Publication Number Publication Date
CN115794323A true CN115794323A (en) 2023-03-14

Family

ID=85416882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111064231.5A Pending CN115794323A (en) 2021-09-10 2021-09-10 Task scheduling method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN115794323A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116151315A (en) * 2023-04-04 2023-05-23 之江实验室 Attention network scheduling optimization method and device for on-chip system
CN116151315B (en) * 2023-04-04 2023-08-15 之江实验室 Attention network scheduling optimization method and device for on-chip system
CN117319505A (en) * 2023-11-30 2023-12-29 天勰力(山东)卫星技术有限公司 Satellite task order-robbing system facing software-defined satellite shared network
CN117319505B (en) * 2023-11-30 2024-02-06 天勰力(山东)卫星技术有限公司 Satellite task order-robbing system facing software-defined satellite shared network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination