CN114675953A - Resource dynamic scheduling method, device, equipment and computer readable storage medium - Google Patents

Resource dynamic scheduling method, device, equipment and computer readable storage medium Download PDF

Info

Publication number
CN114675953A
CN114675953A (application CN202210319224.3A)
Authority
CN
China
Prior art keywords
scheduling
resource
sequence
batch
schedulable
Prior art date
Legal status
Pending
Application number
CN202210319224.3A
Other languages
Chinese (zh)
Inventor
张佳伟
孙思清
张勇
石光银
蔡卫卫
高传集
Current Assignee
Inspur Cloud Information Technology Co Ltd
Original Assignee
Inspur Cloud Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Inspur Cloud Information Technology Co Ltd filed Critical Inspur Cloud Information Technology Co Ltd
Priority: CN202210319224.3A
Publication: CN114675953A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to the field of cloud computing and discloses a resource dynamic scheduling method, device, equipment, and computer-readable storage medium. When the number of a user's resource scheduling tasks is greater than the number of schedulable computing nodes, the resource scheduling tasks are divided into batches according to the number of schedulable computing nodes. A simulated annealing algorithm is then used to place each batch of resource scheduling tasks onto the schedulable computing nodes, with the optimization objective of minimizing the global resource utilization rate of the schedulable computing nodes for that batch. Globally optimizing each batch of resource scheduling tasks achieves global optimization of overall cluster scheduling, reduces scheduling difficulty, improves scheduling efficiency, and alleviates the load-imbalance problem in current cloud computing. On this basis, the multi-objective optimization problem in scheduling can be addressed by configuring the evaluation of the optimization objective, which suits the automated management requirements of current cloud computing clusters.

Description

Resource dynamic scheduling method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of cloud computing, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for dynamically scheduling resources.
Background
With the growth of the Internet, the massive data generated by servers has driven the rapid development of cloud computing, which refers to the unified management and scheduling of large numbers of network-connected computing resources to form a computing resource pool that provides services to users. One of the goals of cloud computing is the efficient utilization of resources, so resource scheduling in a cloud computing environment is a critical problem that must consider the workload, frequency, and scale of resource allocation and task scheduling. The resource scheduling problem is to allocate a certain number of tasks to appropriate nodes for execution so that overall execution efficiency is maximized, while also keeping the load of the cluster nodes balanced.
Traditional scheduling algorithms typically optimize one resource scheduling task at a time. They struggle to achieve the most efficient scheduling of a cluster, cannot consider the load of cluster nodes from a global perspective, and cannot meet the cluster's multi-objective scheduling requirements, so the scheduling result is poor and cluster-optimal scheduling is not achieved. Moreover, scheduling resources one by one is inefficient and cannot meet the resource scheduling requirements of today's large-scale clusters.
Providing a more optimal resource scheduling scheme is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a method, a device and equipment for dynamically scheduling resources and a computer readable storage medium, so that global optimization of overall scheduling of a cluster is realized, and meanwhile, scheduling efficiency is improved.
In order to solve the above technical problem, the present application provides a method for dynamically scheduling resources, including:
when the number of resource scheduling tasks of a user is greater than the number of schedulable computing nodes, the resource scheduling tasks are batched, so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes;
starting from the first batch of the resource scheduling tasks, applying a simulated annealing algorithm to optimize to obtain a scheduling strategy of the batch by taking the goal of achieving the lowest global resource utilization rate of each schedulable computing node of the batch after the resource scheduling tasks of the batch are put into the schedulable computing nodes for scheduling until the scheduling strategy of each batch is obtained;
executing the dispatching of each resource dispatching task according to the dispatching strategy of each batch;
in one scheduling policy, each schedulable computing node allocates at most one resource scheduling task.
Optionally, the batching of the resource scheduling tasks so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes includes:
analyzing the dependency relationships among the resource scheduling tasks according to the parameter information of the resource scheduling tasks, and sorting the resource scheduling tasks by priority;
batching the resource scheduling tasks according to their order while ensuring both that resource scheduling tasks with dependency relationships are assigned to different batches and that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes.
Optionally, the resource scheduling tasks are batched so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes, specifically:
the remainder of dividing the number of resource scheduling tasks by the number of schedulable computing nodes is taken as the number of resource scheduling tasks in the last batch, and the number of resource scheduling tasks in every other batch equals the number of schedulable computing nodes.
Optionally, the objective function of the simulated annealing algorithm is as follows:
Q(S) = min{F(S)};
wherein,
F(S) = a·F_CPU + b·F_Mem + c·F_IO + d·F_Aff + e·F_Cons;
F_CPU = (1/n)·Σ_{i=1}^{n} (c_{i1} + c_{i0}) / C_{it};
F_Mem = (1/n)·Σ_{i=1}^{n} (m_{i1} + m_{i0}) / M_{it};
F_IO = (1/n)·Σ_{i=1}^{n} (I_{i1} + I_{i0}) / I_{it};
F_Aff = Σ_{i=1}^{n} f_1(L_{i1} == L_{i0});
F_Cons = Σ_{i=1}^{n} f_2(X_{i1}, X_{i0});
Q(S) is the objective function, and F(S) is the evaluation function of a scheduling sequence generated in the simulated annealing algorithm; F_CPU is the CPU load evaluation function corresponding to the scheduling sequence and a is its weight; F_Mem is the memory load evaluation function corresponding to the scheduling sequence and b is its weight; F_IO is the IO load evaluation function corresponding to the scheduling sequence and c is its weight; F_Aff is the evaluation function of the affinity match between the resource scheduling tasks and the schedulable computing nodes under the scheduling sequence and d is its weight; F_Cons is the resource utilization rationality evaluation function corresponding to the scheduling sequence and e is its weight;
n is the number of schedulable computing nodes; c_{i1} is the CPU resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, c_{i0} is the CPU resource already used by the i-th schedulable computing node, and C_{it} is the total CPU resource of the i-th schedulable computing node;
m_{i1} is the memory resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, m_{i0} is the memory resource already used by the i-th schedulable computing node, and M_{it} is the total memory resource of the i-th schedulable computing node;
I_{i1} is the IO resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, I_{i0} is the IO resource already used by the i-th schedulable computing node, and I_{it} is the total IO resource of the i-th schedulable computing node;
f_1(L_{i1} == L_{i0}) is the affinity match function of the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, L_{i1} is the affinity required by that resource scheduling task, and L_{i0} is the affinity of the i-th schedulable computing node; when L_{i0} satisfies the affinity L_{i1}, f_1(L_{i1} == L_{i0}) = 0, and when L_{i0} does not satisfy the affinity L_{i1}, f_1(L_{i1} == L_{i0}) = 1;
f_2(X_{i1}, X_{i0}) is the resource utilization rationality function of the i-th schedulable computing node under the scheduling sequence, X_{i1} is the resource occupancy of the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, and X_{i0} is the total resource of the i-th schedulable computing node; when X_{i1} < X_{i0}, f_2(X_{i1}, X_{i0}) = 0, and when X_{i1} > X_{i0}, f_2(X_{i1}, X_{i0}) = 1.
Optionally, the optimizing by applying the simulated annealing algorithm to obtain the scheduling policy of the batch specifically includes:
starting from the initial scheduling sequence, in each iteration calculation, disturbing a previous scheduling sequence corresponding to the previous iteration times by using a preset disturbance method to obtain a current scheduling sequence corresponding to the current iteration times;
if the value of the evaluation function corresponding to the current scheduling sequence is smaller than the value of the evaluation function corresponding to the last scheduling sequence, taking the current iteration sequence as the last scheduling sequence corresponding to the next iteration number;
if the value of the evaluation function corresponding to the current scheduling sequence is larger than or equal to the value of the evaluation function corresponding to the last scheduling sequence, carrying out Metropolis judgment;
if the current scheduling sequence is judged by the Metropolis compared with the previous scheduling sequence, taking the current iteration sequence as the previous scheduling sequence corresponding to the next iteration time;
if the current scheduling sequence is not judged by the Metropolis compared with the previous scheduling sequence, taking the previous scheduling sequence as the previous scheduling sequence corresponding to the next iteration number;
if the current temperature corresponding to the current iteration times is greater than the termination temperature of the simulated annealing algorithm, multiplying the current temperature by an annealing coefficient to obtain the temperature of the next iteration times, and returning to the step of disturbing the last scheduling sequence corresponding to the last iteration times by using a preset disturbance method to obtain the current scheduling sequence corresponding to the current iteration times;
and if the current temperature corresponding to the current iteration times is less than or equal to the termination temperature, stopping iterative computation.
Optionally, after stopping the iterative computation, the method further includes:
if the resource occupation condition does not exceed the total resource amount of the schedulable computing nodes after the corresponding resource scheduling tasks are distributed to each schedulable computing node in the result scheduling sequence corresponding to the last iteration, taking the result scheduling sequence as the scheduling strategy of the resource scheduling tasks of the batch;
if the resource occupation situation exceeds the total resource quantity of the schedulable computing nodes after the resource scheduling task corresponding to the schedulable computing node is distributed in the result scheduling sequence, after the quantity of the schedulable computing nodes is reduced, returning to the step of batching the resource scheduling task until a first result scheduling sequence of the resource scheduling task of the redistributed batch is obtained; simultaneously, preemptive scheduling is started, a high-priority task in the resource scheduling tasks of the batch is exchanged with a low-priority task which runs at the schedulable computing node, the step of applying the simulated annealing algorithm to optimize the scheduling strategy of the batch is returned, and a second result scheduling sequence of the resource scheduling tasks of the batch is obtained;
and selecting the scheduling strategy of the resource scheduling task of the current batch with the smaller value of the evaluation function in the first result scheduling sequence and the second result scheduling sequence.
Optionally, the method further includes:
and when the number of the resource scheduling tasks is smaller than the number of the schedulable computing nodes, selecting the schedulable computing node with smaller node load to distribute the resource scheduling tasks according to the node load condition as the sequence of each schedulable computing node.
To solve the above technical problem, the present application further provides a device for dynamically scheduling resources, including:
the batching unit is used for batching the resource scheduling tasks when the number of resource scheduling tasks of the user is greater than the number of schedulable computing nodes, so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes;
an optimizing unit, configured to, starting from the first batch of resource scheduling tasks, apply the simulated annealing algorithm to optimize with the objective that, after the resource scheduling tasks of a batch are placed onto the schedulable computing nodes for scheduling, the global resource utilization rate of the schedulable computing nodes of that batch is the lowest, so as to obtain the scheduling policy of that batch, until the scheduling policy of every batch is obtained;
the dispatching unit is used for executing the dispatching of each resource dispatching task according to the dispatching strategy of each batch;
in one scheduling policy, each schedulable computing node allocates at most one resource scheduling task.
To solve the above technical problem, the present application further provides a device for dynamically scheduling resources, including:
a memory for storing a computer program;
a processor for executing the computer program, wherein the computer program, when executed by the processor, implements the steps of the method for dynamically scheduling resources as described in any one of the above.
To solve the above technical problem, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for dynamically scheduling resources according to any one of the above items.
According to the resource dynamic scheduling method provided by the application, when the number of a user's resource scheduling tasks is greater than the number of schedulable computing nodes, the resource scheduling tasks are batched according to the number of schedulable computing nodes, which reduces the complexity of the many-to-many scheduling in traditional scheduling. A simulated annealing algorithm is then used to place each batch of resource scheduling tasks onto the schedulable computing nodes with the optimization objective of minimizing the global resource utilization rate of the schedulable computing nodes of that batch. Globally optimizing each batch of resource scheduling tasks preserves the randomness and global character of the optimization while obtaining a load-balanced optimal solution, so the method not only achieves global optimization of overall cluster scheduling but also reduces scheduling difficulty, improves scheduling efficiency, and alleviates the load-imbalance problem in current cloud computing. On this basis, the multi-objective optimization problem in scheduling can be addressed by configuring the evaluation of the optimization objective, providing a general and effective method for the resource scheduling problem in cloud computing environments that suits the automated management requirements of current cloud computing clusters.
The present application further provides a device, an apparatus, and a computer-readable storage medium for dynamically scheduling resources, which have the above beneficial effects, and are not described herein again.
Drawings
For a clearer explanation of the embodiments or technical solutions of the prior art of the present application, the drawings needed for the description of the embodiments or prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a method for dynamically scheduling resources according to an embodiment of the present application;
FIG. 2 is a flowchart of a simulated annealing algorithm optimization provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a dynamic resource scheduling apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a device for dynamically scheduling resources according to an embodiment of the present application.
Detailed Description
The core of the application is to provide a method, a device, equipment and a computer readable storage medium for dynamically scheduling resources, so that global optimization of overall scheduling of a cluster is realized, and scheduling efficiency is improved.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example one
Fig. 1 is a flowchart of a method for dynamically scheduling resources according to an embodiment of the present application.
As shown in fig. 1, a method for dynamically scheduling resources provided in the embodiment of the present application includes:
s101: and when the number of the resource scheduling tasks of the user is greater than the number of the adjustable calculation nodes, the resource scheduling tasks are batched, so that the number of the resource scheduling tasks of each batch is not greater than the number of the adjustable calculation nodes.
S102: and (3) from the first batch of resource scheduling tasks, optimizing to obtain the scheduling strategy of the batch by applying a simulated annealing algorithm until the scheduling strategy of each batch is obtained, wherein the optimization target is that the global resource utilization rate of each schedulable computing node of the batch is the lowest after the resource scheduling tasks of the batch are put into the schedulable computing nodes for scheduling.
S103: and executing the dispatching of each resource dispatching task according to the dispatching strategy of each batch.
In one scheduling strategy, each schedulable computing node is allocated with at most one resource scheduling task.
In specific implementation, the resource dynamic scheduling method provided by the embodiment of the present application may be executed based on a management node server in a server cluster corresponding to a cloud computing platform.
Assume the number of resource scheduling tasks is M and the number of schedulable computing nodes in the server cluster is n. When M is greater than n, the resource scheduling tasks are batched so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes. Specifically, the remainder of dividing the number of resource scheduling tasks by the number of schedulable computing nodes is the number of resource scheduling tasks in the last batch, and every other batch contains exactly as many resource scheduling tasks as there are schedulable computing nodes, so the tasks are split into ⌈M/n⌉ batches.
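As an illustration of the batching rule just described, the following is a minimal Python sketch (not part of the patent; the function name split_into_batches and the plain-list task representation are assumptions) that splits M tasks into batches of at most n tasks, with the remainder forming the last batch.

```python
def split_into_batches(tasks, n):
    """Split tasks into batches of at most n tasks each.

    All batches except possibly the last contain exactly n tasks;
    the last batch holds the remainder of len(tasks) divided by n.
    """
    if n <= 0:
        raise ValueError("number of schedulable computing nodes must be positive")
    return [tasks[i:i + n] for i in range(0, len(tasks), n)]


# Example: 7 resource scheduling tasks, 3 schedulable computing nodes
# -> batches of sizes 3, 3, 1 (the remainder 7 mod 3 = 1 forms the last batch).
print(split_into_batches("T1 T2 T3 T4 T5 T6 T7".split(), 3))
```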
Further, in order to accommodate resource scheduling tasks with different priorities and different resource demands, and to facilitate obtaining a more reasonable globally optimal solution, the batching of the resource scheduling tasks in step S101 so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes may specifically include:
analyzing the dependency relationships among the resource scheduling tasks according to the parameter information of the resource scheduling tasks, and sorting the resource scheduling tasks by priority;
batching the resource scheduling tasks according to this order while ensuring both that resource scheduling tasks with dependency relationships are assigned to different batches and that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes.
In practical application, a scoring function built in the K8s cluster may be used to score each resource scheduling task, perform priority ordering of the resource scheduling tasks according to the scores, and obtain resource requirement information of each resource scheduling task, which may specifically include requirements of a CPU, a memory, an IO, a bandwidth, affinity, and the like. In addition, other evaluation manners may also be adopted to perform priority ordering on the resource scheduling tasks, for example, multiple evaluation conditions are set for weighted comprehensive evaluation.
According to the task condition needing to be dispatched at present, parameter information of each resource to be dispatched is collected, the dependency relationship among different resources is analyzed, and the resources with the dependency relationship are distributed to different batches for batch dispatching.
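A minimal sketch of the dependency-aware batching described above, under an assumed data layout (each task carries a name, a priority score, and a set of names it depends on) and an assumed greedy placement strategy; the patent does not prescribe this exact procedure.

```python
def batch_with_dependencies(tasks, n):
    """Greedy batching: sort by priority (descending), then place each task
    into the first batch that has capacity and no dependency conflict.

    tasks: list of dicts like {"name": "t1", "priority": 5, "deps": {"t0"}}
    n: maximum number of tasks per batch (= number of schedulable nodes)
    """
    ordered = sorted(tasks, key=lambda t: t["priority"], reverse=True)
    batches = []
    for task in ordered:
        placed = False
        for batch in batches:
            names_in_batch = {t["name"] for t in batch}
            conflict = (task["deps"] & names_in_batch) or any(
                task["name"] in t["deps"] for t in batch)
            if len(batch) < n and not conflict:
                batch.append(task)
                placed = True
                break
        if not placed:
            batches.append([task])
    return batches


tasks = [
    {"name": "t1", "priority": 9, "deps": set()},
    {"name": "t2", "priority": 7, "deps": {"t1"}},   # depends on t1 -> different batch
    {"name": "t3", "priority": 5, "deps": set()},
]
for i, b in enumerate(batch_with_dependencies(tasks, n=2)):
    print(i, [t["name"] for t in b])
```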
In addition, the resource dynamic scheduling method provided by the embodiment of the present application may further include: when the number of resource scheduling tasks is smaller than the number of schedulable computing nodes, selecting the less loaded schedulable computing nodes to receive the resource scheduling tasks according to a ranking of the schedulable computing nodes by node load. That is, when M is less than n, the top M schedulable computing nodes in the ranking can be selected, and scheduling onto these M nodes can then be performed with the intelligent algorithm. Specifically, a scoring function built into the K8s cluster may also be used to score and rank the schedulable computing nodes according to their load. Other evaluation methods may also be used to rank the schedulable computing nodes, for example a weighted comprehensive evaluation over resource types such as CPU, memory, IO, bandwidth, and affinity; when setting the weight of each resource type, the weight may further reflect the quality of that resource type on the schedulable computing node, and when a certain resource type of a schedulable computing node is of poor quality, the weight of that resource type for that node is increased so that resource scheduling tasks are steered toward other schedulable computing nodes.
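The node ranking used when the number of tasks is smaller than the number of nodes can be sketched as a weighted load score, as below; the field names and weight values are illustrative assumptions, and this is not the K8s built-in scoring function itself.

```python
def node_load_score(node, weights):
    """Weighted load score: lower means less loaded, so lower is preferred."""
    return sum(weights[res] * node["used"][res] / node["total"][res]
               for res in weights)


def pick_nodes_for_tasks(nodes, num_tasks, weights):
    """Sort schedulable nodes by ascending load score and take the first
    num_tasks of them (used when num_tasks < number of nodes)."""
    ranked = sorted(nodes, key=lambda nd: node_load_score(nd, weights))
    return ranked[:num_tasks]


weights = {"cpu": 0.4, "mem": 0.4, "io": 0.2}
nodes = [
    {"name": "n1", "used": {"cpu": 6, "mem": 20, "io": 3}, "total": {"cpu": 8, "mem": 32, "io": 10}},
    {"name": "n2", "used": {"cpu": 2, "mem": 8,  "io": 1}, "total": {"cpu": 8, "mem": 32, "io": 10}},
]
print([nd["name"] for nd in pick_nodes_for_tasks(nodes, 1, weights)])  # -> ['n2']
```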
Furthermore, when the number of resource scheduling tasks is equal to the number of schedulable computing nodes, the resource dynamic scheduling method provided by the embodiment of the present application can be used directly: the batching step is skipped and optimal scheduling is performed on all the resource scheduling tasks at once.
According to the resource dynamic scheduling method provided by the embodiment of the present application, when the number of the user's resource scheduling tasks is greater than the number of schedulable computing nodes, the resource scheduling tasks are batched according to the number of schedulable computing nodes. This reduces the complexity of the many-to-many scheduling in traditional scheduling from O(M) to O(N), where M is the number of resource scheduling tasks and N is the number of batches in the scheduling process. The simulated annealing algorithm is then used with the objective that, after the resource scheduling tasks of a batch are placed onto the schedulable computing nodes for scheduling, the global resource utilization rate of the schedulable computing nodes of that batch is the lowest. Globally optimizing each batch of resource scheduling tasks preserves the randomness and global character of the optimization while obtaining a load-balanced optimal solution. The method therefore not only achieves global optimization of overall cluster scheduling but also reduces scheduling difficulty, improves scheduling efficiency, and alleviates the load-imbalance problem in current cloud computing. On this basis, the multi-objective optimization problem in scheduling can be addressed by configuring the evaluation of the optimization objective, providing a general and effective method for the resource scheduling problem in cloud computing environments that suits the automated management requirements of current cloud computing clusters.
Example two
Fig. 2 is a flowchart of optimization of a simulated annealing algorithm according to an embodiment of the present application.
On the basis of the above embodiment, the optimization objective is that, after the resource scheduling tasks of the current batch are placed onto the schedulable computing nodes for scheduling, the global resource utilization rate of the schedulable computing nodes of that batch is the lowest. Specifically, the resource types may be weighted and averaged to obtain the global resource utilization rate. The resource types considered may include but are not limited to CPU, memory, IO, bandwidth, and affinity, and other indicators may also be included; different evaluation function terms may be added and different weights set according to the actual situation and performance concerns of the cloud computing platform, so that the multi-objective optimization problem in the scheduling process can be flexibly accommodated.
Let the set of schedulable computing nodes be N = {N_1, N_2, …, N_i, …, N_n} and the set of resource scheduling tasks of the current batch be T = {T_1, T_2, …, T_j, …, T_m}, where m ≤ n. To accelerate optimization, m = n in every batch except possibly the last, which may contain fewer than n resource scheduling tasks.
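For the sketches in this embodiment, the node set N and task set T above can be represented with simple Python data classes; the field names below (cpu_used, cpu_total, affinity, and so on) are illustrative assumptions rather than notation from the patent.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A schedulable computing node N_i with current usage and totals."""
    name: str
    cpu_used: float
    cpu_total: float
    mem_used: float
    mem_total: float
    io_used: float
    io_total: float
    affinity: set = field(default_factory=set)   # labels the node offers (L_i0)


@dataclass
class Task:
    """A resource scheduling task T_j with its resource demands."""
    name: str
    cpu: float
    mem: float
    io: float
    affinity: set = field(default_factory=set)   # labels the task requires (L_i1)
```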
In a specific implementation, the embodiment of the present application selects CPU, memory, IO, affinity, and resource utilization rationality as the multiple targets for explanation, corresponding to the following evaluation function F(S):
F(S) = a·F_CPU + b·F_Mem + c·F_IO + d·F_Aff + e·F_Cons
wherein F_CPU is the CPU load evaluation function corresponding to the scheduling sequence and a is its weight. The expression of F_CPU is:
F_CPU = (1/n)·Σ_{i=1}^{n} (c_{i1} + c_{i0}) / C_{it}
where n is the number of schedulable computing nodes, c_{i1} is the CPU resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, c_{i0} is the CPU resource already used by the i-th schedulable computing node, and C_{it} is the total CPU resource of the i-th schedulable computing node.
F_Mem is the memory load evaluation function corresponding to the scheduling sequence and b is its weight. The expression of F_Mem is:
F_Mem = (1/n)·Σ_{i=1}^{n} (m_{i1} + m_{i0}) / M_{it}
where m_{i1} is the memory resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, m_{i0} is the memory resource already used by the i-th schedulable computing node, and M_{it} is the total memory resource of the i-th schedulable computing node.
F_IO is the IO load evaluation function corresponding to the scheduling sequence and c is its weight. The expression of F_IO is:
F_IO = (1/n)·Σ_{i=1}^{n} (I_{i1} + I_{i0}) / I_{it}
where I_{i1} is the IO resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, I_{i0} is the IO resource already used by the i-th schedulable computing node, and I_{it} is the total IO resource of the i-th schedulable computing node.
F_Aff is the evaluation function of the affinity match between the resource scheduling tasks and the schedulable computing nodes under the scheduling sequence and d is its weight. The expression of F_Aff is:
F_Aff = Σ_{i=1}^{n} f_1(L_{i1} == L_{i0})
where f_1(L_{i1} == L_{i0}) is the affinity match function of the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, L_{i1} is the affinity required by that resource scheduling task, and L_{i0} is the affinity of the i-th schedulable computing node; when L_{i0} satisfies the affinity L_{i1}, f_1(L_{i1} == L_{i0}) = 0, and when L_{i0} does not satisfy the affinity L_{i1}, f_1(L_{i1} == L_{i0}) = 1.
F_Cons is the resource utilization rationality evaluation function corresponding to the scheduling sequence and e is its weight. The expression of F_Cons is:
F_Cons = Σ_{i=1}^{n} f_2(X_{i1}, X_{i0})
where f_2(X_{i1}, X_{i0}) is the resource utilization rationality function of the i-th schedulable computing node under the scheduling sequence, X_{i1} is the resource occupancy of the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, and X_{i0} is the total resource of the i-th schedulable computing node; when X_{i1} < X_{i0}, f_2(X_{i1}, X_{i0}) = 0, and when X_{i1} > X_{i0}, f_2(X_{i1}, X_{i0}) = 1.
Then, the objective function optimized by applying the simulated annealing algorithm in the embodiment of the present application is as follows:
Q(S)=min{F(S)}。
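A minimal sketch of the evaluation function F(S) defined above, assuming the Node and Task data classes sketched earlier, equal default weights, 1/n averaging of the load terms, and an interpretation of f_2 that checks the task's demand against the node's remaining headroom; all of these are illustrative assumptions.

```python
def evaluate(sequence, nodes, tasks, w=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """F(S) = a*F_cpu + b*F_mem + c*F_io + d*F_aff + e*F_cons.

    sequence[j] = index of the node assigned to task j (at most one task per node).
    """
    a, b, c, d, e = w
    n = len(nodes)
    assigned = {sequence[j]: tasks[j] for j in range(len(tasks))}  # node index -> task

    f_cpu = f_mem = f_io = f_aff = f_cons = 0.0
    for i, nd in enumerate(nodes):
        t = assigned.get(i)
        t_cpu = t.cpu if t else 0.0
        t_mem = t.mem if t else 0.0
        t_io = t.io if t else 0.0
        f_cpu += (t_cpu + nd.cpu_used) / nd.cpu_total
        f_mem += (t_mem + nd.mem_used) / nd.mem_total
        f_io += (t_io + nd.io_used) / nd.io_total
        if t is not None:
            # f_1: 0 if the node satisfies the task's affinity requirement, 1 otherwise
            f_aff += 0.0 if t.affinity <= nd.affinity else 1.0
            # f_2: 0 if the task's demand fits within the node's capacity, 1 otherwise
            fits = (t.cpu + nd.cpu_used <= nd.cpu_total and
                    t.mem + nd.mem_used <= nd.mem_total and
                    t.io + nd.io_used <= nd.io_total)
            f_cons += 0.0 if fits else 1.0
    return a * f_cpu / n + b * f_mem / n + c * f_io / n + d * f_aff + e * f_cons
```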
based on this, the initial temperature of the simulated annealing algorithm is recorded as T0The termination temperature is TeThe maximum number of iterations is L. Randomly generating an initial scheduling sequence S0={N01,N02,…,N0j,…,N0mI.e. the jth resource scheduling task of this batch is allocated to the Nth0jOn each adjustable calculation node, N0j∈N={N1,N2,…,Ni,…,Nn}. r is a simulated annealing coefficient, is a real number between 0 and 1 and is used for controlling the annealing rate, and the smaller the value of r is, the faster the simulated annealing rate is.
In operation, the number of iterations can be controlled using only the simulated annealing coefficient and the termination temperature, or additionally capped by the maximum number of iterations; that is, iteration stops once the maximum number of iterations and/or the termination temperature is reached, and the corresponding scheduling sequence is output as the (possibly sub-optimal) result scheduling sequence.
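Generating the random initial scheduling sequence S_0 can be sketched as sampling m distinct node indices; sampling without replacement enforces the constraint that each schedulable computing node receives at most one resource scheduling task. The function name is an assumption.

```python
import random


def random_initial_sequence(num_tasks, num_nodes, seed=None):
    """Return S_0: for each of the m tasks, the index of a distinct node.

    Sampling without replacement guarantees at most one task per node.
    """
    rng = random.Random(seed)
    if num_tasks > num_nodes:
        raise ValueError("a batch must not contain more tasks than nodes")
    return rng.sample(range(num_nodes), num_tasks)


print(random_initial_sequence(num_tasks=4, num_nodes=6, seed=42))
```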
On this basis, as shown in fig. 2, the optimization with the simulated annealing algorithm in step S102 to obtain the scheduling policy of the batch may specifically include:
s201: an initial scheduling sequence is generated.
S202: in the iteration calculation, a preset disturbance method is used for disturbing a previous scheduling sequence corresponding to the previous iteration times to obtain a current scheduling sequence corresponding to the current iteration times.
Specifically, the preset perturbation method may include the following operations: (1) swapping the schedulable computing nodes assigned to any two resource scheduling tasks; (2) randomly selecting two resource scheduling tasks and reversing the order of the schedulable computing nodes assigned between them; (3) randomly selecting three resource scheduling tasks whose assigned schedulable computing nodes are a, b, and c, and moving the segment of schedulable computing nodes between node a and node b to the position after node c. In each iterative calculation, one of these operations may be chosen at random to generate a new scheduling sequence, or another method may be used to generate a new scheduling sequence.
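A sketch of the three perturbation operations listed above (swap, reverse a segment, move a segment after another position), one of which is chosen at random per iteration; because each node index appears at most once in a sequence, all three moves preserve the one-task-per-node constraint. The function name and helper structure are assumptions.

```python
import random


def perturb(sequence, rng=random):
    """Generate a neighbour of the scheduling sequence by one random move."""
    s = list(sequence)
    if len(s) < 2:
        return s
    move = rng.choice(("swap", "reverse", "shift"))
    if move == "swap":                      # (1) swap the nodes of two tasks
        i, j = rng.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]
    elif move == "reverse":                 # (2) reverse the segment between two positions
        i, j = sorted(rng.sample(range(len(s)), 2))
        s[i:j + 1] = reversed(s[i:j + 1])
    else:                                   # (3) move the segment [a, b] to just after c
        if len(s) < 3:
            return s
        a, b, c_pos = sorted(rng.sample(range(len(s)), 3))
        segment = s[a:b + 1]
        rest = s[:a] + s[b + 1:]
        insert_at = c_pos - len(segment) + 1    # index just after element c once the segment is removed
        s = rest[:insert_at] + segment + rest[insert_at:]
    return s
```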
S203: judging whether the value of the evaluation function corresponding to the current scheduling sequence is smaller than the value of the evaluation function corresponding to the last scheduling sequence; if yes, go to step S204; if not, the process proceeds to step S205.
That is, if Δ = F(S_h) - F(S_{h-1}) < 0, then S_{h-1} is updated to S_h, where S_{h-1} is the previous scheduling sequence, F(S_{h-1}) is the value of the evaluation function corresponding to the previous scheduling sequence, S_h is the current scheduling sequence, and F(S_h) is the value of the evaluation function corresponding to the current scheduling sequence.
S204: and taking the current iteration sequence as the last scheduling sequence corresponding to the next iteration time.
S205: metropolis was judged.
Specifically, if Δ ≧ Δ0, judging Metropolis, and generating a random number r between 0 and 10Random (0,1), order
Figure BDA0003571022140000111
Wherein Me is a judgment result of Metropolis judgment, and T is the temperature corresponding to the current iteration number.
If the current scheduling sequence is judged by Metropolis compared with the last scheduling sequence, namely Me>r0Then, go to step S204; if the current scheduling sequence is not judged by Metropolis compared with the last scheduling sequence, namely Me is less than or equal to r0Then, the process proceeds to step S206.
S206: the previous scheduling sequence is the previous scheduling sequence corresponding to the next iteration number.
That is, when the value of the evaluation function corresponding to the current scheduling sequence is not less than the value of the evaluation function corresponding to the previous scheduling sequence and is not judged by Metropolis, the current scheduling sequence is not accepted, and iteration is performed in the next iteration or in the previous scheduling sequence.
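The acceptance rule of steps S203 to S206 can be collapsed into one helper: a strictly better sequence is always accepted, and a worse or equal one is accepted only when Me = exp(-Δ/T) exceeds the random number r_0. This is a minimal sketch; the function name is an assumption.

```python
import math
import random


def accept(delta, temperature, rng=random):
    """Return True if the current sequence should replace the previous one.

    delta = F(S_h) - F(S_{h-1}); temperature = current annealing temperature T.
    """
    if delta < 0:                         # strictly better: always accept
        return True
    me = math.exp(-delta / temperature)   # Metropolis quantity Me
    return me > rng.random()              # accept only if Me > r_0 = Random(0, 1)
```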
S207: judging whether the current temperature corresponding to the current iteration times is greater than the termination temperature of the simulated annealing algorithm or not; if yes, go to step S208; if not, the flow proceeds to step S209.
S208: after multiplying the current temperature by the annealing coefficient to obtain the temperature of the next iteration number, the process returns to step S202.
S209: stopping iterative computation and outputting a result scheduling sequence.
Specifically, it is judged whether the current temperature T is greater than the termination temperature T_e of the simulated annealing algorithm. If it is, the temperature is updated as T = r × T and the next iterative calculation is entered; otherwise, the simulated annealing algorithm ends and the result scheduling sequence is obtained.
The steps shown in fig. 2 constitute the scheduling method for the resource scheduling tasks of the current batch. After the optimization result of the current batch (i.e. the result scheduling sequence) is obtained, the result scheduling sequence may be put into actual scheduling directly as the scheduling policy of the resource scheduling tasks of the current batch, and the resource states of all schedulable computing nodes are updated; alternatively, actual scheduling may be performed uniformly after the scheduling policies of all batches of resource scheduling tasks have been obtained. If unscheduled resource scheduling tasks remain, the step of optimizing the scheduling policy of a batch with the simulated annealing algorithm is repeated until the scheduling policy of every batch is obtained.
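Putting the pieces together, a minimal sketch of the per-batch annealing loop of fig. 2, reusing the random_initial_sequence, evaluate, perturb, and accept helpers sketched above; the default values of T_0, T_e, r, and the iteration cap are illustrative, not values prescribed by the patent.

```python
import random


def anneal_batch(nodes, tasks, t0=100.0, t_end=0.1, r=0.95, max_iters=10000, seed=None):
    """Anneal one batch of tasks (steps S201-S209) and return (sequence, F value)."""
    rng = random.Random(seed)
    current = random_initial_sequence(len(tasks), len(nodes), seed=seed)   # S201
    current_f = evaluate(current, nodes, tasks)
    temperature = t0
    for _ in range(max_iters):                     # optional cap on total iterations
        candidate = perturb(current, rng)          # S202: perturb the previous sequence
        candidate_f = evaluate(candidate, nodes, tasks)
        if accept(candidate_f - current_f, temperature, rng):   # S203-S206
            current, current_f = candidate, candidate_f
        if temperature <= t_end:                   # S207/S209: stop at the termination temperature
            break
        temperature *= r                           # S208: T = r * T
    return current, current_f                      # result scheduling sequence and its F(S)
```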
EXAMPLE III
On the basis of the above embodiment, in order to ensure the reasonability of resource scheduling and further avoid scheduling failure, in step S209: after stopping iterative computation, the method for dynamically scheduling resources provided in the embodiment of the present application further includes:
if, in the result scheduling sequence corresponding to the last iteration, the resource occupancy of every schedulable computing node after it is assigned its resource scheduling task does not exceed that node's total resources, the result scheduling sequence is taken as the scheduling policy of the resource scheduling tasks of the current batch;
if, in the result scheduling sequence, the resource occupancy of at least one schedulable computing node after it is assigned its resource scheduling task exceeds that node's total resources, the number of schedulable computing nodes is reduced and the process returns to the step of batching the resource scheduling tasks in step S101, until a first result scheduling sequence for the re-batched resource scheduling tasks is obtained; at the same time, preemptive scheduling is started: a high-priority task among the resource scheduling tasks of the current batch is exchanged with a low-priority task already running on a schedulable computing node, and the process returns to the step of optimizing with the simulated annealing algorithm in step S102 to obtain a second result scheduling sequence for the resource scheduling tasks of the current batch;
the result scheduling sequence with the smaller evaluation function value among the first result scheduling sequence and the second result scheduling sequence is selected as the scheduling policy of the resource scheduling tasks of the current batch.
In a specific implementation, when the result scheduling sequence of the current batch is obtained, the match between every schedulable computing node and its resource scheduling task is further evaluated; if the result is reasonable, actual scheduling is carried out and the next batch (if any) is scheduled. Otherwise, scheduling of the current batch fails, possibly because the batch has no feasible assignment. The reasonableness criterion is that, in the result scheduling sequence, the resource occupancy of each schedulable computing node after it is assigned its resource scheduling task does not exceed that node's total resources; otherwise scheduling fails.
If scheduling fails, the two operations of reducing the number of schedulable computing nodes and re-batching (for example, removing one schedulable computing node) and of preemptive scheduling are performed in parallel, yielding a first result scheduling sequence (with evaluation function value F_{n-1}(S)) and a second result scheduling sequence (with evaluation function value F_pre(S)) respectively, and the result scheduling sequence with the smaller evaluation function value is put into actual scheduling.
It should be noted that, during preemptive scheduling, one or more high-priority tasks may be randomly selected from the resource scheduling tasks of the current batch and exchanged with low-priority tasks already running on the schedulable computing nodes, after which simulated scheduling is performed. When these two operations are executed, the reasonableness of the first result scheduling sequence and the second result scheduling sequence must likewise be ensured; otherwise batching is reconsidered, or optimization with the simulated annealing algorithm or preemptive scheduling is performed again, until a globally optimal and reasonable solution is obtained.
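The reasonableness check described in this embodiment can be sketched as follows, assuming the Node and Task data classes from earlier: a result scheduling sequence is feasible only if, on every node, the assigned task's demand plus the node's current usage stays within the node's totals. The function name is an assumption; the fallbacks of shrinking the node set or preempting low-priority tasks would be triggered when this check fails.

```python
def sequence_is_feasible(sequence, nodes, tasks):
    """True if no node's total CPU/memory/IO is exceeded by its assignment."""
    for j, node_idx in enumerate(sequence):
        nd, t = nodes[node_idx], tasks[j]
        if (t.cpu + nd.cpu_used > nd.cpu_total or
                t.mem + nd.mem_used > nd.mem_total or
                t.io + nd.io_used > nd.io_total):
            return False
    return True
```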
On the basis of the detailed embodiments corresponding to the resource dynamic scheduling method, the application also discloses a resource dynamic scheduling device, equipment and a computer readable storage medium corresponding to the method.
Example four
Fig. 3 is a schematic structural diagram of a device for dynamically scheduling resources according to an embodiment of the present application.
As shown in fig. 3, the apparatus for dynamically scheduling resources provided in the embodiment of the present application includes:
a batching unit 301, configured to, when the number of resource scheduling tasks of a user is greater than the number of schedulable computing nodes, batch the resource scheduling tasks so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes;
an optimizing unit 302, configured to, starting from the first batch of resource scheduling tasks, apply the simulated annealing algorithm to optimize with the objective that, after the resource scheduling tasks of a batch are placed onto the schedulable computing nodes for scheduling, the global resource utilization rate of the schedulable computing nodes of that batch is the lowest, so as to obtain the scheduling policy of that batch, until the scheduling policy of every batch is obtained;
a scheduling unit 303, configured to perform scheduling of each resource scheduling task according to a scheduling policy of each batch;
in one scheduling strategy, each schedulable computing node is allocated with at most one resource scheduling task.
Further, the device for dynamically scheduling resources provided in the embodiment of the present application further includes:
and the sequencing unit is used for selecting the schedulable computing node with smaller node load to distribute the resource scheduling task for sequencing each schedulable computing node according to the node load condition when the number of the resource scheduling tasks is less than the number of the schedulable computing nodes.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
EXAMPLE five
Fig. 4 is a schematic structural diagram of a device for dynamically scheduling resources according to an embodiment of the present application.
As shown in fig. 4, the device for dynamically scheduling resources provided in the embodiment of the present application includes:
a memory 410 for storing a computer program 411;
a processor 420 for executing a computer program 411, wherein the computer program 411 when executed by the processor 420 implements the steps of the method for dynamically scheduling resources according to any of the embodiments described above.
Processor 420 may include one or more processing cores, such as a 3-core processor or an 8-core processor. The processor 420 may be implemented in at least one hardware form of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA). Processor 420 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 420 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 420 may also include an Artificial Intelligence (AI) processor for handling computational operations related to machine learning.
Memory 410 may include one or more computer-readable storage media, which may be non-transitory. Memory 410 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In this embodiment, the memory 410 is at least used for storing the following computer program 411, wherein after the computer program 411 is loaded and executed by the processor 420, the relevant steps in the resource dynamic scheduling method disclosed in any of the foregoing embodiments can be implemented. In addition, the resources stored by the memory 410 may also include an operating system 412, data 413, and the like, and the storage may be transient storage or permanent storage. Operating system 412 may be Windows, among others. The data 413 may include, but is not limited to, data involved in the above-described methods.
In some embodiments, the dynamic resource scheduling device may further include a display 430, a power supply 440, a communication interface 450, an input/output interface 460, a sensor 470, and a communication bus 480.
Those skilled in the art will appreciate that the architecture shown in fig. 4 does not constitute a limitation of the resource dynamic scheduling apparatus and may include more or fewer components than those shown.
The resource dynamic scheduling device provided by the embodiment of the application comprises a memory and a processor, and when the processor executes a program stored in the memory, the resource dynamic scheduling method can be realized, and the effect is the same as that of the resource dynamic scheduling method.
EXAMPLE six
It should be noted that the above-described embodiments of the apparatus and device are merely illustrative, for example, the division of modules is only one division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or modules, and may be in an electrical, mechanical or other form. Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and executes all or part of the steps of the methods described in the embodiments of the present application, or all or part of the technical solutions.
To this end, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the resource dynamic scheduling method.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory ROM (Read-Only Memory), a random Access Memory ram (random Access Memory), a magnetic disk, or an optical disk.
The computer program contained in the computer-readable storage medium provided in this embodiment can implement the steps of the resource dynamic scheduling method described above when being executed by a processor, and the effect is the same as above.
The foregoing describes a method, an apparatus, a device, and a computer-readable storage medium for dynamically scheduling resources provided in the present application in detail. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device, the apparatus and the computer-readable storage medium disclosed in the embodiments correspond to the method disclosed in the embodiments, so that the description is simple, and the relevant points can be referred to the description of the method. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method for dynamically scheduling resources, comprising:
when the number of resource scheduling tasks of a user is greater than the number of schedulable computing nodes, the resource scheduling tasks are batched, so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes;
starting from the first batch of the resource scheduling tasks, applying a simulated annealing algorithm to optimize to obtain a scheduling strategy of the batch by taking the goal of achieving the lowest global resource utilization rate of each schedulable computing node of the batch after the resource scheduling tasks of the batch are put into the schedulable computing nodes for scheduling until the scheduling strategy of each batch is obtained;
executing the dispatching of each resource dispatching task according to the dispatching strategy of each batch;
in one scheduling policy, each schedulable computing node allocates at most one resource scheduling task.
2. The method according to claim 1, wherein the batching the resource scheduling tasks such that the number of the resource scheduling tasks of each batch is not greater than the number of the schedulable computing nodes comprises:
analyzing and obtaining the dependency relationship among the resource scheduling tasks according to the parameter information of the resource scheduling tasks, and sequencing the resource scheduling tasks according to the priority;
and according to the ordering of the resource scheduling tasks, batching the resource scheduling tasks while ensuring both that resource scheduling tasks with dependency relationships are assigned to different batches and that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes.
3. The method for dynamically scheduling resources according to claim 1, wherein batching the resource scheduling tasks so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes specifically comprises:
taking the remainder of dividing the number of resource scheduling tasks by the number of schedulable computing nodes as the number of resource scheduling tasks in the last batch, the number of resource scheduling tasks in each of the remaining batches being equal to the number of schedulable computing nodes.
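A minimal, runnable Python sketch of the remainder-based batching in claim 3; the helper name batch_by_node_count is an assumption for illustration only.

def batch_by_node_count(tasks, num_nodes):
    """Split tasks into consecutive batches of at most num_nodes tasks: every
    batch is full except possibly the last, which holds the remainder."""
    return [tasks[i:i + num_nodes] for i in range(0, len(tasks), num_nodes)]

# Example: 7 tasks over 3 schedulable nodes -> batches of 3, 3 and 1 (7 mod 3 = 1).
print(batch_by_node_count(list("ABCDEFG"), 3))
# -> [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]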
4. The method according to claim 1, wherein the objective function of the simulated annealing algorithm is:
Q(S) = min{F(S)};
wherein:
F(S) = a·F_CPU + b·F_Mem + c·F_IO + d·F_Aff + e·F_Cons;
[the defining formulas for F_CPU, F_Mem, F_IO, F_Aff, and F_Cons are filed as equation images FDA0003571022130000021 through FDA0003571022130000025; the variables they use are defined below]
Q(S) is the objective function; F(S) is the evaluation function of a scheduling sequence S generated in the simulated annealing algorithm; F_CPU is the CPU load evaluation function corresponding to the scheduling sequence and a is its weight; F_Mem is the memory load evaluation function corresponding to the scheduling sequence and b is its weight; F_IO is the IO load evaluation function corresponding to the scheduling sequence and c is its weight; F_Aff is the evaluation function of the affinity match between the resource scheduling tasks and the schedulable computing nodes under the scheduling sequence and d is its weight; F_Cons is the resource utilization rationality evaluation function corresponding to the scheduling sequence and e is its weight;
n is the number of schedulable computing nodes; c_i1 is the CPU resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence; c_i0 is the CPU resource already used by the i-th schedulable computing node; C_it is the total CPU resource of the i-th schedulable computing node;
m_i1 is the memory resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence; m_i0 is the memory resource already used by the i-th schedulable computing node; M_it is the total memory resource of the i-th schedulable computing node;
I_i1 is the IO resource occupied by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence; I_i0 is the IO resource already used by the i-th schedulable computing node; I_it is the total IO resource of the i-th schedulable computing node;
f1(L_i1 == L_i0) is the affinity match function of the resource scheduling task assigned to the schedulable computing node under the scheduling sequence; L_i1 is the affinity required by the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, and L_i0 is the affinity of the i-th schedulable computing node; when L_i0 satisfies the affinity L_i1, f1(L_i1 == L_i0) = 0, and when L_i0 does not satisfy the affinity L_i1, f1(L_i1 == L_i0) = 1;
f2(X_i1, X_i0) is the resource utilization rationality function of the i-th schedulable computing node under the scheduling sequence; X_i1 is the resource occupancy of the resource scheduling task assigned to the i-th schedulable computing node under the scheduling sequence, and X_i0 is the total resource amount of the i-th schedulable computing node; when X_i1 < X_i0, f2(X_i1, X_i0) = 0, and when X_i1 > X_i0, f2(X_i1, X_i0) = 1.
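As a concrete illustration of how such an evaluation function might be computed, the Python sketch below sums per-node utilization ratios and counts affinity and capacity violations. Because the component formulas are filed as images, the exact forms used here, together with the example weights, the field names (cpu_used, cpu_total, label, and so on), and the function name evaluate, are assumptions consistent with the variable definitions above rather than the claimed formulas.

def evaluate(seq, nodes, weights=(0.3, 0.3, 0.2, 0.1, 0.1)):
    """F(S) = a*F_CPU + b*F_Mem + c*F_IO + d*F_Aff + e*F_Cons for one scheduling
    sequence S. seq[i] is the task placed on the i-th node (or None); tasks and
    nodes are plain dicts. The per-node ratio forms are assumptions made from
    the variable definitions in claim 4, not the claimed image formulas."""
    a, b, c, d, e = weights
    f_cpu = f_mem = f_io = f_aff = f_cons = 0.0
    for node, task in zip(nodes, seq):
        t = task or {"cpu": 0.0, "mem": 0.0, "io": 0.0, "affinity": None}
        f_cpu += (t["cpu"] + node["cpu_used"]) / node["cpu_total"]   # (c_i1 + c_i0) / C_it
        f_mem += (t["mem"] + node["mem_used"]) / node["mem_total"]   # (m_i1 + m_i0) / M_it
        f_io += (t["io"] + node["io_used"]) / node["io_total"]       # (I_i1 + I_i0) / I_it
        # f1: 0 when the node satisfies the task's affinity, 1 when it does not
        f_aff += 0.0 if t["affinity"] in (None, node.get("label")) else 1.0
        # f2: 0 when the task's occupancy stays below the node's totals, 1 otherwise
        over = any(t[k] > node[k + "_total"] for k in ("cpu", "mem", "io"))
        f_cons += 1.0 if over else 0.0
    return a * f_cpu + b * f_mem + c * f_io + d * f_aff + e * f_cons

A lower F(S) corresponds to a more evenly loaded, affinity-respecting, capacity-respecting placement, which is why the objective Q(S) minimizes it.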
5. The method for dynamically scheduling resources according to claim 4, wherein applying the simulated annealing algorithm to optimize the scheduling policy of the batch specifically comprises:
starting from an initial scheduling sequence, in each iteration, perturbing the previous scheduling sequence corresponding to the previous iteration with a preset perturbation method to obtain the current scheduling sequence corresponding to the current iteration;
if the value of the evaluation function for the current scheduling sequence is smaller than the value of the evaluation function for the previous scheduling sequence, taking the current scheduling sequence as the previous scheduling sequence for the next iteration;
if the value of the evaluation function for the current scheduling sequence is greater than or equal to the value of the evaluation function for the previous scheduling sequence, performing a Metropolis acceptance test;
if the current scheduling sequence passes the Metropolis acceptance test relative to the previous scheduling sequence, taking the current scheduling sequence as the previous scheduling sequence for the next iteration;
if the current scheduling sequence does not pass the Metropolis acceptance test relative to the previous scheduling sequence, keeping the previous scheduling sequence as the previous scheduling sequence for the next iteration;
if the current temperature corresponding to the current iteration is greater than the termination temperature of the simulated annealing algorithm, multiplying the current temperature by an annealing coefficient to obtain the temperature for the next iteration, and returning to the step of perturbing the previous scheduling sequence with the preset perturbation method to obtain the current scheduling sequence; and
if the current temperature corresponding to the current iteration is less than or equal to the termination temperature, stopping the iterative computation.
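The iteration in claim 5 can be sketched as the following Python loop; the swap perturbation, the parameter values t0, t_end, and alpha, and the reuse of an evaluate function like the one sketched under claim 4 are illustrative assumptions rather than the claimed preset perturbation method or parameters.

import math
import random

def anneal(initial_seq, nodes, evaluate, t0=100.0, t_end=0.1, alpha=0.95):
    """One run of the claim 5 iteration: perturb the previous sequence, compare
    evaluation values, apply the Metropolis test for non-improving sequences,
    cool by the annealing coefficient, and stop at the termination temperature."""
    prev = list(initial_seq)
    prev_val = evaluate(prev, nodes)
    t = t0
    while t > t_end:
        cur = prev[:]                               # perturb the previous scheduling sequence
        i, j = random.sample(range(len(cur)), 2)    # illustrative perturbation: swap two positions
        cur[i], cur[j] = cur[j], cur[i]
        cur_val = evaluate(cur, nodes)
        if cur_val < prev_val:                      # strictly better: keep the current sequence
            prev, prev_val = cur, cur_val
        elif random.random() < math.exp((prev_val - cur_val) / t):
            prev, prev_val = cur, cur_val           # Metropolis: occasionally keep a worse sequence
        t *= alpha                                  # temperature for the next iteration
    return prev, prev_val                           # result scheduling sequence of the last iteration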
6. The method for dynamically scheduling resources according to claim 5, further comprising, after stopping the iterative computation:
if, in the result scheduling sequence corresponding to the last iteration, the resource occupancy of every schedulable computing node after it is assigned its corresponding resource scheduling task does not exceed the total resource amount of that schedulable computing node, taking the result scheduling sequence as the scheduling policy for the resource scheduling tasks of the batch;
if, in the result scheduling sequence, the resource occupancy of at least one schedulable computing node after it is assigned its corresponding resource scheduling task exceeds the total resource amount of that schedulable computing node, reducing the number of schedulable computing nodes and returning to the step of batching the resource scheduling tasks, until a first result scheduling sequence of the re-batched resource scheduling tasks of the batch is obtained; and, at the same time, starting preemptive scheduling, exchanging a high-priority task among the resource scheduling tasks of the batch with a low-priority task already running on a schedulable computing node, and returning to the step of applying the simulated annealing algorithm to optimize the scheduling policy of the batch, to obtain a second result scheduling sequence of the resource scheduling tasks of the batch; and
selecting, as the scheduling policy for the resource scheduling tasks of the batch, whichever of the first result scheduling sequence and the second result scheduling sequence has the smaller evaluation function value.
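A small Python helper, using the same assumed node and task dictionaries as the earlier sketches, shows the capacity test that claim 6 applies to the result scheduling sequence; the two fallback paths (re-batching with fewer nodes and preemptive swapping) are only noted in comments.

def sequence_is_feasible(seq, nodes):
    """Claim 6's acceptance test: every node's occupancy after receiving its
    assigned task must stay within that node's totals. Field names are assumed."""
    for node, task in zip(nodes, seq):
        if task is None:
            continue
        for k in ("cpu", "mem", "io"):
            if task[k] + node[k + "_used"] > node[k + "_total"]:
                return False    # over capacity: claim 6 falls back to re-batching
    return True                 # and/or preemptive swapping, keeping the better result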
7. The method for dynamically scheduling resources according to claim 1, further comprising:
when the number of resource scheduling tasks is smaller than the number of schedulable computing nodes, ordering the schedulable computing nodes by node load and assigning the resource scheduling tasks to the schedulable computing nodes with the lower loads.
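For the case in claim 7, where there are fewer tasks than schedulable computing nodes, a minimal Python sketch might order the nodes by an assumed load metric and hand one task to each of the least-loaded nodes; the function name, the load lambda, and the field names are assumptions.

def assign_to_least_loaded(tasks, nodes, load=lambda n: n["cpu_used"] / n["cpu_total"]):
    """Sort schedulable nodes by an assumed CPU-utilization metric and pair each
    task with one of the least-loaded nodes (at most one task per node)."""
    ranked = sorted(nodes, key=load)
    return list(zip(tasks, ranked))   # (task, node) pairs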
8. An apparatus for dynamically scheduling resources, comprising:
a batching unit, configured to batch the resource scheduling tasks when the number of resource scheduling tasks of a user is greater than the number of schedulable computing nodes, so that the number of resource scheduling tasks in each batch is not greater than the number of schedulable computing nodes;
an optimizing unit, configured to, starting from the first batch of resource scheduling tasks, apply a simulated annealing algorithm to optimize and obtain the scheduling policy of the batch, with the objective that the global resource utilization of the schedulable computing nodes of the batch is lowest after the resource scheduling tasks of the batch are placed on the schedulable computing nodes for scheduling, until the scheduling policy of every batch is obtained; and
a scheduling unit, configured to execute the scheduling of each resource scheduling task according to the scheduling policy of each batch;
wherein, in one scheduling policy, each schedulable computing node is allocated at most one resource scheduling task.
9. A device for dynamically scheduling resources, comprising:
a memory for storing a computer program;
a processor for executing the computer program, wherein the computer program, when executed by the processor, implements the steps of the method for dynamically scheduling resources according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method for dynamically scheduling resources according to any one of claims 1 to 7.
CN202210319224.3A 2022-03-29 2022-03-29 Resource dynamic scheduling method, device, equipment and computer readable storage medium Pending CN114675953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210319224.3A CN114675953A (en) 2022-03-29 2022-03-29 Resource dynamic scheduling method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210319224.3A CN114675953A (en) 2022-03-29 2022-03-29 Resource dynamic scheduling method, device, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114675953A (en) 2022-06-28

Family

ID=82077065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210319224.3A Pending CN114675953A (en) 2022-03-29 2022-03-29 Resource dynamic scheduling method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114675953A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115242662A (en) * 2022-09-22 2022-10-25 音信云(武汉)信息技术有限公司 Data resource allocation method and device based on cloud computing
CN115242662B (en) * 2022-09-22 2023-02-17 音信云(武汉)信息技术有限公司 Data resource allocation method and device based on cloud computing
CN117857465A (en) * 2024-03-07 2024-04-09 腾讯科技(深圳)有限公司 Data processing method, device, equipment, storage medium and program product
CN117857465B (en) * 2024-03-07 2024-05-14 腾讯科技(深圳)有限公司 Data processing method, device, equipment, storage medium and program product

Similar Documents

Publication Publication Date Title
Mapetu et al. Low-time complexity and low-cost binary particle swarm optimization algorithm for task scheduling and load balancing in cloud computing
Kaur et al. Load balancing optimization based on hybrid Heuristic-Metaheuristic techniques in cloud environment
Ding et al. Q-learning based dynamic task scheduling for energy-efficient cloud computing
Elmougy et al. A novel hybrid of Shortest job first and round Robin with dynamic variable quantum time task scheduling technique
Masdari et al. A survey of PSO-based scheduling algorithms in cloud computing
Polo et al. Performance-driven task co-scheduling for mapreduce environments
Alkayal et al. Efficient task scheduling multi-objective particle swarm optimization in cloud computing
Rekha et al. Efficient task allocation approach using genetic algorithm for cloud environment
Zhu et al. Scheduling stochastic multi-stage jobs to elastic hybrid cloud resources
CN110737529A (en) cluster scheduling adaptive configuration method for short-time multiple variable-size data jobs
Patel et al. Priority based job scheduling techniques in cloud computing: a systematic review
Asghari et al. Online scheduling of dependent tasks of cloud’s workflows to enhance resource utilization and reduce the makespan using multiple reinforcement learning-based agents
Aladwani Types of task scheduling algorithms in cloud computing environment
US11816509B2 (en) Workload placement for virtual GPU enabled systems
CN114675953A (en) Resource dynamic scheduling method, device, equipment and computer readable storage medium
Mohammadzadeh et al. Scientific workflow scheduling in multi-cloud computing using a hybrid multi-objective optimization algorithm
Malik et al. Comparison of task scheduling algorithms in cloud environment
Dhinesh Babu et al. A decision-based pre-emptive fair scheduling strategy to process cloud computing work-flows for sustainable enterprise management
Roy et al. Development and analysis of a three phase cloudlet allocation algorithm
Li et al. An effective scheduling strategy based on hypergraph partition in geographically distributed datacenters
Deng et al. A data and task co-scheduling algorithm for scientific cloud workflows
Zhou et al. Concurrent workflow budget-and deadline-constrained scheduling in heterogeneous distributed environments
Peng et al. A reinforcement learning-based mixed job scheduler scheme for cloud computing under SLA constraint
Kaur et al. Deadline constrained scheduling of scientific workflows on cloud using hybrid genetic algorithm
Amalarethinam et al. Customer facilitated cost-based scheduling (CFCSC) in cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination