CN115509715A - Distributed task scheduling method and device and electronic equipment - Google Patents

Distributed task scheduling method and device and electronic equipment

Info

Publication number
CN115509715A
Authority
CN
China
Prior art keywords
particle
current
target
task
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211241630.9A
Other languages
Chinese (zh)
Inventor
陈超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN202211241630.9A
Publication of CN115509715A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Abstract

The invention discloses a distributed task scheduling method, a distributed task scheduling device and electronic equipment. The method comprises the following steps: acquiring the number of tasks to be currently scheduled and the total number of node servers corresponding to the distributed node servers; constructing a task scheduling vector to be optimized based on the number of tasks; based on a discrete particle swarm optimization mode, a task resource constraint condition and the total number of node servers, performing global position iterative optimization with the maximum load balance degree as the optimization target and the task scheduling vector as the particle position, to obtain an iterated target global optimal position; and determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position. By the technical scheme, the problem of discretized task scheduling optimization can be solved, and the accuracy of task scheduling is ensured.

Description

Distributed task scheduling method and device and electronic equipment
Technical Field
The invention relates to the technical field of computers, in particular to a distributed task scheduling method and device and electronic equipment.
Background
In recent years, with the increasing maturity of cloud computing technology, more and more application tasks are deployed to node servers in various ways. In order to fully utilize the computing resources of the node servers and save the computing cost, the application tasks need to be reasonably scheduled, so that the load of the node servers is balanced.
At present, a distributed system in an open network environment often adopts a loosely coupled Service-Oriented Architecture (SOA) to divide a complex application task into different services according to functional characteristics; the services are coarse-grained, invocable software entities with high availability, extensibility and reusability. Within a node server unit, as system services and users continue to grow, the number of system tasks increases, and a distributed task processing architecture can improve concurrency, scalability and fault tolerance so as to schedule continuous tasks. However, the existing scheduling modes can only schedule continuous tasks; when discretized tasks are scheduled, uneven load leads to wasted resources. The invention therefore provides a reasonable scheduling mode for discretized tasks.
Disclosure of Invention
The invention provides a distributed task scheduling method, a distributed task scheduling device and electronic equipment, which are used for solving the problem of discretized task scheduling optimization and ensuring the accuracy of task scheduling.
According to an aspect of the present invention, there is provided a distributed task scheduling method, including:
acquiring the number of tasks to be scheduled currently and the total number of node servers corresponding to the distributed node servers;
based on the task quantity, constructing a task scheduling vector to be optimized, wherein elements in the task scheduling vector correspond to the tasks one to one, and each element refers to a discrete server number corresponding to a node server to which the corresponding task is allocated;
based on a discrete particle swarm optimization mode, a task resource constraint condition and the total number of the node servers, taking the maximum load balance degree as the optimization target, and performing global position iterative optimization by taking the task scheduling vector as the particle position to obtain an iterated target global optimal position, wherein the discrete particle swarm optimization mode updates the particle position and the particle speed based on the selection operation, cross operation and variation operation in the genetic algorithm;
and determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
According to another aspect of the present invention, there is provided a distributed task scheduling apparatus, including:
the quantity acquisition module is used for acquiring the quantity of the tasks to be scheduled currently and the total quantity of the node servers corresponding to the distributed node servers;
the scheduling vector construction module is used for constructing task scheduling vectors to be optimized based on the task quantity, wherein elements in the task scheduling vectors correspond to the tasks one to one, and each element refers to a discrete server number corresponding to a node server to which the corresponding task is allocated;
the global optimal position obtaining module is used for carrying out global position iterative optimization by taking the maximum load balance degree as an optimization target and taking the task scheduling vector as a particle position on the basis of a discrete particle swarm optimization mode, a task resource constraint condition and the total number of the node servers, so as to obtain an iterated target global optimal position, wherein the discrete particle swarm optimization mode is used for updating the particle position and the particle speed on the basis of selection operation, cross operation and variation operation in a genetic algorithm;
and the scheduling strategy module is used for determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the distributed task scheduling method of any of the embodiments of the invention.
According to the technical scheme of the embodiment of the invention, the number of tasks to be currently scheduled and the total number of node servers corresponding to the distributed node servers are obtained; a task scheduling vector to be optimized is constructed based on the number of tasks, so that the discrete tasks are encoded as a task scheduling vector; based on an improved discrete particle swarm optimization mode, the task resource constraint condition and the total number of node servers, global position iterative optimization is performed with the maximum load balance degree as the optimization target and the task scheduling vector as the particle position, and an iterated target global optimal position is obtained; and a target scheduling strategy is determined based on the target task scheduling vector corresponding to the target global optimal position. In this way, the task scheduling vector is iterated and the iterated target global optimal position is obtained, the target scheduling strategy is determined through the target global optimal position, the problem of discretized task scheduling optimization is solved, and the accuracy of task scheduling is ensured.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a distributed task scheduling method according to an embodiment of the present invention;
Fig. 2 is a flowchart of another distributed task scheduling method according to a second embodiment of the present invention;
Fig. 3 is a flowchart of another distributed task scheduling method according to a third embodiment of the present invention;
Fig. 4 is a flowchart of another distributed task scheduling method according to a fourth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a distributed task scheduling apparatus according to a fifth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device implementing the distributed task scheduling method according to the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of a distributed task scheduling method according to an embodiment of the present invention, where the method is applicable to a case of reasonably scheduling a discretized task, and the method may be executed by a distributed task scheduling device, where the distributed task scheduling device may be implemented in a form of hardware and/or software, and the distributed task scheduling device may be configured in an electronic device. As shown in fig. 1, the method includes:
and S110, acquiring the number of the tasks to be scheduled and the total number of the node servers corresponding to the distributed node servers.
A task may be a task that needs to occupy the CPU resources of a node server to perform computation, and whose computation data and the storage of that data need to occupy the memory resources of the node server. A node server may refer to a server in a distributed server set that is used for executing the assigned tasks.
Specifically, the scheduling server for executing the scheduling task may receive all tasks that currently need to be scheduled and are sent from upstream, determine the number of tasks to be currently scheduled, and obtain the total number of node servers in the distributed set.
And S120, constructing a task scheduling vector to be optimized based on the number of the tasks.
The task scheduling vector indicates the node server to which each task is allocated. For example, the task scheduling vector may be D = {s_1, s_2, ..., s_m}. Elements in the task scheduling vector correspond to the tasks one to one, and each element is the discrete server number of the node server to which the corresponding task is allocated; for example, the server number s_j takes a value in {1, 2, ..., n}. The elements in the task scheduling vector can only take values from the discrete server numbers corresponding to the node servers.
Specifically, the scheduling server may construct a task scheduling vector to be optimized based on the number of tasks to be currently scheduled. For example, there are 4 tasks to be scheduled, i.e., m =4, task 1, task 2, task 3, and task 4, respectively; there are 3 node servers, i.e. n =3. Based on the number of tasks, a task scheduling vector D = {1,3,1,2} to be optimized may be constructed, where the task scheduling vector indicates that task 1 is allocated to node server 1, task 2 is allocated to node server 3, task 3 is allocated to node server 1, and task 4 is allocated to node server 2.
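As a purely illustrative sketch (the function and variable names are assumptions and do not appear in the patent), such a task scheduling vector could be generated in Python as follows:

    import random

    def random_schedule_vector(num_tasks: int, num_servers: int) -> list[int]:
        """Build a task scheduling vector: element j is the server number
        (1..num_servers) of the node server to which task j is allocated."""
        return [random.randint(1, num_servers) for _ in range(num_tasks)]

    # Example: 4 tasks and 3 node servers, e.g. D = [1, 3, 1, 2]
    D = random_schedule_vector(num_tasks=4, num_servers=3)
    print(D)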
S130, based on the discrete particle swarm optimization mode, the task resource constraint condition and the total number of the node servers, the maximum load balance degree is taken as an optimization target, the task scheduling vector is taken as a particle position to carry out global position iterative optimization, and an iterated target global optimal position is obtained.
The discrete particle swarm optimization mode is an optimization computation technique that simulates the process of a bird flock cooperatively searching for food. The discrete particle swarm optimization mode updates the particle positions and particle velocities based on the selection operation, cross operation and variation operation in the genetic algorithm, so that the discrete particle swarm optimization mode can be used to solve the discrete task scheduling optimization problem. The task resource constraint condition may refer to a condition for constraining the tasks a node server receives based on the resource bearing capacity of the node server; for example, the task resource constraint condition may be a resource constraint such as an upper limit on the CPU resources of the node server or an upper limit on the memory resources of the node server. The load balance degree can be characterized by means of the task resource constraint conditions. The target global optimal position may refer to the global position at which the load balance degree reaches the optimization target after iteration.
Specifically, a load balancing calculation model can be constructed through task resource constraint conditions and the total number of node servers, constructed task scheduling vectors to be optimized are input into a discrete particle swarm optimization model as particle positions to carry out global position iterative optimization until the maximum load balancing degree is reached, namely, an optimization target is reached, and an iterated target global optimal position is obtained.
S140, determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
The target task scheduling vector may refer to an optimal task scheduling vector determined based on the target global optimal position. For example, the optimal task scheduling vector may be D' = {1,2,1,1}. The target scheduling policy may refer to an optimal scheduling policy obtained by parsing a target task scheduling vector.
Specifically, a corresponding target task scheduling vector, such as D' = {1,2,1,1}, may be determined based on the obtained target global optimal position, and a target scheduling policy is determined, that is, an optimal allocation manner of 4 tasks is to allocate task 1 to node server 1, task 2 to node server 2, task 3 to node server 1, and task 4 to node server 1.
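As an illustrative sketch of this decoding step (the names and data structures are assumptions, not part of the patent), the target task scheduling vector can be grouped into a per-server assignment as follows:

    from collections import defaultdict

    def decode_schedule(vector: list[int]) -> dict[int, list[int]]:
        """Group task indices (1-based) by the server number they are assigned to."""
        policy: dict[int, list[int]] = defaultdict(list)
        for task_idx, server_no in enumerate(vector, start=1):
            policy[server_no].append(task_idx)
        return dict(policy)

    # D' = {1, 2, 1, 1}: tasks 1, 3 and 4 go to node server 1, task 2 to node server 2
    print(decode_schedule([1, 2, 1, 1]))  # {1: [1, 3, 4], 2: [2]}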
According to the technical scheme of the embodiment of the invention, the number of tasks to be currently scheduled and the total number of node servers corresponding to the distributed node servers are obtained; a task scheduling vector to be optimized is constructed based on the number of tasks, so that the discrete tasks are encoded as a task scheduling vector; based on an improved discrete particle swarm optimization mode, the task resource constraint condition and the total number of node servers, global position iterative optimization is performed with the maximum load balance degree as the optimization target and the task scheduling vector as the particle position, and an iterated target global optimal position is obtained; and a target scheduling strategy is determined based on the target task scheduling vector corresponding to the target global optimal position. In this way, the task scheduling vector is iterated and the iterated target global optimal position is obtained, the target scheduling strategy is determined through the target global optimal position, the problem of discretized task scheduling optimization is solved, and the accuracy of task scheduling is ensured.
Example two
Fig. 2 is a flowchart of a distributed task scheduling method according to a second embodiment of the present invention, and this embodiment describes in detail a process of performing iterative optimization on a global position and obtaining an iterative target global optimal position based on the above embodiment. Wherein explanations of the same or corresponding terms as those of the above-disclosed embodiments are omitted.
As shown in fig. 2, the method includes:
s210, acquiring the number of the tasks to be scheduled and the total number of the node servers corresponding to the distributed node servers.
And S220, constructing a task scheduling vector to be optimized based on the number of the tasks.
The elements in the task scheduling vector correspond to the tasks one to one, and each element refers to a discrete server number corresponding to the node server to which the corresponding task is allocated.
And S230, determining the maximum value of the server number based on the total number of the node servers.
The maximum value of the server numbers can be less than or equal to the total number of the node servers.
Specifically, the scheduling server may determine the maximum value of the server number based on the total number of node servers; for example, if the total number of node servers is 3, the maximum value of the server number is 3, and the server numbers may be s_j ∈ {1, 2, 3}.
S240, initializing the particle position and the particle speed corresponding to each particle in the particle swarm.
An initialized particle swarm is randomly generated in the particle swarm optimization mode. Each particle has two attributes: a particle position and a particle velocity. Each particle may represent one candidate solution and corresponds to one scheduling strategy. The components of each particle's position correspond one to one to the elements of the task scheduling vector, and the particle position is characterized using the task scheduling vector.
Specifically, based on the discrete particle swarm optimization mode, the task resource constraint condition and the total number of node servers, with the maximum load balance degree as the optimization target, before the first global position iterative optimization is performed with the task scheduling vector as the particle position, the particle position and the particle velocity corresponding to each particle in the particle swarm can be initialized. For example, the initialized particle position may be denoted X_i^(g) and the initialized particle velocity may be denoted V_i^(g), where i indexes the i-th particle and g denotes the iteration generation; g = 0 for the initialized particle position and particle velocity.
It should be noted that the predicted number of tasks, that is, the dimension of the task scheduling vector, may be obtained by applying an Auto-Regressive Moving Average (ARMA) model to the smoothed task data, and the prediction result is initialized to obtain the task set.
And S250, determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
The current particle position may refer to the particle position corresponding to each particle at the current iteration, and the current load balance degree may refer to the load balance degree corresponding to each particle at the current iteration.
Specifically, for each particle, the current load balance degree corresponding to the particle may be determined based on the current particle position corresponding to the particle at the current iteration.
Illustratively, S250 may include: determining at least one allocation task corresponding to each target node server for executing the tasks based on the current particle position corresponding to each particle; determining the average node resource utilization rate corresponding to each target node server according to the total execution time of the distributed tasks corresponding to each target node server and the execution time, CPU (Central Processing Unit) resource and memory resource required by the execution of each distributed task; determining the average unit resource utilization rate based on the average node resource utilization rates; and determining the current load balance degree corresponding to each particle based on the average node resource utilization rate and the average unit resource utilization rate.
The total execution time may be the time required from the beginning of execution to the completion of execution of the tasks distributed to a node server. For example, the total execution time of the tasks distributed to the i-th node server is T, and t_ci and t_mi are respectively the CPU resource occupancy rate and the memory resource occupancy rate of a task allocated to the i-th node server. The average node resource utilization rate, denoted avg_i, can be determined based on the average CPU utilization rate and the average memory utilization rate, where the average CPU utilization rate of the i-th node server is avg_ci and the average memory utilization rate is avg_mi.
Specifically, for each particle, at least one allocation task corresponding to each target node server for executing the tasks may be determined based on the current particle position corresponding to the particle. For each target node server, the average CPU resource utilization rate corresponding to the target node server is determined according to the total execution time of the distributed tasks corresponding to the target node server and the execution time and CPU resource required by the execution of each distributed task; the average memory resource utilization rate corresponding to the target node server is determined according to the total execution time of the distributed tasks and the execution time and memory resource required by the execution of each distributed task; and the average CPU resource utilization rate and the average memory resource utilization rate are averaged to obtain the average node resource utilization rate corresponding to the target node server. For example, based on the total execution time T of the distributed tasks corresponding to a target node server, the execution time T_si of each distributed task, and the CPU resource t_ci * T_si and memory resource t_mi * T_si required by its execution, the average CPU utilization rate of the i-th node server may be determined as
avg_ci = Σ(t_ci * T_si) / T,
the average memory utilization rate of the i-th node server may be determined as
avg_mi = Σ(t_mi * T_si) / T,
and the average node resource utilization rate corresponding to the i-th node server may be determined as
avg_i = (avg_ci + avg_mi) / 2,
where each sum runs over the tasks distributed to the i-th node server. If there are n node servers in the server set, the average unit resource utilization rate may be determined from the average node resource utilization rates as
avg = (avg_1 + avg_2 + ... + avg_n) / n.
Based on the determined average node resource utilization rates and the average unit resource utilization rate, the current load balance degree LD corresponding to the particle is determined from the deviations of the average node resource utilization rates avg_i from the average unit resource utilization rate avg. The larger the current load balance degree LD value is, the higher the load balance degree of the server unit is, and the more reasonable the scheduling strategy is.
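For illustration, the utilization bookkeeping described above can be sketched in Python as follows. The per-node averages follow the time-weighted form implied by the text; the final LD value is computed here from the spread of the per-node utilizations around the unit average, which is only an assumed stand-in for the patent's exact formula and merely preserves the property that a larger LD indicates a more balanced server unit.

    import math

    def load_balance_degree(nodes: list[list[tuple[float, float, float]]]) -> float:
        """nodes[i] lists the tasks of node i as (exec_time, cpu_occupancy, mem_occupancy).
        Returns an LD value where larger means better balanced (assumed form)."""
        node_avgs = []
        for tasks in nodes:
            total_time = sum(t for t, _, _ in tasks) or 1.0         # T (assumed: sum of task times)
            avg_cpu = sum(t * c for t, c, _ in tasks) / total_time  # avg_ci
            avg_mem = sum(t * m for t, _, m in tasks) / total_time  # avg_mi
            node_avgs.append((avg_cpu + avg_mem) / 2.0)             # avg_i
        unit_avg = sum(node_avgs) / len(node_avgs)                  # avg
        spread = math.sqrt(sum((a - unit_avg) ** 2 for a in node_avgs) / len(node_avgs))
        return 1.0 / (1.0 + spread)  # assumed LD: larger when the avg_i are closer to avg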
S260, updating the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm based on the current load balance degree corresponding to each particle.
The individual optimal position may refer to the particle position at which a particle's load balance degree value has been largest over the iterations so far, and the global optimal position may refer to the particle position whose load balance degree value is the largest among all particles up to the current iteration; the global optimal position is thus selected from among the individual optimal positions.
Specifically, for each particle, the current load balance degree value is compared with the load balance degree value corresponding to the particle's current individual optimal position, and the position with the larger value is determined and updated as the current individual optimal position; the load balance degree value corresponding to the current individual optimal position is then compared with the load balance degree value corresponding to the current global optimal position determined in the previous iteration, and the position with the larger load balance degree value is determined and updated as the current global optimal position.
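A minimal sketch of this update of the individual and global optimal positions, with illustrative names, might look as follows:

    def update_bests(positions, fitnesses, pbest_pos, pbest_fit, gbest_pos, gbest_fit):
        """Keep, per particle, the position with the largest load balance degree seen so far,
        and keep the best position across the whole swarm."""
        for i, (pos, fit) in enumerate(zip(positions, fitnesses)):
            if fit > pbest_fit[i]:            # better than this particle's own history
                pbest_fit[i], pbest_pos[i] = fit, list(pos)
            if fit > gbest_fit:               # better than the swarm-wide best so far
                gbest_fit, gbest_pos = fit, list(pos)
        return pbest_pos, pbest_fit, gbest_pos, gbest_fit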
And S270, if the current iteration number is smaller than the preset iteration number, updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the maximum value of the server number corresponding to each particle, and returning to execute the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
The preset iteration number may refer to a preset iteration number, and is used to limit the total iteration number.
Specifically, if the current iteration number is less than the preset iteration number G, the current particle position corresponding to each particle is determined and updated based on the current individual optimal position, the current global optimal position and the maximum value of the server number corresponding to the particle, by applying to the current particle position X_i^(g) the mutation operation f_1, which uses the current particle velocity V_i^(g) as the mutation probability, followed by the crossover operation f_2, an OX crossover operation, i.e. an order crossover operation, whose crossover partner is selected by a probability selection operation: the individual historical optimal position is crossed with the current position with probability c_1, and the global optimal position is crossed with the current position with probability c_2, where 0 < c_1 < 1, 0 < c_2 < 1 and c_1 + c_2 = 1. The current particle velocity corresponding to each particle is then determined and updated as V_i^(g+1) = d, where d is the ratio of the number of positions in which the g-th generation position differs from the (g+1)-th generation position to the total number of positions. The operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle is then performed again, so as to determine the new current load balance degree corresponding to each particle.
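For example, taking the task scheduling vector D = {1, 3, 1, 2} as the g-th generation position and D' = {1, 2, 1, 1} as the (g+1)-th generation position, the two positions differ at the 2nd and 4th elements, so d = 2/4 = 0.5 and the updated particle velocity is V_i^(g+1) = 0.5.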
And S280, if the current iteration times are equal to the preset iteration times, detecting whether the current global optimal position meets the task resource constraint condition, and if so, determining the current global optimal position as the target global optimal position after iteration.
The task resource constraint conditions may include, but are not limited to: the sum of the CPU resources required by all tasks executed in parallel on each node server is less than or equal to the upper limit of the CPU resources of that node server; and the sum of the memory resources required by all tasks executed in parallel on each node server is less than or equal to the upper limit of the memory resources of that node server.
Specifically, if the current iteration number is equal to the preset iteration number G, whether the current global optimal position satisfies the task resource constraint conditions is detected, for example, whether the sum of the CPU resources required by all tasks executed in parallel on each node server is less than or equal to the upper limit of the CPU resources of that node server, and whether the sum of the memory resources required by all tasks executed in parallel on each node server is less than or equal to the upper limit of the memory resources of that node server. If any task resource constraint condition cannot be met, the preset iteration number is increased or node servers are added, and the iteration continues; if all task resource constraint conditions are met, the current global optimal position is determined as the iterated target global optimal position.
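As an illustrative sketch of this constraint check (the data layout and names are assumptions), both resource conditions can be tested for a candidate position as follows:

    def satisfies_constraints(vector, tasks, cpu_caps, mem_caps):
        """vector[j] = server number (1-based) of task j; tasks[j] = (cpu_demand, mem_demand);
        cpu_caps/mem_caps are the per-server resource upper limits."""
        n = len(cpu_caps)
        cpu_sum = [0.0] * n
        mem_sum = [0.0] * n
        for (cpu, mem), server_no in zip(tasks, vector):
            cpu_sum[server_no - 1] += cpu
            mem_sum[server_no - 1] += mem
        # Both conditions must hold for every node server.
        return all(cpu_sum[i] <= cpu_caps[i] and mem_sum[i] <= mem_caps[i] for i in range(n))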
S290, determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
According to the technical scheme of this embodiment, the maximum value of the server number is determined based on the total number of node servers; the particle position and particle velocity corresponding to each particle in the particle swarm are initialized, the particle position being characterized by a task scheduling vector; the current load balance degree corresponding to each particle is determined based on the current particle position corresponding to the particle; the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm are updated based on the current load balance degree; if the current iteration number is smaller than the preset iteration number, the current particle position and current particle velocity corresponding to each particle are updated based on the current individual optimal position, the current global optimal position and the maximum value of the server number, and the operation of determining the current load balance degree is performed again; and if the current iteration number is equal to the preset iteration number, whether the current global optimal position satisfies the task resource constraint condition is detected, and if so, the current global optimal position is determined as the iterated target global optimal position. In this way the task scheduling vector is iterated, i.e. solved, the diversity of the algorithm's iteration results is improved, and the iterated target global optimal position is obtained, so that the target scheduling strategy is determined through the target global optimal position, the problem of discretized task scheduling optimization is solved, and the accuracy of task scheduling is ensured.
EXAMPLE III
Fig. 3 is a flowchart of a distributed task scheduling method according to a third embodiment of the present invention, and this embodiment describes in detail the step "updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position, and the maximum server number value corresponding to each particle" based on the above embodiments. Wherein explanations of the same or corresponding terms as those of the above-disclosed embodiments are omitted.
As shown in fig. 3, the method includes:
s310, acquiring the number of the tasks to be scheduled and the total number of the node servers corresponding to the distributed node servers.
And S320, constructing a task scheduling vector to be optimized based on the number of the tasks.
The elements in the task scheduling vector correspond to the tasks one to one, and each element refers to a discrete server number corresponding to the node server to which the corresponding task is allocated.
And S330, determining the maximum value of the server number based on the total number of the node servers.
S340, initializing a particle position and a particle speed corresponding to each particle in the particle swarm, wherein the particle position is represented by a task scheduling vector.
S350, determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
And S360, updating the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm based on the current load balance degree corresponding to each particle.
And S370, if the current iteration number is smaller than the preset iteration number, for each particle, comparing the random probability with the current particle velocity corresponding to the particle and with the preset probability, respectively.
The random probability may be a randomly generated probability, represented by r; for example, the random probability may be 0.4, i.e. r = 0.4. The preset probability may refer to a preset probability whose magnitude is between 0 and 1. The preset probabilities may be represented by c_1 and c_2, with 0 < c_1 < 1, 0 < c_2 < 1 and c_1 + c_2 = 1. For example, the preset probabilities may be 0.3 and 0.7, i.e. c_1 = 0.3 and c_2 = 0.7.
Specifically, if the current iteration number is less than the preset iteration number G, then for each particle the random probability r is compared with the current particle velocity V_i^(g) corresponding to the particle and with the preset probabilities c_1 and c_2, respectively.
And S380, determining at least one target operation from the mutation operation, the first crossover operation and the second crossover operation based on the comparison result.
The mutation operation is an operation of mutating the current particle position; f_1 is the mutation operation that uses the current particle velocity V_i^(g) as the mutation probability. f_2 is the OX crossover operation, i.e. the order crossover operation. The first crossover operation is an operation of crossing the current particle position with the current individual optimal position, and the second crossover operation is an operation of crossing the current particle position with the current global optimal position. The target operation may refer to an operation used for determining the updated current particle position.
Specifically, based on the comparison result: if r < V_i^(g) and r ≤ c_1, the mutation operation and the first crossover operation are taken as the target operations; if r < V_i^(g) and r > c_1, which indicates the case selected with probability c_2, the mutation operation and the second crossover operation are taken as the target operations; if r ≥ V_i^(g) and r ≤ c_1, the first crossover operation is taken as the target operation; and if r ≥ V_i^(g) and r > c_1, which indicates the case selected with probability c_2, the second crossover operation is taken as the target operation.
Illustratively, S380 may include: if the random probability is smaller than the current particle speed corresponding to the particle, the mutation operation is taken as a target operation; if the random probability is smaller than or equal to the preset probability, taking the first cross operation as a target operation; and if the random probability is greater than the preset probability, taking the second cross operation as a target operation.
Specifically, if the random probability r is smaller than the current particle velocity V_i^(g) corresponding to the particle, the mutation operation is taken as a target operation; if the random probability r is less than or equal to the preset probability c_1, the first crossover operation is taken as a target operation; and if the random probability r is larger than the preset probability c_1, the second crossover operation is taken as a target operation.
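A minimal sketch of this branching, assuming the random probability r is drawn uniformly from [0, 1) and using illustrative names:

    import random

    def select_target_operations(velocity: float, c1: float) -> list[str]:
        """Return the target operation(s) for one particle, per the comparison rules above."""
        r = random.random()                  # random probability r
        ops = []
        if r < velocity:                     # r below the particle's velocity: mutate first
            ops.append("mutation")
        if r <= c1:                          # then crossover with the individual optimal position
            ops.append("crossover_with_pbest")
        else:                                # otherwise crossover with the global optimal position
            ops.append("crossover_with_gbest")
        return ops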
S390, performing at least one target operation on the current particle position corresponding to the particle, and determining an updated current particle position corresponding to the particle.
Specifically, if there is only one target operation, the target operation is directly executed to determine the updated current particle position corresponding to the particle; if there are two target operations, the mutated particle position is first calculated based on the mutation operation, and the updated current particle position corresponding to the particle is then calculated and determined based on the mutated particle position and the other target operation.
Exemplarily, S390 may include: if the target operations comprise the mutation operation and the first crossover operation, performing the mutation operation on the current particle position corresponding to the particle to obtain a mutated particle position, performing the crossover operation on the mutated particle position and the current individual optimal position, and taking the crossed particle position as the updated current particle position;
and if the target operations comprise the mutation operation and the second crossover operation, performing the mutation operation on the current particle position corresponding to the particle to obtain a mutated particle position, performing the crossover operation on the mutated particle position and the current global optimal position, and taking the crossed particle position as the updated current particle position.
Specifically, if both the mutation operation and the first crossover operation are target operations, the mutated particle position X'_i^(g) is first calculated based on the mutation operation, and the mutated particle position corresponding to the particle is then crossed with the current individual optimal position, so that the updated current particle position X_i^(g+1) corresponding to the particle is calculated and determined. If both the mutation operation and the second crossover operation are target operations, the mutated particle position X'_i^(g) is first calculated based on the mutation operation, and the mutated particle position corresponding to the particle is then crossed with the current global optimal position, so that the updated current particle position X_i^(g+1) corresponding to the particle is calculated and determined.
It should be noted that, if only the first crossover operation is the target operation, the crossover operation is performed on the current particle position corresponding to the particle and the current individual optimal position, so as to calculate and determine the updated current particle position X_i^(g+1); and if only the second crossover operation is the target operation, the crossover operation is performed on the current particle position corresponding to the particle and the current global optimal position, so as to calculate and determine the updated current particle position X_i^(g+1).
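The following sketch shows one possible form of the two operators. The mutation resamples each element with probability equal to the particle velocity, and the crossover copies a random contiguous segment from the partner (the individual or global optimal position) into the mutated position; the patent specifies an OX (order) crossover, so this segment-based crossover is only an assumed stand-in suited to positions that are not permutations.

    import random

    def mutate(position: list[int], velocity: float, num_servers: int) -> list[int]:
        """f_1 sketch: resample each element to a random server number with probability `velocity`."""
        return [random.randint(1, num_servers) if random.random() < velocity else s
                for s in position]

    def crossover(position: list[int], partner: list[int]) -> list[int]:
        """f_2 stand-in: copy a random contiguous segment of the partner into the position."""
        child = list(position)
        lo, hi = sorted(random.sample(range(len(position)), 2))
        child[lo:hi + 1] = partner[lo:hi + 1]
        return child

    # e.g. mutation followed by crossover with the current individual optimal position
    x = [1, 3, 1, 2]
    x_mut = mutate(x, velocity=0.5, num_servers=3)
    x_new = crossover(x_mut, partner=[1, 2, 1, 1])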
S391, if a target element whose value is larger than the maximum value of the server number exists in the updated current particle position, performing element updating on the target element to obtain the updated current particle position.
Specifically, if it is detected that a target element larger than the maximum value of the server number exists in the updated current particle position, which indicates that the task corresponding to the target element is allocated to a node server that does not exist, the target element needs to be updated to the server number of the first node server, and the result is used as the updated current particle position. For example, there are 3 node servers, so the maximum value of the server number is 3; if an element in the updated current particle position is detected to be 4, that element is a target element and is updated to the server number of the first node server, that is, the updated target element is 1.
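A short sketch of this repair step (names are illustrative):

    def repair(position: list[int], max_server_no: int) -> list[int]:
        """Reassign any element that exceeds the maximum server number to the first server."""
        return [s if s <= max_server_no else 1 for s in position]

    print(repair([1, 4, 2, 3], max_server_no=3))  # [1, 1, 2, 3]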
And S392, determining the updated current particle speed corresponding to the particle according to the current particle position before updating and the current particle position after updating corresponding to the particle, and returning to execute the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
Specifically, the updated current particle velocity corresponding to the particle may be determined according to the difference between the current particle position before updating and the current particle position after updating corresponding to the particle, and the operations from S350 to S392 are performed again until the current iteration number is equal to the preset iteration number, at which point S393 may be executed.
For example, the step in S392 of determining the updated current particle velocity corresponding to the particle according to the current particle position before updating and the current particle position after updating may include: comparing the current particle position before updating corresponding to the particle with the current particle position after updating, determining the number of element positions at which the two hold different elements, and determining the ratio between this number and the total number of element positions; and determining the updated current particle velocity corresponding to the particle according to the ratio.
Specifically, the current particle position before updating and the current particle position after updating corresponding to the particle may be compared, the number of element positions at which the two differ may be determined, and the ratio d between this number and the total number of element positions may be computed; according to the ratio, the updated current particle velocity corresponding to the particle is determined as V_i^(g+1) = d.
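Expressed as an illustrative sketch:

    def updated_velocity(before: list[int], after: list[int]) -> float:
        """V_i^(g+1) = d: fraction of element positions at which the two positions differ."""
        diff = sum(1 for a, b in zip(before, after) if a != b)
        return diff / len(before)

    print(updated_velocity([1, 3, 1, 2], [1, 2, 1, 1]))  # 0.5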
And S393, if the current iteration times are equal to the preset iteration times, detecting whether the current global optimal position meets a task resource constraint condition, and if so, determining the current global optimal position as an iterated target global optimal position.
And S394, determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
According to the technical scheme of this embodiment, when the current iteration number is smaller than the preset iteration number, the random probability is compared with the current particle velocity and the preset probability corresponding to each particle, and at least one target operation is determined from the mutation operation, the first crossover operation and the second crossover operation based on the comparison result. The particle positions and particle velocities are thus updated through the mutation operation, the first crossover operation and the second crossover operation, which realizes scheduling of discretized tasks while improving the diversity of the algorithm's iteration results, thereby further ensuring the accuracy of task scheduling.
Example four
Fig. 4 is a flowchart of a distributed task scheduling method according to a fourth embodiment of the present invention, where the present embodiment describes in detail the steps of "determining a maximum value of a server number based on the total number of node servers", "updating a current particle position and a current particle speed corresponding to each particle based on a current individual optimal position, a current global optimal position, and a maximum value of the server number corresponding to each particle", and "when the current iteration number is equal to a preset iteration number" based on the above embodiment. Wherein explanations of the same or corresponding terms as those used in the above-disclosed embodiments are omitted.
As shown in fig. 4, the method includes:
and S410, acquiring the number of the tasks to be scheduled currently and the total number of the node servers corresponding to the distributed node servers.
And S420, constructing a task scheduling vector to be optimized based on the number of the tasks.
The elements in the task scheduling vector correspond to the tasks one to one, and each element refers to a discrete server number corresponding to the node server to which the corresponding task is allocated.
And S430, initializing the number of currently allocatable node servers.
The number of currently allocatable node servers after initialization is less than the total number of node servers.
Specifically, if the total number of node servers is 6, the number of currently allocatable node servers after initialization may be, for example, 2, so that as few node servers as possible are used to achieve reasonable allocation of tasks, server resources are saved, the number of iterations can be relatively reduced, and the task scheduling efficiency is improved.
S440, determining the maximum value of the current server number of the current iteration based on the number of the node servers which can be currently allocated.
Specifically, if the number of currently assignable node servers is 3, it may be determined that the maximum value of the current server number of the current iteration is 3.
S450, initializing a particle position and a particle speed corresponding to each particle in the particle swarm, wherein the particle position is represented by a task scheduling vector.
And S460, determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
And S470, updating the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm based on the current load balance degree corresponding to each particle.
And S480, if the current iteration number is smaller than the preset iteration number, updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the current server number maximum value of the current iteration corresponding to each particle, and returning to execute the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle.
Specifically, if the current iteration number is smaller than the preset iteration number G, the current particle position corresponding to each particle is determined and updated based on the current individual optimal position, the current global optimal position and the current maximum value of the server number of the current iteration corresponding to the particle, by applying to the current particle position X_i^(g) the mutation operation f_1, which uses the current particle velocity V_i^(g) as the mutation probability, followed by the crossover operation f_2, an OX crossover operation, i.e. an order crossover operation, whose crossover partner is selected by a probability selection operation: the individual historical optimal position is crossed with the current position with probability c_1, and the global optimal position is crossed with the current position with probability c_2, where 0 < c_1 < 1, 0 < c_2 < 1 and c_1 + c_2 = 1. The current particle velocity corresponding to each particle is then determined and updated as V_i^(g+1) = d, where d is the ratio of the number of positions in which the g-th generation position differs from the (g+1)-th generation position to the total number of positions. The operations of S460 to S480 are then performed again, starting with the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle so as to determine the new current load balance degree, until the current iteration number is equal to the preset iteration number, and S490 is performed.
And S490, if the current iteration number is equal to the preset iteration number, detecting whether the current global optimal position meets the task resource constraint condition, and if so, determining the current global optimal position as the target global optimal position after iteration.
Exemplarily, "when the current iteration number is equal to the preset iteration number" in S490 may further include: and if the current global optimal position is detected not to meet the task resource constraint condition and the number of the currently allocable node servers is smaller than the total number of the node servers, updating the number of the currently allocable node servers by adding 1, returning to execute the operation of determining the maximum value of the current server number of the current iteration based on the number of the currently allocable node servers.
Specifically, if the current iteration number is equal to a preset iteration number G, detecting whether the current global optimal position meets a task resource constraint condition, for example, the sum of CPU resources required by all parallel tasks of each node server is less than or equal to the upper limit of CPU resources of the node server, and the sum of memory resources required by all parallel tasks of each node server is less than or equal to the upper limit of memory resources of the node server; if any task resource constraint condition cannot be met and the number of currently allocable node servers is smaller than the total number of node servers, the number of currently allocable node servers is updated by adding 1, for example, the number is increased from 2 node servers to 3 node servers, and the operations from S440 to S490 are executed until the task resource constraint condition is met, and S491 may be executed.
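The overall control flow of this embodiment, with the allocatable server count grown on demand, could be sketched as follows; run_dpso stands for the inner iterative optimization of S440 to S490 and, like the other names, is an assumption for illustration only:

    def schedule_with_minimal_servers(total_servers: int, initial_servers: int,
                                      run_dpso, satisfies_constraints) -> list[int]:
        """Grow the number of allocatable node servers by 1 until the best position found
        by the inner discrete particle swarm loop satisfies the task resource constraints."""
        allocatable = initial_servers        # initialized to fewer than total_servers
        while True:
            best_position = run_dpso(max_server_no=allocatable)  # S440-S480 inner iterations
            if satisfies_constraints(best_position) or allocatable >= total_servers:
                return best_position         # target global optimal position
            allocatable += 1                 # S490: allow one more node server and retry

When the allocatable count reaches the total number of node servers, the sketch simply returns the best position found; the patent additionally allows increasing the preset iteration number in that case.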
S491, a target scheduling strategy is determined based on a target task scheduling vector corresponding to the target global optimal position.
According to the technical scheme of this embodiment, the number of allocatable node servers is initialized such that the number of currently allocatable node servers after initialization is smaller than the total number of node servers, and the current maximum value of the server number of the current iteration is determined based on the number of currently allocatable node servers. In this way, reasonable task allocation is realized using as few node servers as possible, server resources are saved, the number of iterations can be relatively reduced, and the task scheduling efficiency is improved.
The following is an embodiment of the distributed task scheduling apparatus provided in the embodiments of the present invention, and the apparatus and the distributed task scheduling method in the embodiments described above belong to the same inventive concept, and details that are not described in detail in the embodiment of the distributed task scheduling apparatus may refer to the embodiment of the distributed task scheduling method described above.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a distributed task scheduling apparatus according to a fifth embodiment of the present invention. As shown in fig. 5, the apparatus includes: a quantity obtaining module 510, a scheduling vector construction module 520, a global optimal position obtaining module 530, and a scheduling policy module 540.
The number obtaining module 510 is configured to obtain the number of tasks to be currently scheduled and the total number of node servers corresponding to the distributed node servers; a scheduling vector construction module 520, configured to construct a task scheduling vector to be optimized based on the number of tasks, where elements in the task scheduling vector correspond to the tasks one to one, and each element is a discrete server number corresponding to a node server to which a corresponding task is allocated; a global optimal position obtaining module 530, configured to perform global position iterative optimization on a task scheduling vector as a particle position by taking a maximum load balance as an optimization target based on a discrete particle swarm optimization mode, a task resource constraint condition and a total number of node servers, and obtain an iterated target global optimal position, where the discrete particle swarm optimization mode updates the particle position and the particle speed based on a selection operation, a crossover operation and a variation operation in a genetic algorithm; and the scheduling policy module 540 is configured to determine a target scheduling policy based on the target task scheduling vector corresponding to the target global optimal position.
According to the technical scheme, the number of tasks currently to be scheduled and the total number of node servers corresponding to the distributed node servers are obtained; a task scheduling vector to be optimized is constructed based on the number of tasks, so that the discrete tasks are encoded into the task scheduling vector; based on the improved discrete particle swarm optimization mode, the task resource constraint condition and the total number of node servers, global position iterative optimization is performed with the maximum load balance degree as the optimization target and the task scheduling vector as the particle position, and the iterated target global optimal position is obtained; and the target scheduling strategy is determined based on the target task scheduling vector corresponding to the target global optimal position. The task scheduling vector is thus iterated to obtain the target global optimal position, from which the target scheduling strategy is determined, so that the problem of discretized task scheduling optimization is solved and the accuracy of task scheduling is ensured.
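For readability, the task scheduling vector encoding described above may be sketched as follows in Python; the function name and the use of random initialization are illustrative assumptions rather than the scheme's concrete implementation.

import random

def random_schedule_vector(num_tasks, max_server_number):
    # One element per task; each element is a discrete server number
    # in the range [0, max_server_number].
    return [random.randint(0, max_server_number) for _ in range(num_tasks)]

# Example: six tasks spread over servers numbered 0..2, e.g. [2, 0, 1, 1, 2, 0]
print(random_schedule_vector(6, 2))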
Optionally, the global optimal position obtaining module 530 may include:
the maximum number determining submodule is used for determining the maximum number of the servers based on the total number of the node servers;
the initialization submodule is used for initializing a particle position and a particle speed corresponding to each particle in the particle swarm, wherein the particle position is represented by using a task scheduling vector;
the load balance degree determining submodule is used for determining the current load balance degree corresponding to each particle on the basis of the current particle position corresponding to each particle;
the optimal position updating submodule is used for updating the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm based on the current load balance degree corresponding to each particle;
the particle data updating submodule is used for updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the server number maximum value corresponding to each particle if the current iteration number is smaller than the preset iteration number, and returning to execute the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle;
and the target global optimal position determining submodule is used for detecting whether the current global optimal position meets the task resource constraint condition or not if the current iteration times are equal to the preset iteration times, and determining the current global optimal position as the target global optimal position after iteration if the current global optimal position meets the task resource constraint condition.
Optionally, the load balance determination submodule is specifically configured to: determining at least one allocation task corresponding to each target node server for executing the task based on the current particle position corresponding to each particle; determining the average node resource utilization rate corresponding to each target node server according to the total execution time of the distributed tasks corresponding to each target node server and the execution time, CPU resources and memory resources required by the execution of each distributed task; determining the average unit resource utilization rate based on the average node resource utilization rates; and determining the current load balance degree corresponding to each particle based on the average node resource utilization rate and the average unit resource utilization rate.
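A hedged sketch of this load-balance computation is given below; the exact utilization and balance formulas are those defined in the earlier method embodiments, and the time-weighted average of CPU and memory demand and the 1/(1 + deviation) balance measure used here are simplifying assumptions.

def load_balance_degree(schedule, tasks, num_servers):
    # tasks[i] = {"time": ..., "cpu": ..., "mem": ...}; schedule[i] = server number.
    # Assumes a non-empty schedule and positive execution times.
    utilizations = []
    for s in range(num_servers):
        assigned = [tasks[i] for i, n in enumerate(schedule) if n == s]
        if not assigned:
            continue  # server s executes no task in this schedule
        total_time = sum(t["time"] for t in assigned)
        node_util = sum(t["time"] * (t["cpu"] + t["mem"]) / 2 for t in assigned) / total_time
        utilizations.append(node_util)
    mean_util = sum(utilizations) / len(utilizations)   # average unit resource utilization
    deviation = sum(abs(u - mean_util) for u in utilizations) / len(utilizations)
    return 1.0 / (1.0 + deviation)                      # larger value = better balanced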
Optionally, the particle data update sub-module may include:
the probability comparison unit is used for comparing the random probability with the current particle speed and the preset probability corresponding to each particle respectively for each particle;
a target operation determination unit configured to determine at least one target operation from a mutation operation, a first crossover operation, and a second crossover operation based on the comparison result, wherein the mutation operation is an operation of mutating the current particle position, the first crossover operation is an operation of crossing the current particle position and the current individual optimal position, and the second crossover operation is an operation of crossing the current particle position and the current global optimal position;
a current particle position determining unit, configured to perform at least one target operation on a current particle position corresponding to the particle, and determine an updated current particle position corresponding to the particle;
a current particle position obtaining unit, configured to, if it is detected that the updated current particle position contains a target element larger than the maximum server number, perform element updating on the target element to obtain the updated current particle position;
and the current particle speed determining unit is used for determining the updated current particle speed corresponding to the particle according to the current particle position before updating and the current particle position after updating corresponding to the particle.
Optionally, the target operation determining unit is specifically configured to: if the random probability is smaller than the current particle speed corresponding to the particle, the mutation operation is taken as a target operation; if the random probability is smaller than or equal to the preset probability, taking the first cross operation as a target operation; and if the random probability is greater than the preset probability, taking the second cross operation as a target operation.
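The comparison rule above may be sketched as follows; whether a single random probability is drawn and reused for both comparisons, or a fresh one is drawn per comparison, is not fixed here, so a single draw is used as an assumption.

import random

def select_target_operations(current_velocity, preset_probability):
    r = random.random()
    operations = []
    if r < current_velocity:          # higher velocity makes mutation more likely
        operations.append("mutation")
    if r <= preset_probability:       # cross with the current individual optimal position
        operations.append("crossover_with_pbest")
    else:                             # cross with the current global optimal position
        operations.append("crossover_with_gbest")
    return operations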
Optionally, the current particle position determining unit is specifically configured to: if the target operation comprises a mutation operation and a first cross operation, performing the mutation operation on the current particle position corresponding to the particle, obtaining a mutated particle position, performing the cross operation on the mutated particle position corresponding to the particle and the current individual optimal position, and taking the crossed particle position as an updated current particle position; and if the target operation comprises a mutation operation and a second intersection operation, performing the mutation operation on the current particle position corresponding to the particle, obtaining a mutated particle position, performing intersection operation on the mutated particle position corresponding to the particle and the current global optimal position, and taking the intersected particle position as an updated current particle position.
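The mutation and crossover operators themselves are not spelled out in this unit, so the following sketch uses common discrete choices (single-element mutation, single-point crossover) purely as assumptions; it also shows one possible element update for values above the maximum server number.

import random

def mutate(position, max_server_number):
    pos = list(position)
    i = random.randrange(len(pos))
    pos[i] = random.randint(0, max_server_number)   # reassign one task at random
    return pos

def crossover(position, guide):
    # Single-point crossover between the particle position and a guide vector
    # (the current individual optimal position or the current global optimal position).
    point = random.randrange(1, len(position))
    return list(position[:point]) + list(guide[point:])

def clip_to_range(position, max_server_number):
    # Element update for any target element above the maximum server number.
    return [e if e <= max_server_number else random.randint(0, max_server_number)
            for e in position]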
Optionally, the current particle velocity determining unit is specifically configured to: comparing the current particle position before updating corresponding to the particle with the current particle position after updating, and determining the target digit of different elements corresponding to the same element position and the ratio between the target digit and the total digit; and determining the updated current particle speed corresponding to the particle according to the ratio.
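This ratio-based velocity may be sketched directly; it treats the velocity as the fraction of element positions whose server numbers changed between the pre-update and updated particle positions.

def updated_velocity(old_position, new_position):
    differing = sum(1 for a, b in zip(old_position, new_position) if a != b)
    return differing / len(old_position)   # value in [0, 1]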
Optionally, the maximum-number-value determining sub-module is specifically configured to: initializing the number of assignable node servers, wherein the number of the assignable node servers after initialization is smaller than the total number of the node servers; determining the maximum value of the current server number of the current iteration based on the number of the node servers which can be currently allocated;
the particle data update sub-module is specifically configured to: updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the current server number maximum value of the current iteration corresponding to each particle;
the global optimal position obtaining module 530 further includes:
and the server number updating submodule is used for updating the number of the currently allocable node servers by adding 1 if the current global optimal position is detected not to meet the task resource constraint condition and the number of the currently allocable node servers is smaller than the total number of the node servers, and returning to execute the operation of determining the maximum value of the current server number of the current iteration based on the number of the currently allocable node servers.
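Taken together, these two sub-modules imply an outer loop that grows the pool of allocable node servers only when needed; a sketch under that reading is given below, where run_pso and check_constraints are assumed helper callables rather than interfaces defined by this scheme.

def schedule_with_growing_pool(tasks, nodes, run_pso, check_constraints,
                               initial_servers, iterations):
    # Initialize the allocable server count below the total number of node servers.
    allocatable = max(1, min(initial_servers, len(nodes) - 1))
    while True:
        max_server_number = allocatable - 1       # current server number maximum value
        best = run_pso(tasks, max_server_number, iterations)
        if check_constraints(best, tasks, nodes[:allocatable]):
            return best                           # resource constraints met
        if allocatable < len(nodes):
            allocatable += 1                      # add one more allocable node server
        else:
            return best                           # all node servers already in use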
The distributed task scheduling device provided by the embodiment of the invention can execute the distributed task scheduling method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the distributed task scheduling apparatus, the modules included in the embodiment are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, the specific names of the functional modules are only for convenience of distinguishing from each other and are not used for limiting the protection scope of the present invention.
EXAMPLE six
FIG. 6 illustrates a schematic structural diagram of an electronic device 10 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, and the like, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various suitable actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data necessary for the operation of the electronic apparatus 10 can also be stored. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 11 performs the various methods and processes described above, such as the distributed task scheduling method.
It should be understood that the steps in the various flows shown above may be reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, which is not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A distributed task scheduling method, comprising:
acquiring the number of tasks to be scheduled currently and the total number of node servers corresponding to the distributed node servers;
based on the task quantity, constructing a task scheduling vector to be optimized, wherein elements in the task scheduling vector correspond to the tasks one to one, and each element refers to a discrete server number corresponding to a node server to which the corresponding task is allocated;
based on a discrete particle swarm optimization mode, a task resource constraint condition and the total number of the node servers, taking the maximum load balance as an optimization target, and performing global position iterative optimization by taking the task scheduling vector as a particle position to obtain an iterated target global optimal position, wherein the discrete particle swarm optimization mode updates the particle position and the particle speed based on selection operation, cross operation and variation operation in a genetic algorithm;
and determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
2. The method of claim 1, wherein based on a discrete particle swarm optimization mode, a task resource constraint condition and the total number of the node servers, with a maximized load balance as an optimization target, performing global position iterative optimization with the task scheduling vector as a particle position to obtain an iterated target global optimal position, comprising:
determining a maximum server number based on the total number of the node servers;
initializing a particle position and a particle speed corresponding to each particle in a particle swarm, wherein the particle position is characterized by the task scheduling vector;
determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle;
updating the current individual optimal position corresponding to each particle and the current global optimal position corresponding to the particle swarm based on the current load balance degree corresponding to each particle;
if the current iteration number is smaller than the preset iteration number, updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the maximum value of the server number corresponding to each particle, and returning to execute the operation of determining the current load balance degree corresponding to each particle based on the current particle position corresponding to each particle;
and if the current iteration times are equal to the preset iteration times, detecting whether the current global optimal position meets the task resource constraint condition, and if so, determining the current global optimal position as the target global optimal position after iteration.
3. The method of claim 2, wherein determining the current load balancing level for each particle based on the current particle location for each particle comprises:
determining at least one allocation task corresponding to each target node server for executing the task based on the current particle position corresponding to each particle;
determining the average node resource utilization rate corresponding to each target node server according to the total execution time of the distributed tasks corresponding to each target node server and the execution time, CPU (Central Processing Unit) resources and memory resources required by the execution of each distributed task;
determining the average unit resource utilization rate based on each average node resource utilization rate;
and determining the current load balance degree corresponding to each particle based on the average node resource utilization rate and the average unit resource utilization rate.
4. The method of claim 2, wherein updating the current particle position and the current particle velocity for each particle based on the current individual optimal position, the current global optimal position, and the server number maximum for each particle comprises:
for each particle, comparing the random probability with the current particle speed and the preset probability corresponding to the particle;
determining at least one target operation from a mutation operation, a first crossover operation and a second crossover operation based on the comparison result, wherein the mutation operation is an operation of mutating the current particle position, the first crossover operation is an operation of crossing the current particle position and the current individual optimal position, and the second crossover operation is an operation of crossing the current particle position and the current global optimal position;
performing the at least one target operation on the current particle position corresponding to the particle, and determining an updated current particle position corresponding to the particle;
if a target element larger than the maximum server number exists in the updated current particle position, performing element updating on the target element to obtain the updated current particle position;
and determining the updated current particle speed corresponding to the particle according to the current particle position before updating and the current particle position after updating corresponding to the particle.
5. The method of claim 4, wherein determining at least one target operation from the mutation operation, the first crossover operation, and the second crossover operation based on the comparison result comprises:
if the random probability is smaller than the current particle speed corresponding to the particle, the mutation operation is taken as a target operation;
if the random probability is smaller than or equal to the preset probability, taking the first cross operation as a target operation;
and if the random probability is greater than the preset probability, taking the second cross operation as a target operation.
6. The method of claim 4, wherein performing the at least one target operation on the current particle position corresponding to the particle to determine the updated current particle position corresponding to the particle comprises:
if the target operation comprises a variation operation and a first cross operation, performing the variation operation on the current particle position corresponding to the particle, obtaining the varied particle position, performing the cross operation on the varied particle position corresponding to the particle and the current individual optimal position, and taking the crossed particle position as the updated current particle position;
and if the target operation comprises a mutation operation and a second intersection operation, performing the mutation operation on the current particle position corresponding to the particle to obtain a mutated particle position, performing the intersection operation on the mutated particle position corresponding to the particle and the current global optimal position, and taking the intersected particle position as the updated current particle position.
7. The method of claim 4, wherein determining the updated current particle velocity corresponding to the particle based on the pre-update current particle position and the updated current particle position corresponding to the particle comprises:
comparing the current particle position before updating corresponding to the particle with the current particle position after updating, and determining the target digit of different elements corresponding to the same element position and the ratio between the target digit and the total digit;
and determining the updated current particle speed corresponding to the particle according to the ratio.
8. The method of claim 2, wherein determining a maximum server number based on the total number of node servers comprises:
initializing the number of assignable node servers, wherein the number of the assignable node servers after initialization is smaller than the total number of the node servers;
determining the maximum value of the current server number of the current iteration based on the number of the node servers which can be currently allocated;
the updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the maximum value of the server number corresponding to each particle comprises:
updating the current particle position and the current particle speed corresponding to each particle based on the current individual optimal position, the current global optimal position and the current server number maximum value of the current iteration corresponding to each particle;
when the current iteration number is equal to the preset iteration number, the method further comprises the following steps:
and if the current global optimal position is detected not to meet the task resource constraint condition and the number of the currently allocable node servers is smaller than the total number of the node servers, updating the number of the currently allocable node servers by adding 1, and returning to execute the operation of determining the maximum value of the current server number of the current iteration based on the number of the currently allocable node servers.
9. A distributed task scheduler, comprising:
the quantity acquisition module is used for acquiring the quantity of the tasks to be scheduled currently and the total quantity of the node servers corresponding to the distributed node servers;
the scheduling vector construction module is used for constructing task scheduling vectors to be optimized based on the task quantity, wherein elements in the task scheduling vectors correspond to the tasks one to one, and each element refers to a discrete server number corresponding to a node server to which the corresponding task is allocated;
the global optimal position acquisition module is used for carrying out global position iterative optimization by taking the maximum load balance degree as an optimization target and taking the task scheduling vector as a particle position on the basis of a discrete particle swarm optimization mode, a task resource constraint condition and the total number of the node servers, so as to obtain an iterated target global optimal position, wherein the discrete particle swarm optimization mode is used for updating the particle position and the particle speed on the basis of selection operation, cross operation and variation operation in a genetic algorithm;
and the scheduling strategy module is used for determining a target scheduling strategy based on the target task scheduling vector corresponding to the target global optimal position.
10. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the distributed task scheduling method of any one of claims 1-8.
CN202211241630.9A 2022-10-11 2022-10-11 Distributed task scheduling method and device and electronic equipment Pending CN115509715A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211241630.9A CN115509715A (en) 2022-10-11 2022-10-11 Distributed task scheduling method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211241630.9A CN115509715A (en) 2022-10-11 2022-10-11 Distributed task scheduling method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN115509715A true CN115509715A (en) 2022-12-23

Family

ID=84510150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211241630.9A Pending CN115509715A (en) 2022-10-11 2022-10-11 Distributed task scheduling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115509715A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117631615A (en) * 2023-10-12 2024-03-01 中国电建集团山东电力管道工程有限公司 Production workshop data acquisition and processing method and system based on Internet of things equipment

Similar Documents

Publication Publication Date Title
Ghobaei-Arani A workload clustering based resource provisioning mechanism using Biogeography based optimization technique in the cloud based systems
Gan et al. Genetic simulated annealing algorithm for task scheduling based on cloud computing environment
Izakian et al. A novel particle swarm optimization approach for grid job scheduling
CN105656999B (en) A kind of cooperation task immigration method of energy optimization in mobile cloud computing environment
Jayanetti et al. Deep reinforcement learning for energy and time optimized scheduling of precedence-constrained tasks in edge–cloud computing environments
Song et al. Scheduling workflows with composite tasks: A nested particle swarm optimization approach
CN110233802B (en) Method for constructing block chain structure with one main chain and multiple side chains
CN108768716A (en) A kind of micro services routing resource and device
Velasquez et al. A rank-based mechanism for service placement in the fog
Chen et al. Topology-aware optimal data placement algorithm for network traffic optimization
Subramoney et al. Multi-swarm PSO algorithm for static workflow scheduling in cloud-fog environments
Lakhan et al. Deadline aware and energy-efficient scheduling algorithm for fine-grained tasks in mobile edge computing
Wangsom et al. Multi-objective scientific-workflow scheduling with data movement awareness in cloud
Reddy et al. MACO-MOTS: modified ant colony optimization for multi objective task scheduling in Cloud environment
NZanywayingoma et al. Effective task scheduling and dynamic resource optimization based on heuristic algorithms in cloud computing environment
CN115509715A (en) Distributed task scheduling method and device and electronic equipment
CN116633801A (en) Resource scheduling method, device, system and related equipment
Zhou et al. Deep reinforcement learning-based algorithms selectors for the resource scheduling in hierarchical cloud computing
CN115220916A (en) Automatic computing power scheduling method, device and system for video intelligent analysis platform
CN117271101A (en) Operator fusion method and device, electronic equipment and storage medium
Devagnanam et al. Design and development of exponential lion algorithm for optimal allocation of cluster resources in cloud
Li et al. On scheduling of high-throughput scientific workflows under budget constraints in multi-cloud environments
Pasdar et al. Data-aware scheduling of scientific workflows in hybrid clouds
Verma et al. A survey on energy‐efficient workflow scheduling algorithms in cloud computing
Abdellah et al. RAP-G: Reliability-aware service placement using genetic algorithm for deep edge computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination