CN111506394B - Docker Swarm container scheduling method and system - Google Patents


Info

Publication number
CN111506394B
CN111506394B (application CN202010295066.3A)
Authority
CN
China
Prior art keywords
subset
scheduling
container
node
scheduling scheme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010295066.3A
Other languages
Chinese (zh)
Other versions
CN111506394A (en)
Inventor
黄剑锋
林昊
苏庆
刘添添
李小妹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202010295066.3A priority Critical patent/CN111506394B/en
Publication of CN111506394A publication Critical patent/CN111506394A/en
Application granted granted Critical
Publication of CN111506394B publication Critical patent/CN111506394B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

According to the Docker Swarm container scheduling method and system, the load state of container nodes is measured comprehensively through five indexes: CPU, memory, I/O load, network bandwidth and disk space, which is closer to the actual conditions of container task deployment. A threshold-based dual-strategy shuffled frog leaping algorithm is applied to determine the scheduling scheme that balances the node loads best: different local search strategies are selected according to the threshold of each subset, and an inter-group learning process is added, which enlarges the search range of the sub-populations and the search opportunities of the whole population, improves local search precision, and avoids becoming trapped in local optima. The scheduling method of the invention adopts a parallelized deployment approach, which can effectively reduce the resource fragments generated during container task deployment and maximizes the utilization of server resources.

Description

Docker Swarm container scheduling method and system
Technical Field
The invention belongs to the field of computer virtualization, and particularly relates to a Docker Swarm container scheduling method and scheduling system.
Background
Docker Swarm is a widely used container orchestration system: Swarm manages a Docker cluster and runs containers on suitable nodes according to a scheduling strategy. Docker Swarm provides three scheduling strategies, Spread, Binpack and Random. However, the algorithms behind these three strategies are too simple: they do not consider the resource requirements of tasks, do not distinguish between tasks when allocating nodes, and do not optimize according to the load patterns and available resource characteristics of the nodes; they do not support multi-task scheduling, which hinders adoption on public cloud computing platforms; and the performance indexes used to measure the node load state are incomplete and do not consider the hardware performance of the nodes.
The prior art does not lack optimizations of the Docker Swarm scheduling strategy, such as considering real-time resource usage, assigning nodes with dynamic weights, and particle swarm optimization; however, these methods still have the following defects: they lack the capability of parallelized deployment scheduling, their precision is low, their error is large, and their analysis of the performance consumption of nodes and containers is incomplete.
Disclosure of Invention
In view of this, the invention adopts a shuffled frog leaping algorithm with a dual local search strategy to deploy container tasks on a Docker cluster with a limited number of nodes, so that the load of the node cluster is balanced best and parallelized deployment is realized.
The invention discloses a Docker Swarm container scheduling method, which comprises the following steps:
S1, randomly assigning execution nodes to each scheduling scheme, and calculating, according to the total resource load value of each node, the fitness of each scheduling scheme T_i, i = 1, 2, …, m, in the scheduling scheme set S = (T_1, T_2, …, T_m), the resources including CPU, memory, I/O load, network bandwidth and disk space;
S2, dividing the m scheduling schemes into a plurality of subsets, and updating the subsets according to the policy thresholds of the subsets;
S3, remixing all the scheduling schemes into a new set, repeating step S1, determining the current global optimal solution according to the fitness of each scheduling scheme, stopping the search and outputting the current global optimal solution when the global search termination condition is met, and otherwise returning to step S2.
Preferably, step S1 is preceded by:
S0, initializing a scheduling scheme set S = (T_1, T_2, …, T_m), each scheduling scheme including n tasks, i.e., T_i = (I_1, I_2, …, I_n).
Preferably, the total resource load value of a node is calculated as
L = ω_1·x_1 + ω_2·x_2 + ω_3·x_3 + ω_4·x_4 + ω_5·x_5
where ω_i is the weight of resource load i and x_i is the load value of resource i at that node.
Preferably, the fitness of the scheduling scheme T_i is calculated as
f(T_i) = √( (1/N) · Σ_{i=1}^{N} (L_i − L̄)² )
where N is the total number of nodes, L_i is the total resource load value of the i-th node, and L̄ is the average of the total resource load values of all nodes.
Preferably, step S2 further comprises:
s201, arranging m scheduling schemes according to a descending order of fitness and equally dividing the m scheduling schemes into d subsets;
s202, updating the subset according to the strategy threshold value of the subset.
Preferably, step S201 includes:
the m scheduling schemes are arranged in a descending order according to the size of the fitness and are divided into d subsets, each subset comprises v scheduling schemes, m=d×v is met, the specific dividing process is that the scheduling scheme with the sequence number of a+ (r-1) d enters the a-th subset, and a=1, 2, …, d and r are positive integers.
Preferably, step S202 includes:
the policy threshold of the subset is evaluated by determining whether the subset satisfies f(T_b) − f(T_w) > f(T)_A, where f(T_b) is the optimal fitness of the subset, f(T_w) is the worst fitness of the subset, and f(T)_A is the average fitness of the subset; if and only if this condition is satisfied, the worst solution within the subset is updated using the following formulas,
d_s1 = rand() × [rand₂(0,1) × T_b^p + (1 − rand()) × T_g]   (1)
T_n1 = T_w + d_s1   (2)
and the two second-worst solutions within the subset are also updated with the following formulas,
d_s2 = rand() × (T_b^{p+1} − T_{w2,w3})   (3)
T_{n1,n2} = T_{w2,w3} + d_s2   (4)
where d_s1 and d_s2 represent the movement step sizes, rand() represents a random number in [0,1] (rand₂(0,1) likewise denotes a random number in [0,1]), T_b^p and T_b^{p+1} are the optimal solutions of the p-th subset and the (p+1)-th subset respectively, T_w is the worst solution of the p-th subset and is replaced by T_n1 after updating, T_{w2,w3} are the two second-worst solutions of the p-th subset and are replaced by T_{n1,n2} after updating, and T_g is the optimal solution of the scheduling scheme set S.
Preferably, step S202 further includes:
when the subset does not satisfy f(T_b) − f(T_w) > f(T)_A, two subsets are randomly selected and cross mutation is performed on the optimal solutions of the two subsets according to formulas (5) and (6), where rand() represents a random number in [0,1], T_b^x is the optimal solution of subset x and is replaced by the crossed-and-mutated solution T_b^{x'}, and T_b^y is the optimal solution of subset y and is replaced by the crossed-and-mutated solution T_b^{y'}.
Preferably, the global search termination condition is that the maximum number of iterations k is reached.
Preferably, when the resource is the CPU, its resource load value at node i is calculated by the formula
x_1 = (Σ_{j=1}^{N_i} U_j) / (C × HZ_i)
where x_1 represents the CPU consumption value of node i, N_i is the number of tasks executed by the node, U_j is the CPU occupancy required by task j, C is the number of CPU cores of node i, and HZ_i is the CPU frequency of node i.
Preferably, when the resource is memory, its resource load value at node i is calculated by the formula
x_2 = (Σ_{j=1}^{N_i} Mem_j) / Mem_i
where x_2 represents the memory consumption value of node i, N_i is the number of tasks executed by the node, Mem_j is the memory occupancy required by task j, and Mem_i is the memory size of node i.
Preferably, when the resource is I/O, its resource load value at node i is calculated by the formula
x_3 = (Σ_{j=1}^{N_i} IOPS_j) / IOPS_i^max
where x_3 represents the I/O load of node i, N_i is the number of tasks executed by the node, IOPS_j is the input/output per second of task j, and IOPS_i^max is the input/output per second of node i at full load; IOPS_j and IOPS_i^max are determined by t_s, the waiting time of a single I/O of task j, d, the size of a single I/O data block, t_w, the addressing time of the hard disk, r, the rotational speed of the hard disk, and l, the maximum transmission rate of the hard disk.
Preferably, when the resource is network bandwidth, its resource load value at node i is calculated by the formula
x_4 = (Σ_{j=1}^{N_i} Net_j) / Net_i
where x_4 represents the network bandwidth load value of node i, N_i is the number of tasks executed by the node, Net_j is the network bandwidth required by task j, and Net_i is the network bandwidth of node i.
Preferably, when the resource is disk space, its resource load value at node i is calculated by the formula
x_5 = (Σ_{j=1}^{N_i} Disk_j) / Disk_i
where x_5 represents the disk usage of node i, N_i is the number of tasks executed by the node, Disk_j is the disk occupancy required by task j, and Disk_i is the disk capacity of node i.
On the other hand, the invention also provides a scheduling system for implementing the above Docker Swarm container scheduling method, which comprises container nodes, a fitness calculating unit, an updating unit and a global searching unit, wherein
the fitness calculating unit is used for calculating, according to the total resource load value of each container node, the fitness of each scheduling scheme T_i, i = 1, 2, …, m, in the scheduling scheme set S = (T_1, T_2, …, T_m), the resources including CPU, memory, I/O load, network bandwidth and disk space;
the updating unit is used for updating the subset according to the strategy threshold value of the scheduling scheme subset;
and the global searching unit is used for determining and outputting a global optimal solution according to the adaptability of each scheduling scheme.
Preferably, the foregoing scheduling system further includes an initializing unit configured to initialize the scheduling scheme set S = (T_1, T_2, …, T_m), each scheduling scheme including n tasks, i.e., T_i = (I_1, I_2, …, I_n).
From the above technical scheme, the invention has the following beneficial effects:
according to the Docker switch container scheduling method and system, the performance of the container node load state is comprehensively measured through five indexes including CPU, memory, I/O load, network broadband and disk space, the performance is closer to the actual container task deployment condition, a mixed frog-leaping algorithm with a double local search strategy is adopted to determine a scheduling scheme which enables the node load to be balanced most, different local search strategies are selected according to the threshold value of a subset, a learning process among groups is added, the search range of the sub-groups and the search set of the whole group are enlarged, the local search precision is improved, and the phenomenon of sinking local optimum is avoided; the scheduling method of the invention adopts a parallelization deployment thought, can effectively reduce resource fragments generated during container task deployment, and maximally utilizes server resources.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a Docker Swarm container scheduling system in accordance with one embodiment of the present invention
FIG. 2 is a block diagram of a Docker Swarm container scheduling system in accordance with another embodiment of the present invention
FIG. 3 is a flow chart of a Docker Swarm container scheduling method according to one embodiment of the invention
FIG. 4 is a schematic diagram of a search space range of a dual local search strategy according to one embodiment of the present invention
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present embodiment provides a Docker Swarm container scheduling system, which includes a plurality of container nodes, a fitness calculating unit, an updating unit and a global searching unit, wherein
execution nodes are randomly assigned to each scheduling scheme, and the fitness calculating unit calculates, according to the total resource load value of each container node, the fitness of each scheduling scheme T_i, i = 1, 2, …, m, in the scheduling scheme set S = (T_1, T_2, …, T_m), the resources including CPU, memory, I/O load, network bandwidth and disk space;
the updating unit is used for updating the subset according to the strategy threshold value of the scheduling scheme subset;
and the global searching unit is used for determining and outputting a global optimal solution according to the adaptability of each scheduling scheme.
As shown in fig. 2, in a further embodiment the scheduling system further comprises an initializing unit for initializing the scheduling scheme set S = (T_1, T_2, …, T_m), each scheduling scheme including n tasks, i.e., T_i = (I_1, I_2, …, I_n); that is, the n tasks of each scheduling scheme are randomly assigned to execution nodes in the server.
Referring to fig. 3, the present embodiment provides a Docker Swarm container scheduling method, where the scheduling steps include:
initializing a scheduling scheme set S = (T_1, T_2, …, T_m), each scheduling scheme including n tasks, i.e., T_i = (I_1, I_2, …, I_n);
The tasks of each scheduling scheme are randomly assigned to the nodes in the server, and the total resource load value of each node is calculated by the following formula:
L = ω_1·x_1 + ω_2·x_2 + ω_3·x_3 + ω_4·x_4 + ω_5·x_5
where ω_i is the weight of resource load i, x_i is the load value of resource i at the node, and ω_1 + ω_2 + ω_3 + ω_4 + ω_5 = 1. Since each index is equally important for evaluating node performance, the weight assigned to each index load value is the same, i.e., ω_1 = ω_2 = ω_3 = ω_4 = ω_5 = 0.2.
The evaluation of resource load in this embodiment takes into account CPU, memory, I/O load, network bandwidth and disk space,
the CPU consumption value is represented by the formula
Figure GDA0004105536560000063
Calculation of N i Indicating the number of tasks performed by a node, U j Represents the CPU occupation amount required by task j, C represents the CPU core number of node i, and HZ i Representing the CPU frequency of node i; />
The memory consumption value is calculated by the formula
x_2 = (Σ_{j=1}^{N_i} Mem_j) / Mem_i
where N_i is the number of tasks executed by the node, Mem_j is the memory occupancy required by task j, and Mem_i is the memory size of node i;
the input output quantity (or read-write times) IOPS per second is one of the main indexes for measuring the performance of the hard disk, and is mainly influenced by the addressing time, rotation delay, transmission time and waiting time of the hard disk, wherein the addressing time and the rotation delay are fixed parameters when the hard disk leaves a factory; the transmission time is related to the maximum transmission rate of the hard disk and the size of the single I/O data block; latency is the time that I/O is not performed in a single I/O operation. Thus, on different nodes, the demands of the container tasks on the hard disk I/O resources are adaptively changed along with the performance parameters of the hard disk of the nodes,
the I/O load value is represented by the formula
Figure GDA0004105536560000065
Calculation of N i IOPS (input/output) indicating number of tasks performed by node j IOPS representing input/output per second for task j i max Representing input/output per second at full load of node i, where
Figure GDA0004105536560000066
t s Representing the latency of a task j single I/O, d representing the size of the single I/O data block, t w Representing the addressing time of the hard disk, r representing the rotational speed of the hard disk, l representing the maximum transmission rate of the hard disk, +.>
Figure GDA0004105536560000067
The network bandwidth consumption value is calculated by the formula
x_4 = (Σ_{j=1}^{N_i} Net_j) / Net_i
where N_i is the number of tasks executed by the node, Net_j is the network bandwidth required by task j, and Net_i is the network bandwidth of node i;
the usage amount of the magnetic disk is represented by the formula
Figure GDA0004105536560000072
Calculation of N i Representing the number of tasks performed by a node, disk j Representing the Disk occupancy required by task j, disk i The disk capacity size of node i is indicated.
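To make the five load indexes and the weighted total load concrete, the following Python sketch computes x_1 through x_5 and L for a single node. The attribute names (cpu, mem, iops, net, disk, cores, freq, iops_max) are illustrative assumptions rather than terms from the patent, and the ratio forms follow the formulas as reconstructed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    cpu: float    # CPU occupancy U_j required by the task
    mem: float    # memory occupancy Mem_j
    iops: float   # I/O operations per second IOPS_j
    net: float    # network bandwidth Net_j
    disk: float   # disk occupancy Disk_j

@dataclass
class Node:
    cores: int        # number of CPU cores C
    freq: float       # CPU frequency HZ_i
    mem: float        # memory size Mem_i
    iops_max: float   # IOPS at full load
    net: float        # network bandwidth Net_i
    disk: float       # disk capacity Disk_i
    tasks: List[Task] = field(default_factory=list)

WEIGHTS = (0.2, 0.2, 0.2, 0.2, 0.2)   # equal weights ω_1 .. ω_5

def node_load(node: Node) -> float:
    """Total resource load L = ω_1·x_1 + ... + ω_5·x_5 of one node."""
    x1 = sum(t.cpu  for t in node.tasks) / (node.cores * node.freq)   # CPU load x_1
    x2 = sum(t.mem  for t in node.tasks) / node.mem                   # memory load x_2
    x3 = sum(t.iops for t in node.tasks) / node.iops_max              # I/O load x_3
    x4 = sum(t.net  for t in node.tasks) / node.net                   # network load x_4
    x5 = sum(t.disk for t in node.tasks) / node.disk                  # disk load x_5
    return sum(w * x for w, x in zip(WEIGHTS, (x1, x2, x3, x4, x5)))
```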
The total resource load value L_i of node i is determined from the consumption calculations of the above five resources, and the fitness of a scheduling scheme is calculated as
f(T_i) = √( (1/N) · Σ_{i=1}^{N} (L_i − L̄)² )
where N is the total number of nodes, L_i is the total resource load value of the i-th node, and L̄ is the average of the total resource load values of all nodes.
The optimal solution is the scheduling scheme that puts the container nodes of the server in the most balanced state; the fitness of the scheduling scheme is minimal at this time.
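A minimal sketch of this fitness evaluation, reusing Task, Node and node_load from the previous sketch, and assuming a scheduling scheme is represented as a list that maps each task to a node index:

```python
import math

def fitness(node_loads):
    """Standard deviation of the total resource load values of all N nodes;
    smaller values mean a more balanced cluster."""
    n = len(node_loads)
    mean = sum(node_loads) / n
    return math.sqrt(sum((l - mean) ** 2 for l in node_loads) / n)

def scheme_loads(scheme, tasks, nodes):
    """Place each task on the node given by the scheme (one node index per task)
    and return the per-node total loads computed by node_load()."""
    for node in nodes:
        node.tasks = []
    for task, node_idx in zip(tasks, scheme):
        nodes[node_idx].tasks.append(task)
    return [node_load(node) for node in nodes]
```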
The m scheduling schemes are arranged in descending order of fitness and divided into d subsets, each subset containing v scheduling schemes with m = d×v. The specific division process is that the scheduling scheme with sequence number a + (r−1)·d enters the a-th subset, where a = 1, 2, …, d and r is a positive integer.
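The descending sort and round-robin split into d subsets can be sketched as follows; schemes and fitnesses are assumed to be parallel lists, as produced by the fitness sketch above.

```python
def partition(schemes, fitnesses, d):
    """Sort schemes by descending fitness and deal them round-robin into d subsets,
    so the scheme at sorted position a + (r-1)*d lands in subset a."""
    order = sorted(range(len(schemes)), key=lambda i: fitnesses[i], reverse=True)
    subsets = [[] for _ in range(d)]
    for rank, idx in enumerate(order):
        subsets[rank % d].append(schemes[idx])
    return subsets
```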
Local search is performed within each subset. This embodiment optimizes the local search strategy by selecting different search update strategies through a threshold, judging the gap between the optimal fitness and the worst fitness of the subset against the average fitness of the subset. When f(T_b) − f(T_w) > f(T)_A, the distance between the local optimal solution and the local worst solution is large and the search space is large, so the mutual learning strategy is adopted: the worst solution within the subset is updated by the following formulas,
d_s1 = rand() × [rand₂(0,1) × T_b^p + (1 − rand()) × T_g]   (1)
T_n1 = T_w + d_s1   (2)
and the two second-worst solutions within the subset are also updated with the following formulas,
d_s2 = rand() × (T_b^{p+1} − T_{w2,w3})   (3)
T_{n1,n2} = T_{w2,w3} + d_s2   (4)
where d_s1 and d_s2 represent the movement step sizes, rand() represents a random number in [0,1] (rand₂(0,1) likewise denotes a random number in [0,1]), T_b^p and T_b^{p+1} are the optimal solutions of the p-th subset and the (p+1)-th subset respectively, T_w is the worst solution of the p-th subset and is replaced by T_n1 after updating, T_{w2,w3} are the two second-worst solutions of the p-th subset and are replaced by T_{n1,n2} after updating, and T_g is the optimal solution of the scheduling scheme set S;
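A sketch of the mutual learning update under one reading of formulas (1)–(4): solutions are treated as real-valued vectors (so a rounding/repair step back to valid node assignments is still needed), fit is assumed to return a scheme's fitness with smaller values being better, the two rand() factors in formula (1) are taken as the same draw, and the "next" subset wraps around as an assumption.

```python
import random

def mutual_learning_update(subsets, p, global_best, fit):
    """Update the worst and the two second-worst solutions of subset p,
    following one reading of formulas (1)-(4)."""
    best_p  = min(subsets[p], key=fit)                          # T_b^p
    best_p1 = min(subsets[(p + 1) % len(subsets)], key=fit)     # T_b^{p+1} (wrap-around assumed)
    ranked  = sorted(subsets[p], key=fit, reverse=True)         # worst solutions first
    worst, seconds = ranked[0], ranked[1:3]                     # T_w and T_{w2,w3}

    r1, r2 = random.random(), random.random()
    # (1)-(2): step the worst solution using the subset best T_b^p and the global best T_g
    d_s1 = [r1 * (r2 * b + (1 - r1) * g) for b, g in zip(best_p, global_best)]
    new_worst = [w + d for w, d in zip(worst, d_s1)]

    # (3)-(4): step the two second-worst solutions towards the next subset's best T_b^{p+1}
    new_seconds = []
    for sol in seconds:
        r = random.random()
        new_seconds.append([s + r * (b - s) for s, b in zip(sol, best_p1)])
    return new_worst, new_seconds
```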
When f(T_b) − f(T_w) < f(T)_A, the search space is small and needs to be enlarged, so the cross mutation strategy is adopted: two subsets are randomly selected and cross mutation is performed on the optimal solutions of the two subsets according to formulas (5) and (6), where rand() represents a random number in [0,1], T_b^x is the optimal solution of subset x and is replaced by the crossed-and-mutated solution T_b^{x'}, and T_b^y is the optimal solution of subset y and is replaced by the crossed-and-mutated solution T_b^{y'}.
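The exact form of formulas (5) and (6) is not given in this text, so the sketch below substitutes a simple arithmetic (blend) crossover between the best solutions of two randomly chosen subsets; it matches the described roles of rand(), T_b^x and T_b^y, but it is only an assumed stand-in, not the patent's own formula.

```python
import random

def cross_mutation(subsets, fit):
    """Illustrative arithmetic crossover of the best solutions of two randomly
    chosen subsets (an assumed stand-in for formulas (5) and (6))."""
    x, y = random.sample(range(len(subsets)), 2)
    best_x = min(subsets[x], key=fit)   # T_b^x
    best_y = min(subsets[y], key=fit)   # T_b^y
    r = random.random()
    new_x = [r * a + (1 - r) * b for a, b in zip(best_x, best_y)]  # replaces T_b^x
    new_y = [r * b + (1 - r) * a for a, b in zip(best_x, best_y)]  # replaces T_b^y
    return new_x, new_y
```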
All the scheduling schemes are then mixed into a new set, the fitness of each scheduling scheme is recalculated, and the current global optimal solution is determined from the descending order of the fitness values. When the global search termination condition is met, i.e., the maximum number of iterations k is reached, the search is stopped and the current global optimal solution is output; otherwise, the steps of subset division, local search and local solution update are repeated.
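Putting the pieces together, one possible outline of the whole search, reusing the sketches above; the threshold test is interpreted as comparing the fitness spread within a subset against its average fitness, and the repair step is an added assumption needed to turn the real-valued updates back into valid node assignments.

```python
import random

def schedule(tasks, nodes, m, d, k):
    """Outline of the dual-strategy search: m random schemes, d subsets,
    k global iterations, reusing the helpers sketched above."""
    # S0/S1: random task-to-node assignments, one list of node indices per scheme
    schemes = [[random.randrange(len(nodes)) for _ in tasks] for _ in range(m)]

    def fit(scheme):                      # fitness of a task-to-node assignment
        return fitness(scheme_loads(scheme, tasks, nodes))

    def repair(vec):                      # map real-valued updates back to node indices
        return [min(max(int(round(v)), 0), len(nodes) - 1) for v in vec]

    global_best = min(schemes, key=fit)
    for _ in range(k):                                        # global search loop
        fits = [fit(s) for s in schemes]
        subsets = partition(schemes, fits, d)                 # S2: round-robin split
        for p, sub in enumerate(subsets):
            sub_fits = [fit(s) for s in sub]
            spread = max(sub_fits) - min(sub_fits)
            if spread > sum(sub_fits) / len(sub_fits):        # threshold test (one interpretation)
                new_worst, new_seconds = mutual_learning_update(subsets, p, global_best, fit)
                candidates = [repair(new_worst)] + [repair(s) for s in new_seconds]
            else:
                candidates = [repair(s) for s in cross_mutation(subsets, fit)]
            sub.sort(key=fit, reverse=True)                   # worst members first
            for i, cand in enumerate(candidates[:len(sub)]):
                if fit(cand) < fit(sub[i]):                   # keep only improvements
                    sub[i] = cand
        schemes = [s for sub in subsets for s in sub]         # S3: remix into one set
        global_best = min(schemes + [global_best], key=fit)
    return global_best
```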
Because the search space of the traditional shuffled frog leaping algorithm is too small, it easily becomes trapped in local optima. However, excellent individuals in other sub-populations can also provide learning directions and information for neighbouring sub-populations, so an inter-group learning strategy is added to the local search strategy, which enlarges the search range of the sub-populations and the search opportunities of the whole population and improves search precision. The mutual learning strategy, however, can only guarantee that new individuals are composed of existing individuals, so the cross mutation strategy is introduced to expand the scope of mutual learning. As shown in fig. 4, the search space of the dual search strategy provided in this embodiment is fully covered, which overcomes the drawback that the search space of the traditional algorithm is too small and, by fully combining the advantages of the mutual learning strategy and the cross mutation strategy, overcomes the tendency to fall into local optima.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that: the technical scheme described in the foregoing embodiments can be modified or some of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method for scheduling a Docker Swarm container, comprising:
S1, randomly assigning execution nodes to each scheduling scheme, and calculating, according to the total resource load value of the nodes, the fitness of each scheduling scheme T_i, i = 1, 2, …, m, in the scheduling scheme set S = (T_1, T_2, …, T_m), the resources including CPU, memory, I/O load, network bandwidth and disk space;
S2, dividing the m scheduling schemes into a plurality of subsets, and updating the subsets according to the policy thresholds of the subsets;
wherein step S2 specifically comprises the following steps:
S201, arranging the m scheduling schemes in descending order of fitness and dividing them into d subsets, each subset comprising v scheduling schemes with m = d×v, the specific division process being that the scheduling scheme with sequence number a + (r−1)·d enters the a-th subset, where a = 1, 2, …, d and r is a positive integer;
S202, updating the subset according to the policy threshold of the subset: judging whether the subset satisfies f(T_b) − f(T_w) > f(T)_A, where f(T_b) is the optimal fitness of the subset, f(T_w) is the worst fitness of the subset, and f(T)_A is the average fitness of the subset; if and only if this condition is satisfied, the worst solution within the subset is updated using the following formulas,
d_s1 = rand() × [rand₂(0,1) × T_b^p + (1 − rand()) × T_g]   (1)
T_n1 = T_w + d_s1   (2)
and the two second-worst solutions within the subset are also updated with the following formulas,
d_s2 = rand() × (T_b^{p+1} − T_{w2,w3})   (3)
T_{n1,n2} = T_{w2,w3} + d_s2   (4)
where d_s1 and d_s2 represent the movement step sizes, rand() represents a random number in [0,1] (rand₂(0,1) likewise denotes a random number in [0,1]), T_b^p and T_b^{p+1} are the optimal solutions of the p-th subset and the (p+1)-th subset respectively, T_w is the worst solution of the p-th subset and is replaced by T_n1 after updating, T_{w2,w3} are the two second-worst solutions of the p-th subset and are replaced by T_{n1,n2} after updating, and T_g is the optimal solution of the scheduling scheme set S;
S3, remixing all the scheduling schemes into a new set, repeating step S1, determining the current global optimal solution according to the fitness of each scheduling scheme, stopping the search and outputting the current global optimal solution when the global search termination condition is met, and otherwise returning to step S2.
2. The method for scheduling a Docker Swarm container according to claim 1, wherein the step S1 is further preceded by:
S0, initializing the scheduling scheme set S = (T_1, T_2, …, T_m), each scheduling scheme including n tasks, i.e., T_i = (I_1, I_2, …, I_n).
3. The Docker Swarm container scheduling method according to claim 1, wherein the total resource load value of a node is calculated as
L = ω_1·x_1 + ω_2·x_2 + ω_3·x_3 + ω_4·x_4 + ω_5·x_5
where ω_i is the weight of resource load i and x_i is the load value of resource i at that node.
4. The Docker Swarm container scheduling method according to claim 1, wherein the fitness of the scheduling scheme T_i is calculated as
f(T_i) = √( (1/N) · Σ_{i=1}^{N} (L_i − L̄)² )
where N is the total number of nodes, L_i is the total resource load value of the i-th node, and L̄ is the average of the total resource load values of all nodes.
5. The Docker Swarm container scheduling method according to claim 1, wherein step S202 further comprises:
when the subset does not satisfy f(T_b) − f(T_w) > f(T)_A, randomly selecting two subsets and performing cross mutation on the optimal solutions of the two subsets according to formulas (5) and (6), where rand() represents a random number in [0,1], T_b^x is the optimal solution of subset x and is replaced by the crossed-and-mutated solution T_b^{x'}, and T_b^y is the optimal solution of subset y and is replaced by the crossed-and-mutated solution T_b^{y'}.
6. The method of claim 1, wherein the global search termination condition is that a maximum number of iterations k is reached.
7. A scheduling system for implementing the Docker Swarm container scheduling method according to any one of claims 1-6, comprising a container node, a fitness calculation unit, an update unit and a global search unit,
the fitness calculating unit is configured to calculate, according to the total resource load value of each container node, the fitness of each scheduling scheme T_i, i = 1, 2, …, m, in the scheduling scheme set S = (T_1, T_2, …, T_m), the resources including CPU, memory, I/O load, network bandwidth and disk space;
the updating unit is used for executing the step S2 according to any one of claims 1 to 6;
and the global searching unit is used for determining and outputting a global optimal solution according to the adaptability of each scheduling scheme.
CN202010295066.3A 2020-04-15 2020-04-15 Docker Swarm container scheduling method and system Active CN111506394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010295066.3A CN111506394B (en) 2020-04-15 2020-04-15 Docker Swarm container scheduling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010295066.3A CN111506394B (en) 2020-04-15 2020-04-15 Docker Swarm container scheduling method and system

Publications (2)

Publication Number Publication Date
CN111506394A CN111506394A (en) 2020-08-07
CN111506394B true CN111506394B (en) 2023-05-05

Family

ID=71867317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010295066.3A Active CN111506394B (en) 2020-04-15 2020-04-15 Docker Swarm container scheduling method and system

Country Status (1)

Country Link
CN (1) CN111506394B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117972069B (en) * 2024-04-01 2024-05-28 南京信人智能科技有限公司 Method for carrying out active dialogue and knowledge base vector search based on artificial intelligence

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227599A (en) * 2016-07-20 2016-12-14 广东工业大学 The method and system of scheduling of resource in a kind of cloud computing system
CN107045455A (en) * 2017-06-19 2017-08-15 华中科技大学 A kind of Docker Swarm cluster resource method for optimizing scheduling based on load estimation
CN108762923A (en) * 2018-04-11 2018-11-06 哈尔滨工程大学 Method using differential evolution algorithm as Docker Swarm scheduling strategies
CN110058924A (en) * 2019-04-23 2019-07-26 东华大学 A kind of container dispatching method of multiple-objection optimization


Also Published As

Publication number Publication date
CN111506394A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN110489229B (en) Multi-target task scheduling method and system
CN110413389B (en) Task scheduling optimization method under resource imbalance Spark environment
CN109962969A (en) The method and apparatus of adaptive cache load balancing for cloud computing storage system
WO2010024027A1 (en) Virtual server system and physical server selection method
CN107992353B (en) Container dynamic migration method and system based on minimum migration volume
CN112087509B (en) Task migration method in edge computing platform
EP1564638A1 (en) A method of reassigning objects to processing units
BRPI0014350B1 (en) workload management in a computing environment
Rajabzadeh et al. Energy-aware framework with Markov chain-based parallel simulated annealing algorithm for dynamic management of virtual machines in cloud data centers
CN108255427B (en) Data storage and dynamic migration method and device
CN111176784B (en) Virtual machine integration method based on extreme learning machine and ant colony system
CN105744006A (en) Particle swarm optimization user request dispatching method facing multi-type service
CN113822456A (en) Service combination optimization deployment method based on deep reinforcement learning in cloud and mist mixed environment
CN113590307B (en) Edge computing node optimal configuration method and device and cloud computing center
CN111147604A (en) Load balancing method for edge calculation of Internet of vehicles
CN111506394B (en) Docker Swarm container scheduling method and system
CN111756654A (en) Large-scale virtual network resource allocation method based on reliability
CN116302389A (en) Task scheduling method based on improved ant colony algorithm
Zhou et al. JPAS: Job-progress-aware flow scheduling for deep learning clusters
CN108650191B (en) Decision method for mapping strategy in virtual network
Ackermann et al. Distributed algorithms for QoS load balancing
CN110175172A (en) Very big two points of groups parallel enumerating method based on sparse bipartite graph
Yuan et al. A DRL-Based Container Placement Scheme with Auxiliary Tasks.
CN113285832B (en) NSGA-II-based power multi-mode network resource optimization allocation method
CN116048773A (en) Distributed collaborative task assignment method and system based on wave function collapse

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant