CN114942830A - Container scheduling method, container scheduling device, storage medium, and electronic apparatus - Google Patents


Info

Publication number
CN114942830A
CN114942830A
Authority
CN
China
Prior art keywords
scheduled
node
container group
container
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210771267.5A
Other languages
Chinese (zh)
Inventor
朱万意
王钤
师春雨
朱元瑞
李德恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority application: CN202210771267.5A
Publication: CN114942830A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure provides a container scheduling method, a container scheduling device, a storage medium, and an electronic device, and relates to the technical field of communications. The container scheduling method first obtains a to-be-scheduled container group list of a target cluster, where the list includes one or more container groups to be scheduled and the target cluster includes a plurality of nodes for running the container groups of the cluster. It then determines, for the container group to be scheduled, the dominant resource ratio of each node and the resource waste degree the container group would cause on each node. Finally, it determines a target node from the nodes based on the dominant resource ratio and the resource waste degree, and schedules the container group to the target node. Because scheduling takes into account both the dominant resource ratio of each node and the resource waste degree on each node, the resource allocation across the nodes of the target cluster is more balanced.

Description

Container scheduling method, container scheduling device, storage medium, and electronic apparatus
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a container scheduling method and apparatus, a storage medium, and an electronic device.
Background
Network element deployment is gradually evolving from the virtual machine (VM) form to the container form, with network elements deployed on existing node resources as Pods (groups of one or more containers encapsulating an application). In the related art, with multiple resource types and multiple users, it is difficult to schedule containers efficiently, which leads to resource waste.
Disclosure of Invention
The disclosure provides a container scheduling method, a container scheduling device, a storage medium, and an electronic device, which alleviate, at least to some extent, the resource-waste problem in the related art.
In a first aspect, an embodiment of the present disclosure provides a container scheduling method, including:
acquiring a to-be-scheduled container group list of a target cluster, wherein the to-be-scheduled container group list comprises one or more container groups to be scheduled, the target cluster comprises a plurality of nodes, and the nodes are used for running the container groups of the target cluster;
determining a dominant resource ratio of each node corresponding to the container group to be scheduled and a resource waste degree of the container group to be scheduled on each node;
and determining a target node from the nodes based on the dominant resource ratio and the resource waste degree, and scheduling the container group to be scheduled to the target node.
In an alternative embodiment of the present disclosure, a node provides at least two resources; the determining the resource waste degree of the container group to be scheduled to each node comprises the following steps:
acquiring historical load data of the container group to be scheduled;
determining the utilization rate of various resources on each node by the container group to be scheduled based on the historical load data of the container group to be scheduled;
and determining the resource waste degree of the container group to be scheduled to each node based on the utilization rate of the container group to be scheduled to each resource on each node.
In an optional embodiment of the present disclosure, determining, based on the historical load data of the to-be-scheduled container group, a usage rate of each resource on each node by the to-be-scheduled container group includes:
determining the demand of various resources of the container group to be scheduled based on the historical load data of the container group to be scheduled;
and determining the ratio of the required quantity of various resources of the container group to be scheduled to the total quantity of various resources provided by the node as the utilization rate of various resources on each node by the container group to be scheduled.
In an optional embodiment of the present disclosure, determining, based on the usage rate of the to-be-scheduled container group for various resources on each node, a resource waste degree of the to-be-scheduled container group for each node includes:
summing the utilization rates of various resources on the nodes by the container group to be scheduled;
determining the ratio of the summation result to the resource variety number provided by the node as the average utilization rate of the container group to be scheduled to the total resource of the node;
and determining the resource waste degree of the to-be-scheduled container group to each node based on the utilization rate of various resources provided by the to-be-scheduled container group to the node and the average utilization rate of the to-be-scheduled container group to the total resources of the node.
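As a concrete illustration of the claim steps above, the following sketch computes the per-resource utilizations, their average, and a waste degree. The text does not fix the exact waste formula, so the mean absolute deviation of utilizations from their average used here is an assumption, as are all names.

```python
def waste_degree(demands: dict, supplies: dict) -> float:
    """Hypothetical waste measure for one node: mean absolute deviation of
    the per-resource utilizations from their average (assumed formula)."""
    # Utilization of each resource: demand / total amount supplied by the node.
    usages = [demands[r] / supplies[r] for r in demands]
    # Average utilization across the kinds of resources the node provides.
    avg = sum(usages) / len(usages)
    # Deviation from the average: 0 when all resources are used evenly.
    return sum(abs(u - avg) for u in usages) / len(usages)
```

Under this assumed formula, a container group demanding (1 CPU, 4 GB) on a node supplying (9 CPU, 18 GB) has utilizations 1/9 and 4/18, giving a waste degree of 1/18; perfectly even utilizations give 0.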
In an optional embodiment of the present disclosure, determining a target node based on the dominant resource ratio and the resource waste degree, and scheduling the group of containers to be scheduled to the target node includes:
selecting a container group to be scheduled with the minimum dominant resource proportion from the container group list to be scheduled, and determining a corresponding node as a preselected node;
under the condition that allocable resources exist in the dominant resources of the preselected node, determining whether the resource waste degree of the preselected node by the container group to be scheduled meets a preset scheduling condition;
and under the condition that the resource waste degree of the to-be-scheduled container group to the preselected node meets the preset scheduling condition, taking the preselected node as a target node, and scheduling the to-be-scheduled container group to the target node.
In an optional embodiment of the present disclosure, the container scheduling method further includes:
under the condition that the application program of the target cluster meets a preset adjusting condition, adjusting the copies of the container group according to the optimal number of the copies; wherein the optimal number of copies is determined from historical load data.
In an alternative embodiment of the present disclosure, the optimal number of copies is determined by the following steps:
acquiring historical load data of each container group in a target cluster; wherein the historical load data comprises load data of at least two resources;
determining, based on the historical load data, the optimal number of copies of the container group for running the application program, to serve as the optimal number of copies corresponding to the application program.
In a second aspect, an embodiment of the present disclosure provides a container scheduling apparatus, including:
the list acquisition module is used for acquiring a to-be-scheduled container group list of the target cluster; the list comprises one or more container groups to be scheduled, and the target cluster comprises a plurality of nodes, wherein the nodes are used for running the container groups of the target cluster.
A resource waste degree determining module, configured to determine a dominant resource proportion of each node corresponding to the to-be-scheduled container group and a resource waste degree of each node of the to-be-scheduled container group;
and the container scheduling module is used for determining a target node from the nodes based on the dominant resource occupation ratio and the resource waste degree, and scheduling the container group to be scheduled to the target node.
In a third aspect, an embodiment of the present disclosure provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the above container scheduling method.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described container scheduling method via execution of executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
the method for scheduling the container comprises the steps of firstly, obtaining a container group list to be scheduled of a target cluster, wherein the container group list to be scheduled comprises one or more container groups to be scheduled, the target cluster comprises a plurality of nodes, and the nodes are used for operating the container groups of the target cluster; secondly, determining the dominant resource proportion of each node corresponding to the container group to be scheduled and the resource waste degree of each node by the container group to be scheduled; finally, determining a target node from the nodes based on the dominant resource ratio and the resource waste degree, and scheduling the container group to be scheduled to the target node; thus, 1) the present disclosure considers various resources comprehensively, and determines the leading resource according to the resource proportion, so that the leading resources corresponding to different nodes may be different for different to-be-scheduled container groups, and even if the leading resources corresponding to different nodes may be different for the same to-be-scheduled container group, the problem of resource waste caused by resource allocation based on a single resource in the related art is overcome to a certain extent; 2) the method includes the steps that the dominant resource occupation ratio of each node corresponding to a to-be-scheduled container group and the resource waste degree of each node of the to-be-scheduled container group are combined, and the to-be-scheduled container group is scheduled; and the resource scheduling is not only carried out depending on the dominant resource occupation ratio, so that the resource allocation of the nodes of the target cluster is more balanced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a network architecture of a container scheduling method in an application scenario in the present exemplary embodiment;
FIG. 2 illustrates a flow chart of a method of container scheduling in the exemplary embodiment;
fig. 3 is a flowchart illustrating a method for scheduling a container according to the present exemplary embodiment;
fig. 4 is a flowchart illustrating a method for determining a resource waste level in a container scheduling method according to the exemplary embodiment;
fig. 5 is a flowchart illustrating a method for determining a resource waste level in a container scheduling method according to the present exemplary embodiment;
fig. 6 is a flowchart illustrating a method for scheduling a group of containers to be scheduled in a container scheduling method according to the present exemplary embodiment;
FIG. 7 is a flow chart illustrating adjusting container group replicas in a container scheduling method in the exemplary embodiment;
fig. 8 is a schematic structural diagram showing a container scheduling apparatus in the present exemplary embodiment;
fig. 9 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, may be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Network element deployment is gradually evolving from the virtual machine (VM) form to the container form, with network elements deployed on existing node resources as Pods (groups of one or more containers encapsulating an application). Inside a Kubernetes cluster, Pod scheduling is performed according to a series of default Kube-Scheduler scheduling policies, which weight the usage of the different resources in the simplest 1:1, balanced manner; this easily causes resource waste.
Secondly, after a data center carrying a cloud network system hosts network elements as Pods, multi-copy network elements are realized with multiple Pod replicas; users often cannot size the service accurately on first deployment and instead set minReplicas (minimum number of copies) and maxReplicas (maximum number of copies) based on experience. When the cluster is under low load, even if the number of running application copies equals minReplicas, the utilization of cloud network resources remains low.
In summary, in the related art, with multiple resource types and multiple users, resource allocation is usually performed on the basis of a single resource. Users differ in their demand for each resource, and an allocation method built around a single resource ignores those differences; this easily leads to the undesirable state in which one resource of a node is heavily used while other kinds of resources are barely used, wasting node resources.
In view of the foregoing problems, the embodiments of the present disclosure provide a container scheduling method, and the following briefly introduces an application environment of the container scheduling method provided by the embodiments of the present disclosure:
referring to fig. 1, a system architecture 100 in an application scenario of the container scheduling method according to the embodiment of the present disclosure includes at least: master101, Node102, resource scheduling unit 103 and Etcd 104.
Master 101 is the control plane of the cluster and is responsible for cluster decisions (management). Master 101 includes: API Server 1011: the sole entry point for resource operations, used to receive user commands and to provide mechanisms such as authentication, authorization, API registration, and discovery; Scheduler 1012: responsible for cluster resource scheduling, scheduling Pods to the corresponding Node according to a preset scheduling strategy; ControllerManager 1013: responsible for maintaining cluster state, such as deployment orchestration, fault detection, automatic scaling, and rolling updates.
Node 102 includes: Kubelet 1021: responsible for maintaining the container life cycle, i.e., creating, updating, and destroying containers by controlling Docker; KubeProxy 1022: responsible for providing service discovery and load balancing inside the cluster; Pod 1023: Kubernetes' smallest control unit; containers all run inside Pods, and a Pod may include one or more containers.
The resource scheduling unit 103 includes: resource balance scheduling module 1031: used to extend the scheduling method; after computing a balance score for each node according to the algorithm, it binds the Pod to the node with the highest balance score; resource scheduling module 1032: used to process historical resource load data through the resource scheduling model to obtain the optimal number of copies, which is provided to the expansion module as a configuration parameter; monitoring module 1033: used to monitor the various resources in the container cluster, periodically collect Pod load data, and store the load data in the database.
Etcd 104: and is responsible for storing information of various resource objects in the cluster. Node102 is the data plane of the cluster and is responsible for providing the operating environment for the container.
The following description takes the resource scheduling unit 103 as the execution subject, with the container scheduling method applied to the resource balance scheduling module 1031. Referring to fig. 2, a container scheduling method provided in the embodiment of the present disclosure includes the following steps 201 to 203:
step 201: and acquiring a list of the container groups to be scheduled of the target cluster.
The list of the container groups to be scheduled comprises one or more container groups to be scheduled, the target cluster comprises a plurality of nodes, and the nodes are used for operating the container groups of the target cluster.
The target cluster is a container cluster, such as a Kubernetes cluster. The target cluster mainly includes two parts: a Master Node and a group of Node nodes (computing nodes). Each Node can run one or more Pods, and one Pod can run one or more containers encapsulating an application program; thus one Pod can run one application program or several, and, of course, one application program may also be run by multiple Pods.
The to-be-scheduled container group list includes one or more container groups to be scheduled. A container group is the basic unit for implementing a service function, for example a Pod in Kubernetes: it represents an independent running instance of an application and consists of one or more containers. In some embodiments, a container group to be scheduled may be understood as a Pod to be scheduled, because the Pod is the smallest control unit of the target cluster (e.g., a Kubernetes cluster). In some embodiments, Pod scheduling inside the Kubernetes cluster is extended on the basis of the Kube-Scheduler's series of default scheduling policies, and the to-be-scheduled container group list may be understood as the list generated when Pods are scheduled under those default policies.
Step 202: and determining the dominant resource occupation ratio of each node corresponding to the container group to be scheduled and the resource waste degree of each node by the container group to be scheduled.
The dominant resource ratio of a node corresponding to a to-be-scheduled container group is determined from the ratios of the container group's demand for each resource to the node's supply of that resource: the resource whose demand-to-supply ratio is the highest is the dominant resource, and that ratio is the dominant resource ratio.
The dominant resource ratio differs with the container group's demands and the node's supplies of the various resources. For example: the to-be-scheduled container group list contains container groups a and b, the target cluster contains nodes A and B, and the resources are CPU and memory. Group a demands (1 CPU, 4 GB) and group b demands (3 CPU, 8 GB); node A supplies (allocable amount) (9 CPU, 18 GB) and node B supplies (7 CPU, 10 GB). The resource ratios of group a are (1/9, 4/18) on node A and (1/7, 4/10) on node B; the resource ratios of group b are (3/9, 8/18) on node A and (3/7, 8/10) on node B. Therefore, for group a, the dominant resource on node A is memory (because 4/18 is greater than 1/9) with dominant resource ratio 4/18, and the dominant resource on node B is memory (because 4/10 is greater than 1/7) with dominant resource ratio 4/10; for group b, the dominant resource on node A is memory (because 8/18 is greater than 3/9) with dominant resource ratio 8/18, and the dominant resource on node B is memory (because 8/10 is greater than 3/7) with dominant resource ratio 8/10.
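A minimal sketch of the dominant-resource computation just illustrated; the function and variable names are ours, and the figures are taken from the worked example above.

```python
def dominant_resource(demand: dict, supply: dict):
    """Return (resource_name, share) for the highest demand/supply ratio."""
    shares = {r: demand[r] / supply[r] for r in demand}
    name = max(shares, key=shares.get)  # resource with the largest share
    return name, shares[name]

# Demands of container groups a and b, supplies of nodes A and B,
# as in the example above.
pods = {"a": {"cpu": 1, "mem": 4}, "b": {"cpu": 3, "mem": 8}}
nodes = {"A": {"cpu": 9, "mem": 18}, "B": {"cpu": 7, "mem": 10}}

for p, demand in pods.items():
    for n, supply in nodes.items():
        res, share = dominant_resource(demand, supply)
        print(f"group {p} on node {n}: dominant={res}, share={share:.3f}")
```

Running the loop reproduces the four dominant-resource ratios of the example (memory in every case).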
The resource waste degree can be represented by a resource waste index: the higher the index, the higher the degree of resource waste; the lower the index, the lower the degree. The index can be determined from the utilization rates of the various resources on the node, so the resource waste index (resource waste degree) also indicates how balanced the allocation of the node's resources is: the lower the index, the more balanced the allocation of the various resources on the node; the higher the index, the more unbalanced the allocation, and in extreme cases the barrel effect may occur.
Since the to-be-scheduled container group list includes one or more container groups to be scheduled, step 202 determines, for every container group in the list, the dominant resource ratio of each node and the resource waste degree on each node.
Step 203: and determining a target node from each node based on the dominant resource proportion and the resource waste degree, and scheduling the container group to be scheduled to the target node.
When the to-be-scheduled container group list contains multiple container groups, the container group with the smallest dominant resource ratio is allocated first. For example: the smallest dominant resource ratio is 4/18, which is the dominant resource ratio of node A corresponding to container group a, so container group a is scheduled first and its corresponding node A is taken as the preselected node.
The resource waste degree is used for evaluating the balance degree of the resource allocation of the target node, and the higher the resource waste degree is, the lower the balance degree of the resource allocation is; the lower the resource waste degree is, the higher the balance degree of resource distribution is; in some embodiments, a threshold may be set according to historical experience, when the resource waste degree of the preselected node is less than or equal to the threshold, the preselected node is determined as the target node, and the group of containers to be scheduled is scheduled to the target node.
In the container scheduling method provided by the embodiment of the disclosure, a to-be-scheduled container group list of a target cluster is first obtained, where the list includes one or more container groups to be scheduled and the target cluster includes a plurality of nodes for running the container groups of the cluster; the dominant resource ratio of each node corresponding to the container group to be scheduled and the resource waste degree of the container group on each node are then determined; finally, a target node is determined from the nodes based on the dominant resource ratio and the resource waste degree, and the container group is scheduled to the target node. Because the scheduling combines the dominant resource ratio of each node with the resource waste degree on each node, the resource allocation across the nodes of the target cluster is more balanced.
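The decision in steps 201 to 203 can be sketched as follows. This is a simplified, per-container-group version under stated assumptions: `dominant_share` and `waste_of` stand in for the computations described above, and the threshold value is purely illustrative.

```python
def pick_target_node(group, nodes, dominant_share, waste_of, threshold=0.3):
    """Preselect nodes in order of increasing dominant-resource share and
    accept the first whose estimated waste degree meets the condition."""
    for node in sorted(nodes, key=lambda n: dominant_share(group, n)):
        if waste_of(group, node) <= threshold:
            return node  # target node found
    return None  # no node satisfies the scheduling condition
```

With the example figures above, container group a would preselect node A (share 4/18 is smaller than 4/10) and be placed there if node A's waste degree stays under the threshold.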
Referring to fig. 3, in an alternative embodiment of the present disclosure, a node provides at least two resources, and the step 202 determines the resource waste degree of each node by a to-be-scheduled container group, including the following steps 301 to 303:
step 301: and acquiring historical load data of the container group to be scheduled.
The historical load data may be load data in any historical period of time. The present disclosure is not limited to the specific category of the load data, and may include, for example: CPU usage data (e.g., CPU occupancy), memory usage data (e.g., memory occupancy), disk usage data (e.g., disk occupancy), network bandwidth usage data (e.g., bandwidth usage).
The historical load data can be obtained by periodically obtaining monitoring point data; such as: for each Pod, monitoring point data is acquired every minute, so that load data of each Pod is obtained.
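The periodic collection just described might look like the following sketch; the reader function, the store, and the interval handling are all assumptions, with an in-memory dict standing in for the database mentioned earlier.

```python
import time
from collections import defaultdict

def collect_loads(read_pod_load, pod_names, store=None, interval_s=60, rounds=1):
    """Sample each pod's load once per interval and append it to the store.
    read_pod_load(pod) -> dict of per-resource load values (assumed helper)."""
    if store is None:
        store = defaultdict(list)
    for i in range(rounds):
        for pod in pod_names:
            store[pod].append(read_pod_load(pod))
        if i < rounds - 1:  # sleep between samples, not after the last one
            time.sleep(interval_s)
    return store
```

With `interval_s=60`, this matches the once-per-minute sampling given as an example in the text.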
Step 302: and determining the utilization rate of various resources on each node by the container group to be scheduled based on the historical load data of the container group to be scheduled.
The demand of the to-be-scheduled container group for various resources can be determined according to its historical load data; in some embodiments, the demand may be determined according to the average daily load peak, or according to the maximum daily load peak, which is not limited herein.
The utilization rate of various resources on the node can be determined by the demand of the container group to be scheduled to the various resources and the total amount of the various resources provided by the node.
Step 303: and determining the resource waste degree of the to-be-scheduled container group to each node based on the utilization rate of the to-be-scheduled container group to each resource on each node.
The resource utilization rate is in inverse proportion to the resource waste degree; the higher the resource utilization rate is, the lower the resource waste degree is; the lower the resource utilization, the higher the resource waste.
Based on the method of FIG. 3, the resource waste degree of each node by the container group to be scheduled can be determined based on the historical load data; the historical load data of the to-be-scheduled container group reflects the actual working condition of the to-be-scheduled container group, the resource waste degree of each node of the to-be-scheduled container group is calculated based on the historical load data, the calculation result of the resource waste degree can be adapted to the actual situation, and therefore the method is more accurate.
Referring to fig. 4, in an optional embodiment of the present disclosure, the step 302 of determining the usage rate of each resource on each node by the group of containers to be scheduled based on the historical load data of the group of containers to be scheduled includes the following steps 401 and 402:
step 401: and determining the demand of various resources of the container group to be scheduled based on the historical load data of the container group to be scheduled.
The required amount of each resource of the to-be-scheduled container group, for example: with a kinds of resources, there are a demand values. In some embodiments, the demand for each resource of the to-be-scheduled container group may be determined from the average daily load peak of the historical load data; here, the average daily load peak may be obtained by averaging the daily load peaks; a daily load peak may directly adopt the highest value in that day's load data, or the highest X values in the day's load data may be removed and the highest of the remaining values taken as the daily load peak, which is not limited herein.
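The daily-peak variant described above (drop the highest X points of each day, then average the daily peaks) can be sketched as follows; the function names and the choice of dropping the top 2 points are illustrative:

```python
def daily_peak(samples, drop_top=2):
    """Daily load peak: drop the drop_top highest samples of the day
    and take the highest of the remaining samples."""
    kept = sorted(samples)[:len(samples) - drop_top] if drop_top else sorted(samples)
    return kept[-1]

def demand(daily_samples, drop_top=2):
    """Demand for one resource = average of the per-day peaks."""
    peaks = [daily_peak(day, drop_top) for day in daily_samples]
    return sum(peaks) / len(peaks)

day1 = [0.2, 0.9, 0.4, 0.95, 0.5]   # 0.95 and 0.9 dropped -> daily peak 0.5
day2 = [0.3, 0.6, 0.8, 0.85, 0.4]   # 0.85 and 0.8 dropped -> daily peak 0.6
print(demand([day1, day2]))          # average of the two daily peaks ≈ 0.55
```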
Step 402: and determining the ratio of the required quantity of various resources of the container group to be scheduled to the total quantity of various resources provided by the nodes as the utilization rate of various resources on each node by the container group to be scheduled.
Here, the total amount of each resource provided by the node, for example: if the node provides a kinds of resources, there are a total-amount values.
After the required amount of each resource of the to-be-scheduled container group is determined in step 401, for each resource and each node, the ratio of the required amount of the resource by the to-be-scheduled container group to the total amount of the resource provided by the node is determined as the usage rate of the resource on the node by the to-be-scheduled container group; for example: there are a kinds of resources and b nodes, the demand of the to-be-scheduled container group for the j-th resource on the i-th node is g_ij, and the total amount of the j-th resource provided by the i-th node is h_ij; then the usage rate of the j-th resource on the i-th node is

u_ij = g_ij / h_ij
For example: the demand of the to-be-scheduled container group for resource j (for example, CPU) on node i is 3, and the total amount of resource j provided by node i is 9; then, the usage rate of resource j on node i by the to-be-scheduled container group is 3/9 = 1/3. According to the above process, the usage rate of the to-be-scheduled container group for each resource on each node can be obtained.
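The usage-rate computation (demand divided by the node's total) can be sketched as follows; the nested-list data shape is a hypothetical choice:

```python
def usage_rates(demands, capacities):
    """u[i][j] = g_ij / h_ij.

    demands[i][j]:    demand of the container group for resource j on node i (g_ij)
    capacities[i][j]: total amount of resource j provided by node i (h_ij)
    """
    return [[g / h for g, h in zip(row_g, row_h)]
            for row_g, row_h in zip(demands, capacities)]

# Worked example from the text: demand 3 against capacity 9 gives usage 1/3.
g = [[3, 4]]  # one node, two resources (e.g., CPU and memory; illustrative)
h = [[9, 8]]
print(usage_rates(g, h))  # → [[0.3333333333333333, 0.5]]
```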
Based on the method of fig. 4, the utilization rate of various resources on each node by the group of containers to be scheduled can be determined based on historical load data; the utilization rate of various resources on each node is determined by the container group to be scheduled based on the historical load data, so that the result is more accurate; and then, the accurate resource waste degree of the container group to be scheduled to each node can be obtained.
Referring to fig. 5, in an optional embodiment of the present disclosure, the step 303 of determining the resource waste degree of the to-be-scheduled container group to each node based on the usage rate of the to-be-scheduled container group for each resource on each node includes the following steps 501 to 503:
step 501: and adding the utilization rates of various resources on the nodes to the group of containers to be scheduled.
For node i, the above step can be expressed by the formula

Σ_{j=1}^{a} u_ij

that is, the usage rates of each resource on node i are added; where a represents the number of resource kinds.
Step 502: and determining the ratio of the summation result to the resource variety number provided by the node as the average utilization rate of the total resources of the node by the container group to be scheduled.
For node i, the average usage rate of the total resources of the node can be calculated by the following formula (1):

avg_i = (1/a) Σ_{j=1}^{a} u_ij    (1)

where j denotes the j-th resource.
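The per-node average of formula (1) can be sketched as:

```python
def average_usage(node_usage):
    """Average usage rate of a node's total resources: the sum of the
    per-resource usage rates divided by the number a of resource kinds."""
    return sum(node_usage) / len(node_usage)

# Continuing the worked example: usage rates 1/3 (CPU) and 1/2 (memory).
print(average_usage([1 / 3, 0.5]))  # (1/3 + 1/2) / 2 ≈ 0.4167
```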
Step 503: and determining the resource waste degree of the to-be-scheduled container group to each node based on the utilization rate of various resources provided by the to-be-scheduled container group to the node and the average utilization rate of the to-be-scheduled container group to the total resources of the node.
The resource waste degree can be determined by a resource waste index; the resource waste index w can be calculated by the following formula (2):

w = (1/b) Σ_{i=1}^{b} Σ_{j=1}^{a} (u_ij − avg_i)^2    (2)

where u_ij is the usage rate of the j-th resource on the i-th node, avg_i is the average usage rate of the total resources of the i-th node by the to-be-scheduled container group, and b represents the number of nodes of the target cluster.
That is, firstly, an error between the usage rate of each resource on a node and the average usage rate of the total resources of the node (namely u_ij − avg_i) is determined based on the usage rate of the to-be-scheduled container group for the various resources provided by the node and the average usage rate of the to-be-scheduled container group for the total resources of the node; secondly, a resource waste index (namely w) of the to-be-scheduled container group to the nodes is determined based on the error and the number of nodes in the target cluster; finally, the resource waste degree of each node is determined based on the resource waste index; carrying out this process for each node yields the resource waste degree of the to-be-scheduled container group to each node.
The resource waste degree is in direct proportion to the resource waste index and reflects the balance degree of resource allocation: the larger the resource waste index, the larger the resource waste degree and the more unbalanced the resource allocation; the smaller the resource waste index, the smaller the resource waste degree and the more balanced the resource allocation.
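A minimal sketch of the resource waste index, assuming formula (2) is the squared deviation of each per-resource usage rate from its node's average usage rate, summed and averaged over the b nodes (the published formula is an image, so this reading is an assumption):

```python
def waste_index(usage):
    """usage[i][j]: usage rate of resource j on node i by the container group.
    Lower w means more evenly used resources, i.e. less waste."""
    b = len(usage)
    w = 0.0
    for node in usage:
        avg = sum(node) / len(node)               # avg_i from formula (1)
        w += sum((u - avg) ** 2 for u in node)    # squared errors u_ij - avg_i
    return w / b

print(waste_index([[0.5, 0.5]]))  # perfectly balanced node → 0.0
print(waste_index([[1.0, 0.0]]))  # one resource saturated, one idle → 0.5
```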
Based on the method of fig. 5, the resource waste degree of each node by the group of containers to be scheduled can be determined to guide the scheduling process, so that the resource allocation on the nodes after scheduling is more balanced.
Referring to fig. 6, in an optional embodiment of the present disclosure, the step 203 determines a target node among the nodes based on the dominant resource ratio and the resource waste degree, and schedules the to-be-scheduled container group to the target node, including the following steps 601 to 603:
step 601: and selecting the container group to be scheduled with the minimum dominant resource occupation ratio from the container group list to be scheduled, and determining the corresponding node as a preselected node.
Before step 601, the container groups to be scheduled in the to-be-scheduled container group list may be sorted by dominant resource ratio, and the container group with the smallest dominant resource ratio is then selected from the sorted list; for example: if the dominant resource ratio D_s1 of the 1st node corresponding to the S-th Pod to be scheduled is the smallest, the S-th Pod to be scheduled is taken as the container group to be scheduled, and the 1st node is taken as the preselected node. In some embodiments, when two or more container groups to be scheduled share the smallest dominant resource ratio, the one with the smallest second dominant resource ratio is selected.
Step 602: and under the condition that the distributable resources exist in the leading resources of the preselected node, determining whether the resource waste degree of the preselected node by the container group to be scheduled meets the preset scheduling condition.
Whether the dominant resource has allocable resources can be determined from the total amount of the dominant resource and the allocated amount of the dominant resource, that is, from the difference between the two (the remaining amount of the dominant resource): if the difference is greater than 0, the dominant resource has allocable resources; if the difference is less than or equal to 0, the dominant resource has no allocable resources.
The preset scheduling condition may be set by choosing a preset threshold according to experience and using the magnitude relationship between the resource waste degree and the preset threshold as the preset scheduling condition.
Step 603: and under the condition that the resource waste degree of the pre-selection nodes by the container group to be scheduled meets the preset scheduling conditions, taking the pre-selection nodes as target nodes, and scheduling the container group to be scheduled to the target nodes.
Since the resource waste degree is inversely proportional to the balance degree of resource allocation, when the resource waste degree of the preselected node is smaller than the preset threshold, the preselected node is considered to satisfy the preset scheduling condition (indicating that the resource allocation is relatively balanced); when the resource waste degree of the preselected node is greater than the preset threshold, the preselected node is judged not to satisfy the preset scheduling condition. In some embodiments, when the resource waste degree of the preselected node does not satisfy the scheduling condition, the method may return to step 202 to perform scheduling with the second dominant resource.
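Steps 601–603 can be sketched as follows; the data shapes, field names, and example threshold are hypothetical:

```python
def schedule(pods, nodes, waste_of, threshold):
    """One scheduling decision over the pending list.

    pods:      list of dicts with 'name', 'dominant_ratio', and 'node'
               (the node corresponding to the dominant resource ratio)
    nodes:     dict node id -> {'total': ..., 'allocated': ...} for the
               node's dominant resource
    waste_of:  waste_of(pod, node_id) -> resource waste degree of the placement
    threshold: empirical preset threshold for the scheduling condition
    Returns (pod name, node id) when scheduled, else None.
    """
    # Step 601: pick the Pod with the smallest dominant resource ratio;
    # its corresponding node becomes the preselected node.
    pod = min(pods, key=lambda p: p["dominant_ratio"])
    node_id = pod["node"]
    node = nodes[node_id]
    # Step 602: the dominant resource must still have allocable capacity.
    if node["total"] - node["allocated"] <= 0:
        return None
    # Step 603: schedule only if the waste degree meets the preset condition.
    if waste_of(pod, node_id) < threshold:
        return (pod["name"], node_id)
    return None

pods = [{"name": "pod-s", "dominant_ratio": 0.2, "node": 1},
        {"name": "pod-t", "dominant_ratio": 0.5, "node": 2}]
nodes = {1: {"total": 9, "allocated": 6}, 2: {"total": 8, "allocated": 8}}
print(schedule(pods, nodes, lambda p, n: 0.05, threshold=0.1))  # → ('pod-s', 1)
```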
Based on the method of fig. 6, each to-be-scheduled container group in the to-be-scheduled container group list can be scheduled to a corresponding target node, so that resource allocation of the target cluster node is more balanced.
Furthermore, after the above method, the following steps may be further included:
and judging whether the copy number of the container group to be scheduled on the target node reaches the optimal copy number.
And deleting the container group to be scheduled from the container group list to be scheduled under the condition that the copy number of the container group to be scheduled reaches the optimal copy number.
The optimal copy number can be determined based on historical load data, or can be a value set by default through a Deployment (a workload controller used for managing container groups in Kubernetes); generally, the Deployment's default value is set by the user based on experience.
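The pruning step described above (remove a container group from the pending list once its scheduled copy number reaches the optimal copy number) can be sketched as; names and data shapes are illustrative:

```python
def prune_scheduled(pending, replica_count, optimal):
    """Keep only the container groups whose current copy number is still
    below their optimal copy number.

    pending:       list of container group names awaiting scheduling
    replica_count: dict group name -> copies already scheduled
    optimal:       dict group name -> optimal copy number
    """
    return [g for g in pending if replica_count[g] < optimal[g]]

# "web" has reached its optimal 3 copies and is removed; "api" still needs one.
print(prune_scheduled(["web", "api"],
                      {"web": 3, "api": 1},
                      {"web": 3, "api": 2}))  # → ['api']
```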
In an optional embodiment of the present disclosure, the container scheduling method further includes the following steps:
and under the condition that the application program of the target cluster meets the preset adjusting condition, adjusting the copies of the container group according to the optimal copy number.
Wherein the optimal number of copies is determined from historical load data.
A container for encapsulating an application runs in a Pod, and generally, one Pod can run one application or multiple applications; of course, one application may be run by multiple pods.
Whether Pod or container, it is essential to serve the application. When the utilization rate of the application program to various resources is high, the capacity can be dynamically expanded to reduce the pressure of each Pod; when the utilization rate of various resources by the application program is low, the dynamic capacity reduction can be carried out to reduce the resource waste.
The scheduling method provided by the present disclosure mainly aims to reduce resource waste, and therefore the preset adjustment condition can be understood as a condition for triggering a scale-in operation; for example: the application program is in a low-load state, and the number of copies is greater than a preset value.
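A minimal sketch of such a trigger predicate; the threshold values and parameter names are illustrative assumptions:

```python
def should_scale_in(load, replicas, low_load=0.3, min_replicas=2):
    """Preset adjustment condition from the example above: the application
    is in a low-load state AND has more copies than a preset value."""
    return load < low_load and replicas > min_replicas

print(should_scale_in(0.1, 5))  # low load, many copies → True
print(should_scale_in(0.8, 5))  # high load → False (do not scale in)
```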
This step may be before step 201 or after step 203, and is not limited herein.
Referring to fig. 7, in an alternative embodiment of the present disclosure, the adjusting the copies of the container group according to the optimal number of copies in the case that the application program of the target cluster meets the preset adjustment condition includes the following steps 701 and 702:
step 701: and acquiring historical load data of each container group in the target cluster.
Wherein the historical load data comprises load data of at least two resources.
The historical load data includes: CPU data, memory data, Disk data, etc.
The historical load data may be obtained by a monitoring module, such as: for each monitoring point, load data is acquired based on a preset interval (e.g., one minute).
Step 702: and determining the optimal copy number of the container group for running the application program based on the historical load data to serve as the optimal copy number corresponding to the application program.
The optimal number of copies can be calculated by the following formula (3):

N = Roundup( max( P_CPU / D, P_memory / E, P_disk / F ) )    (3)

wherein D, E and F are preset values, 0 < D/E/F < 1, and Roundup represents rounding up; for each resource,

P = Σ_{pod=1}^{m} p_pod

m is the total number of Pods; p_pod is obtained by removing the highest X data points from the load data of a single Pod for each day and taking the highest of the remaining data points as p_pod.
In some embodiments, a resource scheduling model may also be trained to perform the calculation of the optimal number of copies in the above steps 701 and 702, which is not limited herein.
Based on the method of FIG. 7, the optimal number of copies can be determined based on historical data, the determined optimal number of copies is more accurate, and the problem that the optimal number of copies determined by a user according to experience in the related art is inaccurate is solved.
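Steps 701–702 can be sketched as follows under one plausible reading of formula (3) (the published formula is an image; this sketch assumes per-resource target utilizations — the preset D/E/F values — and sums the per-Pod daily peaks p_pod, with X = 2 points dropped per day):

```python
import math

def robust_peak(day_samples, drop_top=2):
    """p_pod: drop the highest drop_top samples of the day, then take
    the highest of the remaining samples."""
    kept = sorted(day_samples)[:len(day_samples) - drop_top]
    return kept[-1]

def optimal_copies(per_pod_days, targets):
    """For each resource, total peak demand = sum of per-Pod peaks p_pod
    over the m Pods; divide by that resource's target utilization and
    round up (Roundup); take the max across resources.

    per_pod_days: {resource: [day-sample list per Pod]}  -- hypothetical shape
    targets:      {resource: preset value in (0, 1)}      -- the D/E/F values
    """
    best = 1
    for res, pods in per_pod_days.items():
        total = sum(robust_peak(day) for day in pods)
        best = max(best, math.ceil(total / targets[res]))
    return best

cpu = [[0.2, 0.9, 0.95, 0.5, 0.4],   # Pod 1: peak 0.5 after dropping top 2
       [0.3, 0.85, 0.8, 0.6, 0.4]]   # Pod 2: peak 0.6 after dropping top 2
print(optimal_copies({"cpu": cpu}, {"cpu": 0.5}))  # Roundup(1.1 / 0.5) = 3
```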
Referring to fig. 8, in order to implement the above container scheduling method, an embodiment of the present disclosure provides a container scheduling apparatus 800, which is applied to the above system architecture 800. Fig. 8 shows a schematic architecture diagram of a container scheduling apparatus 800, the container scheduling apparatus 800 comprising: a list obtaining module 801, a resource waste degree determining module 802, and a container scheduling module 803, wherein:
a list obtaining module 801, configured to obtain a list of to-be-scheduled container groups of a target cluster;
the list of the container groups to be scheduled comprises one or more container groups to be scheduled, the target cluster comprises a plurality of nodes, and the nodes are used for operating the container groups of the target cluster.
A resource waste degree determining module 802, configured to determine a dominant resource proportion of each node corresponding to the to-be-scheduled container group and a resource waste degree of each node of the to-be-scheduled container group;
and the container scheduling module 803 is configured to determine a target node from the nodes based on the dominant resource proportion and the resource waste degree, and schedule the group of containers to be scheduled to the target node.
In an optional embodiment, the node provides at least two resources, and the resource waste level determining module 802 is configured to obtain historical load data of the to-be-scheduled container group; determining the utilization rate of various resources on each node by the container group to be scheduled based on the historical load data of the container group to be scheduled; and determining the resource waste degree of the to-be-scheduled container group to each node based on the utilization rate of the to-be-scheduled container group to each resource on each node.
In an optional embodiment, the resource waste degree determining module 802 is configured to determine the required amount of each resource of the container group to be scheduled based on the historical load data of the container group to be scheduled; and determining the ratio of the required quantity of various resources of the container group to be scheduled to the total quantity of various resources provided by the nodes as the utilization rate of various resources on each node by the container group to be scheduled.
In an optional embodiment, the resource waste degree determining module 802 is configured to sum usage rates of various resources on a node for a group of containers to be scheduled; determining the ratio of the summation result to the resource variety number provided by the node as the average utilization rate of the total resources of the node by the container group to be scheduled; and determining the resource waste degree of the to-be-scheduled container group to each node based on the utilization rate of various resources provided by the to-be-scheduled container group to the node and the average utilization rate of the to-be-scheduled container group to the total resources of the node.
In an optional embodiment, the container scheduling module 803 is configured to select a container group to be scheduled with the smallest dominant resource proportion from the container group list to be scheduled, and determine a corresponding node as a preselected node; under the condition that distributable resources exist in the leading resources of the preselected node, determining whether the resource waste degree of the preselected node by the container group to be scheduled meets a preset scheduling condition; and under the condition that the resource waste degree of the to-be-scheduled container group to the preselected node meets the preset scheduling condition, taking the preselected node as a target node, and scheduling the to-be-scheduled container group to the target node.
In an optional embodiment, the container scheduling apparatus 800 further includes a copy determining module, configured to adjust the copies of the container group according to the optimal number of copies when the application program of the target cluster meets a preset adjustment condition; wherein the optimal number of copies is determined from historical load data.
In an optional embodiment, the duplicate determination module is configured to obtain historical load data of each container group in the target cluster; wherein the historical load data at least comprises load data of two resources;
and determining the optimal number of copies of the container group for running the application program based on the historical load data to serve as the optimal number of copies corresponding to the application program.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product, including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device. In one embodiment, the program product may be embodied as a portable compact disc read only memory (CD-ROM) and include program code, and may be executed on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider). In the embodiments of the present disclosure, the program code stored in the computer readable storage medium may implement any of the steps of the above container scheduling method when executed.
Exemplary embodiments of the present disclosure also provide an electronic device. In general, the electronic device may include a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the above-described container scheduling method via execution of the executable instructions.
The following takes the electronic apparatus 900 in fig. 9 as an example, and the configuration of the electronic apparatus is exemplarily described.
As shown in fig. 9, the electronic device 900 is embodied in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to: at least one processing unit 910, at least one memory unit 920, and a bus 930 that couples various system components including the memory unit 920 and the processing unit 910.
Where the storage unit stores program code, which may be executed by the processing unit 910, to cause the processing unit 910 to perform the steps according to various exemplary embodiments of the present invention described in the above section "exemplary methods" of the present specification. For example, processing unit 910 may perform method steps, etc., as shown in fig. 2.
The storage unit 920 may include volatile memory units such as a random access memory unit (RAM) 921 and/or a cache memory unit 922, and may further include a read-only memory unit (ROM) 923.
Storage unit 920 may also include a program/utility 924 having a set (at least one) of program modules 925, such program modules 925 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 930 may include a data bus, an address bus, and a control bus.
The electronic device 900 may also communicate with one or more external devices 1000 (e.g., keyboard, pointing device, bluetooth device, etc.), which may be through the I/O interface 940. The electronic device 900 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) through a network adapter 950. As shown, the network adapter 950 communicates with the other modules of the electronic device 900 over a bus 930. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In the embodiment of the present disclosure, the program code stored in the terminal device may implement any one of the steps in the container scheduling method as described above when executed.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module" or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (10)

1. A method for scheduling containers, comprising:
acquiring a to-be-scheduled container group list of a target cluster, wherein the to-be-scheduled container group list comprises one or more to-be-scheduled container groups, the target cluster comprises a plurality of nodes, and the nodes are used for operating the container groups of the target cluster;
determining the dominant resource occupation ratio of each node corresponding to the container group to be scheduled and the resource waste degree of each node by the container group to be scheduled;
and determining a target node from the nodes based on the dominant resource ratio and the resource waste degree, and scheduling the container group to be scheduled to the target node.
2. The method according to claim 1, wherein the node provides at least two resources; the determining of the resource waste degree of the to-be-scheduled container group to each node comprises the following steps:
acquiring historical load data of the container group to be scheduled;
determining the utilization rate of various resources on each node by the container group to be scheduled based on the historical load data of the container group to be scheduled;
and determining the resource waste degree of the container group to be scheduled to each node based on the utilization rate of the container group to be scheduled to each resource on each node.
3. The method according to claim 2, wherein the determining the usage rate of the group of containers to be scheduled for various resources on each node based on the historical load data of the group of containers to be scheduled comprises:
determining the demand of various resources of the container group to be scheduled based on the historical load data of the container group to be scheduled;
and determining the ratio of the required quantity of various resources of the container group to be scheduled to the total quantity of various resources provided by the node as the utilization rate of various resources on each node by the container group to be scheduled.
4. The method according to claim 2, wherein determining the resource waste degree of the container group to be scheduled with respect to each node based on the usage rates comprises:
summing the usage rates of the container group to be scheduled for the resource types on the node;
determining the ratio of the sum to the number of resource types provided by the node as the average usage rate of the container group to be scheduled over the node's total resources;
and determining the resource waste degree of the container group to be scheduled with respect to each node based on the usage rate for each type of resource provided by the node and the average usage rate.
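A sketch of claim 4 follows. The claim fixes the average usage rate as the sum of the per-resource rates divided by the number of resource types, but does not specify how the per-resource rates and the average are combined into a waste degree; the mean absolute deviation used below is an assumption, chosen because uneven usage across resource types leaves capacity stranded on the node.

```python
def resource_waste(rates):
    """Resource waste degree of a container group on one node (claim 4 sketch).

    rates: dict mapping resource type -> usage rate of the group on this node.
    The average usage rate follows the claim exactly; the combination step
    (mean absolute deviation from that average) is an assumed choice.
    """
    avg = sum(rates.values()) / len(rates)  # claim 4: sum / number of resource types
    return sum(abs(r - avg) for r in rates.values()) / len(rates)
```

Under this choice, perfectly balanced usage (all rates equal) yields a waste degree of 0, and skewed usage such as 0.8 CPU vs. 0.2 memory yields 0.3.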
5. The container scheduling method according to claim 1, wherein determining a target node based on the dominant resource ratio and the resource waste degree and scheduling the container group to be scheduled to the target node comprises:
selecting, from the list of container groups to be scheduled, the container group with the smallest dominant resource ratio, and determining its corresponding node as a preselected node;
when the dominant resource of the preselected node still has allocatable capacity, determining whether the resource waste degree of the container group to be scheduled with respect to the preselected node satisfies a preset scheduling condition;
and when the preset scheduling condition is satisfied, taking the preselected node as the target node and scheduling the container group to be scheduled to the target node.
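The selection logic of claim 5 can be sketched as below. The dictionary keys (`group`, `dominant_ratio`, `node`, `waste`), the allocatable-capacity representation, and the concrete scheduling condition (waste degree not exceeding a threshold) are all assumptions; the claim only requires "a preset scheduling condition".

```python
def pick_target(pending, allocatable, threshold=0.3):
    """Claim 5 sketch: pick the pending group with the smallest dominant
    resource ratio, check its preselected node, and schedule only if the
    waste degree passes the preset condition (here: waste <= threshold).

    pending:     list of dicts with hypothetical keys
                 'group', 'dominant_ratio', 'node', 'waste'
    allocatable: dict mapping node -> remaining amount of its dominant resource
    Returns (group, node) if the group is scheduled, otherwise None.
    """
    best = min(pending, key=lambda p: p["dominant_ratio"])  # smallest dominant ratio
    node = best["node"]  # the preselected node for this group
    if allocatable.get(node, 0) > 0 and best["waste"] <= threshold:
        return best["group"], node
    return None
```

Returning `None` when the condition fails leaves room for the scheduler to retry with another node or defer the group, which the claim does not specify.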
6. The method of claim 1, further comprising:
adjusting the number of replicas of the container group according to an optimal replica count when the application of the target cluster satisfies a preset adjustment condition; wherein the optimal replica count is determined from historical load data.
7. The container scheduling method according to claim 6, wherein the optimal replica count is determined by:
acquiring historical load data of each container group in the target cluster, the historical load data comprising load data for at least two resources;
and determining, based on the historical load data, the optimal number of replicas of the container group running the application, as the optimal replica count for that application.
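A sketch of the replica-count derivation in claim 7 follows. The claim only states that the count is determined from historical load data covering at least two resources; sizing replicas so that the peak demand of the most constraining resource is covered is an assumption, as are the names `historical_load` and `per_replica_capacity`.

```python
import math

def optimal_replicas(historical_load, per_replica_capacity):
    """Claim 7 sketch: derive an optimal replica count from historical load.

    historical_load:      dict resource -> samples of the application's total load
    per_replica_capacity: dict resource -> capacity one replica provides
    Returns the replica count covering the peak demand of every resource type
    (assumed interpretation; the claim does not fix the sizing rule).
    """
    need = 1
    for resource, samples in historical_load.items():
        need = max(need, math.ceil(max(samples) / per_replica_capacity[resource]))
    return need
```

With CPU peaks of 7 cores at 2 cores per replica and memory peaks of 20 GiB at 8 GiB per replica, CPU is the constraining resource and the sketch yields 4 replicas.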
8. A container scheduling apparatus, comprising:
a list acquisition module configured to acquire a list of container groups to be scheduled of a target cluster, wherein the list comprises one or more container groups to be scheduled, the target cluster comprises a plurality of nodes, and the nodes are configured to run the container groups of the target cluster;
a resource waste degree determining module configured to determine the dominant resource ratio of the container group to be scheduled corresponding to each node, and the resource waste degree of the container group to be scheduled with respect to each node;
and a container scheduling module configured to determine a target node from among the nodes based on the dominant resource ratio and the resource waste degree, and to schedule the container group to be scheduled to the target node.
9. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-7 via execution of the executable instructions.
CN202210771267.5A 2022-06-30 2022-06-30 Container scheduling method, container scheduling device, storage medium, and electronic apparatus Pending CN114942830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210771267.5A CN114942830A (en) 2022-06-30 2022-06-30 Container scheduling method, container scheduling device, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN114942830A true CN114942830A (en) 2022-08-26

Family

ID=82910574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210771267.5A Pending CN114942830A (en) 2022-06-30 2022-06-30 Container scheduling method, container scheduling device, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN114942830A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941495A (en) * 2019-12-10 2020-03-31 广西大学 Container collaborative arrangement method based on graph coloring
CN111522667A (en) * 2020-04-27 2020-08-11 中国地质大学(武汉) Resource scheduling method based on mirror image existence mechanism scoring strategy in container cloud environment
WO2021063339A1 (en) * 2019-09-30 2021-04-08 星环信息科技(上海)股份有限公司 Cluster resource scheduling method, apparatus, device and storage medium
CN112988398A (en) * 2021-04-26 2021-06-18 北京邮电大学 Micro-service dynamic scaling and migration method and device
CN113342477A (en) * 2021-07-08 2021-09-03 河南星环众志信息科技有限公司 Container group deployment method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110727512B (en) Cluster resource scheduling method, device, equipment and storage medium
CN110221915B (en) Node scheduling method and device
CN113867959A (en) Training task resource scheduling method, device, equipment and medium
CN110636388A (en) Service request distribution method, system, electronic equipment and storage medium
CN113110914A (en) Internet of things platform construction method based on micro-service architecture
Delavar et al. A synthetic heuristic algorithm for independent task scheduling in cloud systems
CN113467939A (en) Capacity management method, device, platform and storage medium
CN103248622B (en) A kind of Online Video QoS guarantee method of automatic telescopic and system
CN112214288B (en) Pod scheduling method, device, equipment and medium based on Kubernetes cluster
CN116467082A (en) Big data-based resource allocation method and system
US8819239B2 (en) Distributed resource management systems and methods for resource management thereof
CN111796933A (en) Resource scheduling method, device, storage medium and electronic equipment
CN114237894A (en) Container scheduling method, device, equipment and readable storage medium
CN112163734B (en) Cloud platform-based setting computing resource dynamic scheduling method and device
CN109783236A (en) Method and apparatus for output information
CN116647560A (en) Method, device, equipment and medium for coordinated optimization control of Internet of things computer clusters
CN114844791B (en) Cloud service automatic management and distribution method and system based on big data and storage medium
CN114942830A (en) Container scheduling method, container scheduling device, storage medium, and electronic apparatus
CN114090201A (en) Resource scheduling method, device, equipment and storage medium
CN114896070A (en) GPU resource allocation method for deep learning task
CN112291326B (en) Load balancing method, load balancing device, storage medium and electronic equipment
CN114253663A (en) Virtual machine resource scheduling method and device
CN111327663A (en) Bastion machine distribution method and equipment
CN117311999B (en) Resource scheduling method, storage medium and electronic equipment of service cluster
US20230354101A1 (en) Resource allocation device, resource allocation method, and control circuit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination