CN116244085A - Kubernetes cluster container group scheduling method, device and medium - Google Patents
- Publication number
- CN116244085A (application number CN202310497173.8A)
- Authority
- CN
- China
- Prior art keywords
- node
- disk
- container group
- busyness
- scheduling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to a Kubernetes cluster container group scheduling method, device and medium. The method comprises the following steps: judging whether disk IO busyness is to be used as a scheduling condition; if so, obtaining the disk index information of each node and calculating each node's disk IO busyness; grading each node based on its disk IO busyness and assigning a corresponding score to each node; determining the target node for container group scheduling from the node scores; and scheduling the container group to the target node, where it is created and started. The invention extends the original Kubernetes scheduling policy with disk-aware scheduling, taking disk IO busyness as a container scheduling condition. This makes container group scheduling in a Kubernetes cluster controllable, helps schedule container groups to the most suitable nodes, keeps the Kubernetes cluster in a good state, and makes resource utilization more reasonable.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a method, a device and a medium for scheduling a Kubernetes cluster container group.
Background
With the rise of microservices and container-centric cloud-native technology, more and more fields have begun migrating to containers. After several years of rapid development, Kubernetes (commonly abbreviated "k8s") has become the de facto standard for container orchestration.
The current Kubernetes scheduler offers a rich set of scheduling policies: through the scheduler service it evaluates the state of the cluster nodes for a container to be deployed, picks the nodes that meet the container's operating requirements, and deploys the container to a designated node. This basically satisfies the scheduling needs of an ordinary container group (Pod). However, when a Pod needs a storage volume (volume), scheduling considers only a small subset of the volume's requirements and ignores the state of disk IO. Services that are extremely sensitive to disk read/write performance, such as the database services MySQL and Redis, therefore cannot be served well: when such a Pod runs in a Kubernetes environment and the container is scheduled to a node whose disk IO is very busy, the database service cannot respond to requests in time, affecting normal business.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defects of the prior art by providing a Kubernetes cluster container group scheduling method, device and medium that extend the original Kubernetes scheduling policy with disk-aware scheduling, taking disk IO busyness as the container scheduling condition. The nodes of the Kubernetes cluster can thus serve containers in an optimal and controllable way: container group scheduling under the cluster becomes controllable, container groups are scheduled to the most suitable nodes, the cluster stays in a good state, and resource utilization becomes more reasonable.
In order to solve the technical problems, the invention provides a Kubernetes cluster container group scheduling method, which comprises the following steps:
when a scheduling request for a to-be-processed container group is received, judging whether disk IO busyness is to be used as a scheduling condition;
if not, scheduling the to-be-processed container group according to the original Kubernetes scheduling policy;
if yes, obtaining the disk index information of each node in the Kubernetes cluster, and calculating the disk IO busyness of each node based on the disk index information;
grading each node based on its disk IO busyness, and assigning a score to each node according to its grade;
on the premise that both the node resource information and the node storage resource information are satisfied, determining the target node for scheduling the to-be-processed container group according to the node scores;
and scheduling the to-be-processed container group to the target node, where it is created and started.
In one embodiment of the present invention, obtaining disk index information of each node in the Kubernetes cluster includes:
obtaining the disk index information of all nodes through a metrics interface, where the disk index information comprises, for each storage volume, the total number of bytes read, the time spent reading, the total number of bytes written, and the time spent writing.
In one embodiment of the present invention, calculating the disk IO busyness of each node based on the disk index information includes:
the disk IO busyness of each node is calculated as

B = (R + W) / (T_r + T_w)

where B denotes the disk IO busyness of the node, R the sum of the read-byte totals of all storage volumes, T_r the sum of the time spent reading across all storage volumes, W the sum of the write-byte totals of all storage volumes, and T_w the sum of the time spent writing across all storage volumes.

In one embodiment of the present invention, grading each node based on disk IO busyness comprises:
dividing the node level into three levels of busy, medium and idle;
and dividing the disk IO busyness of each node into corresponding grades according to the calculated disk IO busyness of each node.
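The calculation and relative grading described above can be sketched as follows. The busyness formula is the one reconstructed from the four per-volume metrics named in the text; splitting the sorted nodes into thirds is an assumption, since the text fixes no thresholds and grades nodes only relative to one another.

```go
package main

import (
	"fmt"
	"sort"
)

// VolumeStats holds the four per-volume metrics named in the text.
type VolumeStats struct {
	ReadBytes  float64 // total bytes read from the volume
	ReadTime   float64 // seconds spent reading
	WriteBytes float64 // total bytes written
	WriteTime  float64 // seconds spent writing
}

// Busyness sums the metrics over all storage volumes on a node and applies
// the formula (read bytes + write bytes) / (read time + write time).
func Busyness(vols []VolumeStats) float64 {
	var r, tr, w, tw float64
	for _, v := range vols {
		r += v.ReadBytes
		tr += v.ReadTime
		w += v.WriteBytes
		tw += v.WriteTime
	}
	if tr+tw == 0 {
		return 0 // no IO recorded: treat the node as idle
	}
	return (r + w) / (tr + tw)
}

// Grade sorts nodes by busyness and splits them into thirds — lowest third
// idle, middle third medium, highest third busy. The thirds split is an
// assumption; the text only says grading is relative, with no fixed threshold.
func Grade(busyness map[string]float64) map[string]string {
	names := make([]string, 0, len(busyness))
	for n := range busyness {
		names = append(names, n)
	}
	sort.Slice(names, func(i, j int) bool { return busyness[names[i]] < busyness[names[j]] })
	grades := make(map[string]string, len(names))
	for i, n := range names {
		switch {
		case i < len(names)/3:
			grades[n] = "idle"
		case i < 2*len(names)/3:
			grades[n] = "medium"
		default:
			grades[n] = "busy"
		}
	}
	return grades
}

func main() {
	vols := []VolumeStats{
		{ReadBytes: 4e6, ReadTime: 2, WriteBytes: 6e6, WriteTime: 3},
		{ReadBytes: 1e6, ReadTime: 1, WriteBytes: 1e6, WriteTime: 2},
	}
	// (4e6+6e6+1e6+1e6) / (2+3+1+2) = 1.5e6 bytes per second
	fmt.Println(Busyness(vols)) // prints 1.5e+06
}
```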
In one embodiment of the present invention, assigning a score value to a corresponding node according to a ranking result includes:
when the to-be-processed container group requests to be scheduled to an idle node, the busy, medium and idle nodes are assigned scores in ascending order; when the to-be-processed container group requests to be scheduled to a busy node, the busy, medium and idle nodes are assigned scores in descending order.
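The ascending/descending assignment can be sketched as below. The concrete point values (8/4/2) are placeholders in the spirit of the worked example given later in the text (idle 6-9, medium 3-6, busy 1-3); the method fixes only the ordering, not the numbers.

```go
package main

import "fmt"

// ScoreNode assigns a score to a node of the given grade, depending on the
// disk level the Pod requested. The point values are illustrative placeholders.
func ScoreNode(grade, requested string) int {
	ascending := map[string]int{"busy": 2, "medium": 4, "idle": 8}  // Pod wants an idle disk
	descending := map[string]int{"busy": 8, "medium": 4, "idle": 2} // Pod wants a busy disk
	if requested == "busy" {
		return descending[grade]
	}
	return ascending[grade]
}

func main() {
	// A database Pod asking for an idle disk: idle nodes score highest.
	fmt.Println(ScoreNode("idle", "idle")) // prints 8
	fmt.Println(ScoreNode("busy", "idle")) // prints 2
}
```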
In addition, the invention also provides a scheduling device of the Kubernetes cluster container group, which comprises a Scheduler;
when a scheduling request of a container group to be processed is received, the Scheduler judges whether the IO busyness of the disk is used as a scheduling condition;
if not, scheduling the container group to be processed according to the original scheduling strategy of the Kubernetes;
if yes, the Scheduler obtains the disk index information of each node in the Kubernetes cluster and calculates the disk IO busyness of each node based on that information; grades each node based on its disk IO busyness and assigns a score to each node according to its grade; on the premise that both the node resource information and the node storage resource information are satisfied, determines the target node for scheduling the to-be-processed container group according to the node scores; and schedules the to-be-processed container group to the target node, where it is created and started.
In one embodiment of the present invention, the Scheduler is configured to obtain the disk index information of all nodes through the metrics interface, where the disk index information comprises, for each storage volume, the total number of bytes read, the time spent reading, the total number of bytes written, and the time spent writing.
In one embodiment of the present invention, the Scheduler is configured to calculate the disk IO busyness of each node based on the disk index information, where the disk IO busyness of each node is calculated as

B = (R + W) / (T_r + T_w)

where B denotes the disk IO busyness of the node, R the sum of the read-byte totals of all storage volumes, T_r the sum of the time spent reading across all storage volumes, W the sum of the write-byte totals of all storage volumes, and T_w the sum of the time spent writing across all storage volumes.

In one embodiment of the present invention, the Scheduler is configured to divide the node levels into three levels: busy, medium and idle;
the Scheduler is further configured to divide the disk IO busyness of each node into corresponding levels according to the calculated disk IO busyness of each node.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
Compared with the prior art, the technical scheme of the invention has the following advantages:
the method expands the dispatching based on the disk characteristics on the basis of the original dispatching strategy of the Kubernetes, takes the busyness of the disk IO as the dispatching condition of the container, can dispatch the container to the node which is not busyness of the disk IO through the dispatching strategy, and certainly has the service insensitive to the disk IO, so that the type of service can be dispatched to the node which is busyness of the disk IO through the dispatching strategy, so as to give out the node for the subsequent service requiring the disk IO, thereby the node of the Kubernetes cluster can achieve the optimal and controllable service of the container group under the Kubernetes cluster, the container group can be dispatched to the optimal node for processing, the Kubernetes cluster is kept in the optimal state, and the resource utilization is more reasonable.
In addition, the invention also provides a corresponding implementation device and a computer readable storage medium aiming at the Kubernetes cluster container group scheduling method, so that the method has more practicability, and the device and the computer readable storage medium have corresponding advantages.
Drawings
In order that the invention may be more readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof that are illustrated in the appended drawings.
Fig. 1 is a schematic flow chart of a Kubernetes cluster container group scheduling method according to the present invention.
Fig. 2 is a schematic diagram of a multi-terminal interaction scenario based on a Kubernetes cluster container group scheduling method.
Fig. 3 is a block diagram of a Kubernetes cluster container group scheduling device according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific examples, which are not intended to be limiting, so that those skilled in the art will better understand the invention and practice it.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of this application and in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed.
First, technical terms used below are explained as follows:
Kubernetes: the container orchestration tool, responsible for managing and scheduling container services.
Pod: a container group, the smallest unit managed by Kubernetes; multiple containers combined together are called a Pod.
Disk IO: the speed at which data is written to and read from the disks on a server.
PVC: short for PersistentVolumeClaim, herein a storage resource request, which creates a storage volume in Kubernetes.
Carina: a project that provides storage volumes for Pods in Kubernetes clusters.
Deployment: in Kubernetes, one Deployment represents a set of Pods.
Scheduler: a component of Kubernetes, mainly responsible for scheduling Pods.
Metrics: an index interface; the metrics interface is used to obtain the disk index information of a node.
Volume: a storage volume; containers are stateless on the server, so a storage volume must be mounted to persist data to a local disk.
Node: a node of the cluster.
ServiceMonitor: a custom resource in Kubernetes that defines which services' index information is to be collected.
Referring to fig. 1, fig. 1 is a flow chart of a Kubernetes cluster container group scheduling method provided by an embodiment of the present invention, where the embodiment of the present invention may include the following contents:
S10: receiving a scheduling request for a to-be-processed container group;
S20: judging whether disk IO busyness is to be used as a scheduling condition;
S21: if not, scheduling the to-be-processed container group according to the original Kubernetes scheduling policy;
S221: if yes, obtaining the disk index information of each node in the Kubernetes cluster;
S222: calculating the disk IO busyness of each node based on the disk index information;
S223: grading each node based on its disk IO busyness;
S224: assigning a score to each node according to its grade;
S225: on the premise that both the node resource information and the node storage resource information are satisfied, determining the target node for scheduling the to-be-processed container group according to the node scores;
S226: scheduling the to-be-processed container group to the target node, where it is created and started.
In S10, the container group (Pod) of the present invention refers to the Pod resources running on each node in the Kubernetes cluster; the to-be-processed container group is one that the Kubernetes cluster currently needs to create on an appropriate node according to the scheduling policy. A storage resource request (PVC) may be created by a client in Kubernetes, the PVC being configured to use the Carina driver, a Kubernetes project that provides storage volumes for container groups. A Deployment is then created in Kubernetes and configured to use the PVC, with the following added to the Deployment's annotations: storage.io/disk: idle, where the first half denotes the group to which the key belongs, disk is the specific key (set to indicate the scheduling requirement), and idle means selecting a node whose disk is idle. When the Deployment is created in the Kubernetes cluster, a Pod is generated, and the Carina Scheduler discovers the newly created Pod and its required PVC information.
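The opt-in judgment of S20 can be sketched as a simple annotation lookup. The key `storage.io/disk` follows the example in the text; the exact key prefix used by a real Carina deployment is an assumption.

```go
package main

import "fmt"

// hasDiskIOCondition mirrors step S20: a Pod (via its Deployment annotation,
// e.g. "storage.io/disk: idle") opts in to disk-IO-aware scheduling and names
// the disk level it wants. The annotation key here is taken from the text's
// example and may differ in a real deployment.
func hasDiskIOCondition(annotations map[string]string) (level string, ok bool) {
	level, ok = annotations["storage.io/disk"]
	return
}

func main() {
	ann := map[string]string{"storage.io/disk": "idle"}
	if level, ok := hasDiskIOCondition(ann); ok {
		fmt.Println("schedule by disk IO busyness, requested level:", level)
	} else {
		fmt.Println("fall back to the original Kubernetes scheduling policy")
	}
}
```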
In S221, when disk IO busyness is to be used as a scheduling condition, the Carina Scheduler may obtain the disk index information of all nodes through the metrics interface, where the disk index information comprises, for each storage volume, the total number of bytes read, the time spent reading, the total number of bytes written, and the time spent writing.
In S222, the disk IO busyness of each node is calculated as

B = (R + W) / (T_r + T_w)

where B denotes the disk IO busyness of the node, R the sum of the read-byte totals of all storage volumes, T_r the sum of the time spent reading across all storage volumes, W the sum of the write-byte totals of all storage volumes, and T_w the sum of the time spent writing across all storage volumes. It will be appreciated that R = (bytes read from storage volume 1) + (bytes read from storage volume 2) + ... + (bytes read from storage volume n); similarly, T_r = (time spent reading storage volume 1) + ... + (time spent reading storage volume n); W = (bytes written to storage volume 1) + ... + (bytes written to storage volume n); and T_w = (time spent writing storage volume 1) + ... + (time spent writing storage volume n).
It will be appreciated that the formula yields an instantaneous result; when a value is taken, it is averaged together with the historical values from the past five minutes to obtain the average disk IO state over that period. After comparing the five-minute historical disk IO values of the servers, the servers are partitioned into busy, medium and idle levels; note that this partitioning is relative to the servers' ranking, and there is no fixed threshold.
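The five-minute smoothing above is described only tersely; one plausible reading, the mean of all busyness samples collected inside the window, can be sketched as follows.

```go
package main

import "fmt"

// windowAverage smooths the instantaneous busyness values sampled over the
// last five minutes. The text's description of the averaging is terse, so
// this is one plausible reading: the mean of all samples in the window.
func windowAverage(samples []float64) float64 {
	if len(samples) == 0 {
		return 0
	}
	var sum float64
	for _, s := range samples {
		sum += s
	}
	return sum / float64(len(samples))
}

func main() {
	// e.g. busyness sampled once a minute for five minutes
	fmt.Println(windowAverage([]float64{1.2e6, 1.4e6, 1.6e6, 1.5e6, 1.3e6})) // prints 1.4e+06
}
```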
In S223, the invention grades the disk IO busyness of each node in Kubernetes using the Carina Scheduler, dividing the node levels into three levels, busy, medium and idle, according to each node's calculated disk IO busyness. The level busy means the node's disk is being read and written frequently and its state is relatively busy; the level idle means the node's disk has almost no reads or writes and its state is idle; the level medium means the node's disk is between busy and idle. If a to-be-processed container group were scheduled to a node with busy disk IO, a database service could not respond to requests in time; the container group therefore needs to be scheduled to the most suitable node according to its service type, keeping the Kubernetes cluster in a good state and making resource utilization more reasonable.
In S224, the Carina Scheduler may assign a score to each node according to the grading result. When the to-be-processed container group requests to be scheduled to an idle node, the busy, medium and idle nodes are assigned scores in ascending order; when it requests a busy node, the busy, medium and idle nodes are assigned scores in descending order. Illustratively, for a Pod of the database MySQL, which requests a node whose disk is idle, a server graded idle may score 6-9 points, medium 3-6 points and busy 1-3 points; a higher score represents a greater chance that the Pod is scheduled to that node.
It will be appreciated that the above exemplary scores are for illustrative purposes only and that the scores are not fixed and may be set according to actual scheduling needs.
In S225, the Carina Scheduler may determine the target node of the scheduling of the container group to be processed according to the score value of the node, where a higher score value of the node indicates a greater chance that the Pod is scheduled to the node.
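Selecting the target node from the score values, as in S225, reduces to picking the highest-scoring candidate among the nodes whose resources already suffice; a minimal sketch with hypothetical node names:

```go
package main

import "fmt"

// pickTarget mirrors step S225: among nodes whose compute and storage
// resources already satisfy the Pod, choose the one with the highest score.
func pickTarget(scores map[string]int) string {
	best, bestScore := "", -1
	for node, s := range scores {
		if s > bestScore {
			best, bestScore = node, s
		}
	}
	return best
}

func main() {
	// Hypothetical candidate nodes and their scores
	fmt.Println(pickTarget(map[string]int{"node-a": 2, "node-b": 8, "node-c": 4})) // prints node-b
}
```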
In S226, the node on which the to-be-processed container group is created, as determined by the scheduling policy, is called the target node. That is, the target node in the Kubernetes cluster is determined through the scheduling policy, the to-be-processed container group is then created on the target node, and the target node runs the container group's Pod resources and processes the related services.
The method extends the original Kubernetes scheduling policy with disk-aware scheduling, taking disk IO busyness as the container scheduling condition. Through this policy a container can be scheduled to a node whose disk IO is not busy; conversely, services that are insensitive to disk IO can be scheduled to nodes whose disk IO is busy, leaving other nodes free for subsequent services that do need disk IO. The nodes of the Kubernetes cluster can thus serve containers in an optimal and controllable way: container group scheduling under the cluster becomes controllable, container groups are scheduled to the most suitable nodes, the cluster stays in a good state, and resource utilization becomes more reasonable.
It should be noted that, in the present application, the steps may be executed simultaneously or in a certain preset order as long as the steps conform to the logic order, and fig. 1 is only a schematic manner and does not represent only such an execution order.
The embodiment of the invention also provides a specific implementation for a multi-terminal interaction scenario. Referring to fig. 2, the Carina Node is responsible for collecting node storage volume information and the index information of each storage volume, including the total capacity, usage and number of storage volumes of the whole volume group, as well as each storage volume's current read bytes, write bytes and time spent reading and writing; these data are exposed through the metrics interface provided by the Carina Node. Whether the disk index information of each node is to be collected depends on whether the disk IO scheduling feature is enabled in the Carina Scheduler. When the feature is enabled, the Carina Scheduler collects the disk index information according to the timing, target containers and collection interface addresses configured in the ServiceMonitor. After collection, a cache is built inside the Carina Scheduler; the disk IO busyness of each node is judged from the disk index information collected over multiple rounds, and the nodes are ordered by busyness, so that disk IO busyness can serve as the container scheduling policy when containers are scheduled.
The embodiment of the invention also provides a corresponding device for the Kubernetes cluster container group scheduling method, so that the method is more practical. Wherein the device may be described separately from the functional module and the hardware. The following describes a Kubernetes cluster container group scheduling device provided by the embodiment of the present invention, and the Kubernetes cluster container group scheduling device described below and the Kubernetes cluster container group scheduling method described above may be referred to correspondingly.
Referring to fig. 3, the embodiment of the invention further provides a Kubernetes cluster container group scheduling device comprising a scheduler, the Carina Scheduler. When a scheduling request for a to-be-processed container group is received, the Carina Scheduler judges whether disk IO busyness is to be used as a scheduling condition; if not, it schedules the to-be-processed container group according to the original Kubernetes scheduling policy; if yes, it obtains the disk index information of each node in the Kubernetes cluster and calculates the disk IO busyness of each node based on that information, grades each node based on its disk IO busyness and assigns a score to each node according to its grade, determines the target node for scheduling the to-be-processed container group according to the node scores on the premise that both the node resource information and the node storage resource information are satisfied, and schedules the to-be-processed container group to the target node, where it is created and started.
Specifically, a storage resource request (PVC) may be created by a client in Kubernetes, the PVC being configured to use the Carina driver, a Kubernetes project that provides storage volumes for container groups. A Deployment is then created in Kubernetes and configured to use the PVC, with the following added to the Deployment's annotations: storage.io/disk: idle, where the first half denotes the group to which the key belongs, disk is the specific key (set to indicate the scheduling requirement), and idle means selecting a node whose disk is idle. When the Deployment is created in the Kubernetes cluster, a Pod is generated, and the Carina Scheduler discovers the newly created Pod and its required PVC information.
Then, the Carina Scheduler obtains the disk index information of all nodes through the metrics interface, calculates the disk IO busyness of each node from this information, and grades each node based on the busyness, where the disk index information comprises, for each storage volume, the total number of bytes read, the time spent reading, the total number of bytes written, and the time spent writing. The disk IO busyness of each node is calculated as B = (R + W) / (T_r + T_w), where B denotes the disk IO busyness of the node, R the sum of the read-byte totals of all storage volumes, T_r the sum of the time spent reading, W the sum of the write-byte totals, and T_w the sum of the time spent writing. It will be appreciated that R = (bytes read from storage volume 1) + (bytes read from storage volume 2) + ... + (bytes read from storage volume n); similarly, T_r = (time spent reading storage volume 1) + ... + (time spent reading storage volume n); W = (bytes written to storage volume 1) + ... + (bytes written to storage volume n); and T_w = (time spent writing storage volume 1) + ... + (time spent writing storage volume n). It will further be appreciated that the formula yields an instantaneous result; when a value is taken, it is averaged together with the historical values from the past five minutes to obtain the average disk IO state over that period.
After comparing the five-minute historical disk IO values of the servers, the servers are partitioned into busy, medium and idle levels; note that this partitioning is relative to the servers' ranking, and there is no fixed threshold.
Specifically, the Carina Scheduler is further configured to assign a score value to the corresponding node according to the grading result. Illustratively, in the Pod of the database Mysql, which requests a node to disk idle, the server belonging to idle may score 6-9 points, medium 3-6 points, busy 1-3 points, with a higher score representing a greater chance that the Pod is scheduled to the node.
It will be appreciated that the above exemplary scores are for illustrative purposes only and that the scores are not fixed and may be set according to actual scheduling needs.
Furthermore, the embodiment of the invention also provides a Kubernetes cluster container group scheduling device, which is described from the perspective of hardware. The apparatus includes a memory for storing a computer program; a processor for implementing the steps of the Kubernetes cluster container group scheduling method as mentioned in the above embodiments when executing a computer program.
In an embodiment of the present invention, the processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device, etc.
The processor may call a program stored in the memory, and in particular, the processor may perform operations in an embodiment of the Kubernetes cluster container group scheduling method.
The memory is used to store one or more programs, which may include program code including computer operating instructions.
In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with a computer program, and the computer program realizes the steps of the Kubernetes cluster container group scheduling method when being executed by a processor.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It is apparent that the above examples are given by way of illustration only and are not limiting of the embodiments. Other variations and modifications of the present invention will be apparent to those of ordinary skill in the art in light of the foregoing description; it is neither necessary nor possible to exhaustively list all embodiments here. Obvious variations or modifications derived therefrom are contemplated as falling within the scope of the present invention.
Claims (8)
1. A Kubernetes cluster container group scheduling method, characterized by comprising the following steps:
when a scheduling request for a container group to be processed is received, judging whether disk IO busyness is used as a scheduling condition;
if not, scheduling the container group to be processed according to the original Kubernetes scheduling strategy;
if yes, acquiring disk index information of each node in the Kubernetes cluster, and calculating the disk IO busyness of each node based on the disk index information;
grading each node based on disk IO busyness, and assigning a score value to the corresponding node according to the grading result;
in the case that both node resource information and node storage resource information are satisfied, determining the target node for scheduling the container group to be processed according to the score values of the nodes;
dispatching the container group to be processed to the target node to execute a creation action and start it;
the disk index information comprises total number of storage volume read bytes, time spent by the storage volume read bytes, total number of storage volume write bytes and time spent by the storage volume write bytes; the formula for calculating the disk IO busyness of each node is as follows:
wherein (1)>Representing the disk IO busyness of the node, +.>Representing the sum of the total number of read bytes of all storage volumes, +.>Representing the sum of time taken to read bytes for all storage volumes, +.>Representing the sum of the total number of write sections of all storage volumes, +.>Representing the sum of the time spent for all storage volume sections.
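The formula itself is not reproduced in this text. Purely as an assumption for illustration, the four quantities can be read as aggregate read and write throughput (bytes per unit time) serving as a busyness proxy; the patent's exact formula may differ.

```python
def disk_io_busyness(read_bytes, read_time, write_bytes, write_time):
    """Estimate a node's disk IO busyness from per-volume counters.

    Each argument is a list with one entry per storage volume on the node.
    ASSUMPTION: busyness is modeled here as total read throughput plus total
    write throughput; the exact formula in the original is not shown in the
    text and may differ.
    """
    total_read_bytes = sum(read_bytes)    # sum of total read bytes, all volumes
    total_read_time = sum(read_time)      # sum of time spent on reads, all volumes
    total_write_bytes = sum(write_bytes)  # sum of total write bytes, all volumes
    total_write_time = sum(write_time)    # sum of time spent on writes, all volumes
    read_rate = total_read_bytes / total_read_time if total_read_time else 0.0
    write_rate = total_write_bytes / total_write_time if total_write_time else 0.0
    return read_rate + write_rate
```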
2. The Kubernetes cluster container group scheduling method of claim 1, wherein acquiring the disk index information of each node in the Kubernetes cluster comprises:
acquiring the disk index information of all the nodes through a metrics interface.
3. The Kubernetes cluster container group scheduling method according to claim 1 or 2, wherein grading each node based on disk IO busyness comprises:
dividing the node levels into three levels: busy, medium, and idle;
assigning the disk IO busyness of each node to the corresponding level according to the calculated disk IO busyness of each node.
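The three-level division above can be sketched as a simple threshold check. The cut-off values below are hypothetical; the text only specifies that nodes fall into busy, medium, and idle levels.

```python
def classify_node(busyness: float, low: float = 30.0, high: float = 70.0) -> str:
    """Place a node's disk IO busyness into one of three levels.

    The thresholds `low` and `high` are hypothetical cut-offs chosen for
    illustration; the original text does not specify concrete boundaries.
    """
    if busyness < low:
        return "idle"
    if busyness < high:
        return "medium"
    return "busy"
```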
4. The Kubernetes cluster container group scheduling method according to claim 3, wherein assigning a score value to the corresponding node according to the grading result comprises:
when the container group to be processed requests to be scheduled to an idle node, assigning score values to busy, medium, and idle nodes in ascending order; when the container group to be processed requests to be scheduled to a busy node, assigning score values to busy, medium, and idle nodes in descending order.
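The ascending/descending assignment in claim 4 can be sketched across a set of nodes as follows. The function name and the concrete score values (1, 2, 3) are illustrative assumptions; only the ordering rule comes from the claim.

```python
def assign_score_values(node_levels: dict, request: str = "idle") -> dict:
    """Assign score values to nodes according to the requested disk state.

    When the pending Pod requests an idle node, busy/medium/idle receive
    ascending scores; when it requests a busy node, they receive descending
    scores. node_levels maps node name -> busyness level.
    """
    order = ["busy", "medium", "idle"]
    if request == "busy":
        order.reverse()  # busy nodes now receive the highest score
    level_score = {level: i + 1 for i, level in enumerate(order)}
    return {node: level_score[lvl] for node, lvl in node_levels.items()}
```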
5. A Kubernetes cluster container group scheduling apparatus, characterized by comprising a Scheduler;
when a scheduling request for a container group to be processed is received, the Scheduler judges whether disk IO busyness is used as a scheduling condition;
if not, the container group to be processed is scheduled according to the original Kubernetes scheduling strategy;
if yes, the Scheduler acquires the disk index information of each node in the Kubernetes cluster and calculates the disk IO busyness of each node based on the disk index information; grades each node based on disk IO busyness and assigns a score value to the corresponding node according to the grading result; in the case that both node resource information and node storage resource information are satisfied, determines the target node for scheduling the container group to be processed according to the score values of the nodes; and dispatches the container group to be processed to the target node to execute a creation action and start it;
the disk index information comprises total number of storage volume read bytes, time spent by the storage volume read bytes, total number of storage volume write bytes and time spent by the storage volume write bytes; the formula for calculating the disk IO busyness of each node is as follows:
wherein (1)>Representing the disk IO busyness of the node, +.>Representing the sum of the total number of read bytes of all storage volumes, +.>Representing the sum of time taken to read bytes for all storage volumes, +.>Representing all storage volumesSum of total number of writing sections->Representing the sum of the time spent for all storage volume sections.
6. The Kubernetes cluster container group scheduling apparatus of claim 5, wherein the Scheduler is configured to acquire the disk index information of all nodes through a metrics interface.
7. The Kubernetes cluster container group scheduling apparatus according to claim 5 or 6, wherein the Scheduler is configured to classify the node levels into three levels: busy, medium, and idle;
the Scheduler is further configured to assign the disk IO busyness of each node to the corresponding level according to the calculated disk IO busyness of each node.
8. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310497173.8A CN116244085A (en) | 2023-05-05 | 2023-05-05 | Kubernetes cluster container group scheduling method, device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116244085A true CN116244085A (en) | 2023-06-09 |
Family
ID=86631627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310497173.8A Pending CN116244085A (en) | 2023-05-05 | 2023-05-05 | Kubernetes cluster container group scheduling method, device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116244085A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116860527A (en) * | 2023-07-10 | 2023-10-10 | 江苏博云科技股份有限公司 | Migration method for container using local storage in Kubernetes environment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112015536A (en) * | 2020-08-28 | 2020-12-01 | 北京浪潮数据技术有限公司 | Kubernetes cluster container group scheduling method, device and medium |
CN112527454A (en) * | 2020-12-04 | 2021-03-19 | 上海连尚网络科技有限公司 | Container group scheduling method and device, electronic equipment and computer readable medium |
CN114090176A (en) * | 2021-11-19 | 2022-02-25 | 苏州博纳讯动软件有限公司 | Kubernetes-based container scheduling method |
CN114661448A (en) * | 2022-05-17 | 2022-06-24 | 中电云数智科技有限公司 | Kubernetes resource scheduling method and scheduling component |
CN115297112A (en) * | 2022-07-31 | 2022-11-04 | 南京匡吉信息科技有限公司 | Dynamic resource quota and scheduling component based on Kubernetes |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107807796B (en) | Data layering method, terminal and system based on super-fusion storage system | |
US9250965B2 (en) | Resource allocation for migration within a multi-tiered system | |
CN111045795A (en) | Resource scheduling method and device | |
US10356150B1 (en) | Automated repartitioning of streaming data | |
EP2975515A1 (en) | System and method for managing excessive distribution of memory | |
CN107515784B (en) | Method and equipment for calculating resources in distributed system | |
CN110413412B (en) | GPU (graphics processing Unit) cluster resource allocation method and device | |
CN102473134A (en) | Management server, management method, and management program for virtual hard disk | |
CN111464583B (en) | Computing resource allocation method, device, server and storage medium | |
CN107273200B (en) | Task scheduling method for heterogeneous storage | |
CN111381928B (en) | Virtual machine migration method, cloud computing management platform and storage medium | |
US10983873B1 (en) | Prioritizing electronic backup | |
CN116244085A (en) | Kubernetes cluster container group scheduling method, device and medium | |
US11914894B2 (en) | Using scheduling tags in host compute commands to manage host compute task execution by a storage device in a storage system | |
CN110737717B (en) | Database migration method and device | |
CN112799597A (en) | Hierarchical storage fault-tolerant method for stream data processing | |
CN112000460A (en) | Service capacity expansion method based on improved Bayesian algorithm and related equipment | |
CN112416520B (en) | Intelligent resource scheduling method based on vSphere | |
CN116389591A (en) | Cross-domain-based distributed processing system and scheduling optimization method | |
CN115993932A (en) | Data processing method, device, storage medium and electronic equipment | |
Yang et al. | Improving f2fs performance in mobile devices with adaptive reserved space based on traceback | |
CN109324886A (en) | cluster resource scheduling method and device | |
CN107229519B (en) | Task scheduling method and device | |
CN109828718B (en) | Disk storage load balancing method and device | |
CN112612606A (en) | Message theme processing method and device, computer equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||