CN113010265A - Pod scheduling method, scheduler, storage plug-in and system - Google Patents

Pod scheduling method, scheduler, storage plug-in and system

Info

Publication number
CN113010265A
Authority
CN
China
Prior art keywords: pod, node, local, scheduled, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110282885.9A
Other languages
Chinese (zh)
Inventor
冯逸航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-03-16
Filing date: 2021-03-16
Publication date: 2021-06-22
Application filed by CCB Finetech Co Ltd
Priority to CN202110282885.9A
Publication of CN113010265A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects

Abstract

The application discloses a Pod scheduling method, a scheduler, a storage plug-in and a system, which relate to the field of computer technology and can schedule a Pod to a node with sufficient local volume storage resources. The method comprises the following steps: acquiring local volume quota information of each node in a k8s platform; acquiring the occupation amount of local storage resources when the Pod to be scheduled runs; determining a target node from each node according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs; and scheduling the Pod to be scheduled to the target node.

Description

Pod scheduling method, scheduler, storage plug-in and system
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a Pod scheduling method, a scheduler, a storage plug-in and a system.
Background
With the containerization of stateful applications such as message middleware and databases, and their running on the container scheduling platform Kubernetes (k8s), it is necessary to ensure that critical data is not lost and that these stateful applications can be recovered after an interruption. Existing data storage modes generally comprise local storage and back-end storage; because local storage does not need to be accessed over a cross-host network, the local storage mode can be used preferentially for storing stateful applications such as message middleware and databases.
Currently, when scheduling a Pod (composed of one or more containers), a local volume is generally provided for the Pod by static provisioning. With this conventional static provisioning, however, the node to which the Pod is scheduled may have insufficient local volume storage resources, and the Pod may then fail to start normally.
Disclosure of Invention
The application provides a Pod scheduling method, a scheduler, a storage plug-in and a system, which can schedule a Pod to a node with sufficient local volume storage resources by analyzing the local volume resources of each node in the k8s platform.
To this end, the following technical solutions are adopted:
in a first aspect, the present application provides a Pod scheduling method, which may be applied to a scheduler, and includes: acquiring local volume quota information of each node in a k8s platform; acquiring the occupation amount of local storage resources when the Pod to be scheduled runs; determining a target node from each node according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs; and scheduling the Pod to be scheduled to the target node.
Because the local volume quota information of each node can represent how many local volume resources remain on each node, a target node with sufficient remaining local volume resources can be determined according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs. In this way, after the Pod to be scheduled is scheduled to the target node, it can use the sufficient local resources of the target node, so the local storage requirements of stateful applications such as message middleware and databases can be met. It can be seen that, in the technical solution of the application, by analyzing the local volume resources of each node in the k8s platform, the situation that the Pod cannot be started normally because the node to which it is scheduled has insufficient local volume storage resources can be avoided, thereby realizing dynamic provisioning of local volume resources.
Optionally, in a possible design manner, the determining a target node from each node according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs may include: determining, from the local volume quota information, the total amount of local volume resources of each node and the occupied amount of local volume resources of each node; and determining a target node from each node according to the total amount of local volume resources, the occupied amount of local volume resources and the occupation amount of local storage resources when the Pod to be scheduled runs.
Optionally, in another possible design manner, the "determining a target node from each node according to the total amount of local volume resources, the occupied amount of local volume resources, and the occupied amount of local storage resources when the Pod to be scheduled runs" may include: determining the difference value between the total amount of the local volume resources of the first node and the occupied amount of the local volume resources of the first node; the first node is any one of the nodes; if the difference is larger than or equal to the occupation amount of the local storage resources when the Pod to be scheduled runs, determining the first node as a preselected node; and randomly determining a target node from the preselected nodes.
Optionally, in another possible design manner, the "determining a target node from each node according to the local volume quota information and the occupancy of the local storage resource when the Pod runs to be scheduled" may further include: acquiring memory resource information and CPU resource information of each node; and determining a target node from each node according to the local volume quota information, the local storage resource occupation amount when the Pod to be scheduled runs, the memory resource information and the CPU resource information.
Optionally, in another possible design manner, the acquiring the occupation amount of local storage resources when the Pod to be scheduled runs may include: acquiring, from the storage plug-in, the local storage resource occupation amount that each Pod in the container cluster determined for its own runtime; and selecting the occupation amount for the Pod to be scheduled from the local storage resource occupation amounts determined by the respective Pods.
Optionally, in another possible design manner, the selecting the occupation amount for the Pod to be scheduled from the local storage resource occupation amounts determined by the respective Pods may include: selecting, according to the running state of each Pod, the occupation amount of local storage resources when the Pod to be scheduled runs from the local storage resource occupation amounts determined by the respective Pods.
Optionally, in another possible design manner, the method for scheduling Pod provided in the embodiment of the present application further includes: and after the target node is determined from each node, sending the address information of the target node to the storage plug-in.
In a second aspect, the present application provides a Pod scheduling method, which may be applied to a storage plug-in, and includes: sending a first request to the container cluster where the Pod to be scheduled is located; receiving the local storage resource occupation amount that each Pod determined for its own runtime; and sending the local storage resource occupation amounts determined by the respective Pods to the scheduler. The first request comprises local volume quota information of each node in the Kubernetes platform, and is used for instructing each Pod in the container cluster to determine the local storage resource occupation amount of its own runtime.
Optionally, in another possible design manner, the method for scheduling Pod provided by the present application further includes: receiving address information of a target node sent by a scheduler; and creating a target persistent volume PV at the target node according to the address information, and creating a local volume LV corresponding to the target PV at the target node.
Optionally, in another possible design manner, the "creating the target persistent volume PV at the target node according to the address information, and creating the local volume LV corresponding to the target PV at the target node" may include: and creating a target PV at the target node, and calling a logical volume management LVM mechanism to create an LV corresponding to the target PV at the target node.
Optionally, in another possible design, the invoking the logical volume management LVM mechanism to create the LV corresponding to the target PV at the target node may include: acquiring request information in PVC; the request information at least comprises a storage size and an access mode; and calling an LVM mechanism, and creating an LV corresponding to the target PV at the target node according to the request information.
Optionally, in another possible design, the storage plugin provided in this embodiment of the present application is a local volume storage plugin declared in a storage class created by the container cluster.
Optionally, in another possible design, the storage plug-in provided in the embodiment of the present application is a CSI plug-in.
In a third aspect, the present application provides a scheduler comprising: the system comprises an acquisition module, a determination module and a scheduling module;
the acquisition module is used for acquiring local volume quota information of each node in the Kubernetes platform;
the acquisition module is also used for acquiring the occupation amount of local storage resources when the Pod to be scheduled runs;
the determining module is used for determining a target node from each node according to the local volume quota information and the occupancy of local storage resources when the Pod to be scheduled runs;
and the scheduling module is used for scheduling the Pod to be scheduled to the target node.
Optionally, in another possible design manner, the determining module is specifically configured to:
determining the total amount of local volume resources of each node in each node and the occupied amount of the local volume resources of each node from the local volume quota information;
and determining a target node from each node according to the total amount of the local volume resources, the occupied amount of the local volume resources and the occupied amount of the local storage resources when the Pod to be scheduled runs.
Optionally, in another possible design manner, the determining module is specifically configured to:
determining the difference value between the total amount of the local volume resources of the first node and the occupied amount of the local volume resources of the first node; the first node is any one of the nodes;
if the difference is larger than or equal to the occupation amount of the local storage resources when the Pod to be scheduled runs, determining the first node as a preselected node;
and randomly determining a target node from the preselected nodes.
Optionally, in another possible design manner, the determining module is further specifically configured to:
acquiring memory resource information and central processing unit (CPU) resource information of each node;
and determining a target node from each node according to the local volume quota information, the local storage resource occupation amount when the Pod to be scheduled runs, the memory resource information and the CPU resource information.
Optionally, in another possible design manner, the determining module is further specifically configured to:
and selecting, according to the running state of each Pod, the occupation amount of local storage resources when the Pod to be scheduled runs from the local storage resource occupation amounts determined by the respective Pods.
Optionally, in another possible design, the scheduler further includes a sending module, and the sending module is configured to: and sending the address information of the target node to the storage plug-in.
In a fourth aspect, the present application provides a storage plug-in, comprising: a sending module and a receiving module;
the sending module is used for sending a first request to a container cluster where the Pod to be scheduled is located; the first request comprises local volume quota information of each node in a Kubernetes platform, and the first request is used for indicating each Pod in the container cluster to determine the local storage resource occupancy amount when the Pod operates;
the receiving module is used for receiving the local storage resource occupation amount that each Pod determined for its own runtime;
and the sending module is further used for sending the local storage resource occupation amounts determined by the respective Pods to the scheduler.
Optionally, in a possible design manner, the storage plug-in provided by the present application further includes a creation module;
the receiving module is also used for receiving the address information of the target node sent by the scheduler;
and the creating module is used for creating the target persistent volume PV at the target node according to the address information and creating the local volume LV corresponding to the target PV at the target node.
Optionally, in a possible design manner, the creating module is specifically configured to: acquire request information in the persistent volume claim PVC; the request information at least comprises a storage size and an access mode;
and calling an LVM mechanism, and creating an LV corresponding to the target PV at the target node according to the request information.
Optionally, in one possible design, the storage plugin is a local volume storage plugin declared in a storage class created by the container cluster.
Optionally, in a possible design, the storage plug-in is a container storage interface CSI plug-in.
In a fifth aspect, the present application provides a scheduling system for Pod, comprising a container cluster, a scheduler, and a storage plugin.
For the descriptions of the second, third, fourth and fifth aspects in this application, reference may be made to the detailed description of the first aspect; in addition, for the beneficial effects described in the second aspect, the third aspect, the fourth aspect and the fifth aspect, reference may be made to beneficial effect analysis of the first aspect, and details are not repeated here.
In this application, the devices and functional modules in the Pod scheduling system are not limited by name; in actual implementations, these devices or functional modules may go by other names. As long as the functions of the devices or functional modules are similar to those of the present application, they fall within the scope of the claims of the present application and their equivalents.
These and other aspects of the present application will be more readily apparent from the following description.
Drawings
Fig. 1 is a schematic architecture diagram of a Pod scheduling system according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for scheduling Pod according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another scheduling method for Pod according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another scheduling method for Pod according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another scheduling method for Pod according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another scheduling method for Pod according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a scheduler according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a storage plug-in according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a Pod scheduling apparatus according to an embodiment of the present application.
Detailed Description
The Pod scheduling method, scheduler, storage plug-in and system provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
With the containerization of stateful applications such as message middleware and databases (e.g., mysql, redis, kafka, etc.), and their running on k8s, it is necessary to ensure that critical data is not lost while these stateful applications remain recoverable after an interruption.
Existing data storage modes can be divided into local storage and back-end storage according to where the storage is located relative to the server. Because local storage does not need to be accessed over a cross-host network, its storage performance is superior to that of back-end storage. Adopting local storage can therefore improve the performance indices of stateful applications such as message middleware and databases, so the local storage mode can be used preferentially for storing them.
Currently, when scheduling a Pod, a local volume is generally provided for the Pod by static provisioning. With this conventional static provisioning, however, the node to which the Pod is scheduled may have insufficient local volume storage resources, and the Pod may then fail to start normally. For example, native k8s provides a hostPath local storage mode, which stores files directly in a corresponding local directory; quotas therefore cannot be enforced, and the host disk can easily be filled up.
In view of the problems in the prior art, embodiments of the present application provide a Pod scheduling method, a scheduler, a storage plug-in and a system. In this technical solution, the scheduler analyzes the local volume resources of each node in the k8s platform, which can avoid the situation that a Pod cannot be started normally because the node to which it is scheduled has insufficient local volume storage resources; in addition, the storage plug-in creates a local volume of the specified size on the target node determined by the scheduler, so dynamic provisioning of local volume resources can be realized.
Because the containerization of stateful applications such as message middleware and databases needs to ensure that critical data is not lost and that the applications can be restored after an interruption, the life cycle of the local volume needs to be independent of the containers. k8s provides the persistent volume (PV), a data store independent of the container life cycle, which may be implemented by a storage class (StorageClass) and a corresponding storage plug-in. Some storage plug-ins are provided by the various cloud providers included in the k8s distribution; others may be implemented by storage vendors and developers according to the Container Storage Interface (CSI) standard. By implementing a storage plug-in and invoking it through a StorageClass, a corresponding PV can be created according to a persistent volume claim (PVC), realizing dynamic provisioning of PVs. Therefore, in order to provide a local volume capable of persistent storage, the Pod scheduling method provided in the technical solution of the present application may use a storage plug-in to create the PV.
Fig. 1 illustrates a structure of a Pod scheduling system according to an embodiment of the present application. As shown in fig. 1, the scheduling system of Pod may include a scheduler 01, a storage plugin 02, and a container cluster 03.
The scheduler 01 is configured to obtain local volume quota information of each node in the k8s platform, determine a target node from each node according to the local volume quota information and the occupancy amount of local storage resources when the Pod to be scheduled runs, and then schedule the Pod to be scheduled to the target node.
And the storage plug-in 02 is configured to, after receiving the address information of the target node sent by the scheduler 01, create a target PV at the target node according to the address information, and create a local volume LV corresponding to the target PV at the target node.
The scheduler 01 is further configured to monitor an operation status of each Pod in the container cluster 03, so as to implement scheduling of each Pod.
The method for scheduling Pod provided by the present application is described below with reference to the scheduling system of Pod shown in fig. 1.
Referring to fig. 2, a method for scheduling Pod provided in the embodiment of the present application includes S201 to S204:
S201, the scheduler acquires local volume quota information of each node in the k8s platform.
The local volume quota information is used for representing the local volume quota of each node and comprises information such as the total amount of local volume resources and occupied amount of the local volume resources.
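By way of illustration, such quota information could be kept as a simple per-node record. The following is a minimal Go sketch; the type and field names are assumptions introduced here, not identifiers from the embodiment:

```go
package scheduler

// LocalVolumeQuota is an illustrative per-node local volume quota record;
// the embodiment only requires that the total and occupied amounts be known.
type LocalVolumeQuota struct {
	NodeName string
	Total    int64 // total local volume capacity of the node, in bytes
	Used     int64 // local volume capacity already occupied, in bytes
}
```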
S202, the scheduler acquires the occupation amount of the local storage resources when the Pod to be scheduled runs.
Optionally, in a possible implementation manner, the scheduler may acquire, from the storage plug-in, the local storage resource occupation amount that each Pod in the container cluster determined for its own runtime, and then select the occupation amount for the Pod to be scheduled from the occupation amounts determined by the respective Pods.
Each Pod in the container cluster has a different life cycle, and Pods at different life-cycle stages have different running states. For example, some Pods may not yet be created, some may be created but not yet bound to a node, and some are already bound to a node and have containers running. Therefore, optionally, in a possible implementation manner, the scheduler may select, according to the running state of each Pod, the occupation amount of local storage resources when the Pod to be scheduled runs from the occupation amounts determined by the respective Pods. Illustratively, the container cluster includes a first Pod, a second Pod and a third Pod. The first Pod is not yet created, the second Pod is created but not yet bound to a node, and the third Pod is bound to a node and already has a container running. The scheduler may determine the first Pod and the second Pod as Pods to be scheduled: for example, it may first take the second Pod as the Pod to be scheduled, read the occupation amount of local storage resources for the second Pod's runtime from the second Pod's Annotation, and determine a target node for the second Pod; it may then take the first Pod as the Pod to be scheduled, read the occupation amount for the first Pod's runtime from the first Pod's Annotation, and determine a target node for the first Pod.
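As an illustration of this running-state based selection, the following Go sketch picks the occupancy of a created-but-unbound Pod from its Annotation; the annotation key is a hypothetical choice, since the embodiment does not name one:

```go
package scheduler

import (
	"strconv"

	corev1 "k8s.io/api/core/v1"
)

// localStorageAnnotation is a hypothetical annotation key under which each
// Pod is assumed to have recorded its runtime local storage occupancy.
const localStorageAnnotation = "example.com/local-storage-bytes"

// occupancyOfPendingPod returns the name and declared occupancy of the first
// Pod that is created but not yet bound to a node, mirroring the selection
// by running state described above.
func occupancyOfPendingPod(pods []corev1.Pod) (name string, bytes int64, ok bool) {
	for _, p := range pods {
		if p.Spec.NodeName != "" || p.Status.Phase != corev1.PodPending {
			continue // already bound to a node, or not in a schedulable state
		}
		v, found := p.Annotations[localStorageAnnotation]
		if !found {
			continue
		}
		n, err := strconv.ParseInt(v, 10, 64)
		if err != nil {
			continue // malformed annotation; skip this Pod
		}
		return p.Name, n, true
	}
	return "", 0, false
}
```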
S203, the scheduler determines a target node from each node according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs.
The local volume quota information of each node can represent the local volume quota of each node, and comprises information such as the total amount of local volume resources and occupied amount of the local volume resources. Optionally, in a possible implementation manner, the scheduler may determine, from the local volume quota information, a total amount of local volume resources of each node in each node and an occupied amount of local volume resources of each node, and then determine, according to the total amount of local volume resources, the occupied amount of local volume resources, and an occupied amount of local storage resources when the Pod to be scheduled runs, a target node from each node.
For example, if the first node is any one of the nodes, the scheduler may determine a difference between a total amount of the local volume resources of the first node and an occupied amount of the local volume resources of the first node; if the difference is larger than or equal to the occupation amount of the local storage resources when the Pod to be scheduled runs, determining the first node as a preselected node; thereafter, the scheduler randomly determines a target node from the preselected nodes.
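A minimal Go sketch of this filter-then-pick step follows; it reuses the illustrative LocalVolumeQuota record from the earlier sketch, and all identifiers are assumptions:

```go
package scheduler

import "math/rand"

// preselectAndPick keeps every node whose free local volume capacity (total
// minus occupied) covers the Pod's runtime occupancy, then chooses the
// target node at random from the preselected nodes.
func preselectAndPick(quotas []LocalVolumeQuota, podNeed int64) (string, bool) {
	var preselected []string
	for _, q := range quotas {
		if q.Total-q.Used >= podNeed {
			preselected = append(preselected, q.NodeName)
		}
	}
	if len(preselected) == 0 {
		return "", false // no node has enough local volume resources left
	}
	return preselected[rand.Intn(len(preselected))], true
}
```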
Optionally, in another possible implementation manner, the scheduler may determine a difference between the total amount of the local volume resources of each node and the occupied amount of the local volume resources in each node, and then determine, as the target node, the node with the largest difference between the total amount of the local volume resources and the occupied amount of the local volume resources in each node.
Optionally, in another possible implementation manner, the scheduler may also obtain memory resource information and CPU resource information of each node; and then determining a target node from each node according to the local volume quota information, the local storage resource occupation amount when the Pod to be scheduled runs, the memory resource information and the CPU resource information.
For example, the scheduler may first obtain memory resource information and CPU resource information of each node, then determine a node where the remaining amount of memory resources is greater than a first threshold and the remaining amount of CPU resources is greater than a second threshold as a preselected node, and then determine a target node from the preselected nodes according to the local volume quota information and the occupied amount of local storage resources when the Pod to be scheduled is running.
The first threshold and the second threshold may be parameters determined manually in advance.
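The two-threshold prefilter could look like the following Go sketch; the record type and both thresholds are illustrative assumptions:

```go
package scheduler

// NodeResources is an illustrative record of a node's remaining memory and
// CPU; the embodiment does not fix a concrete representation.
type NodeResources struct {
	NodeName   string
	FreeMemory int64 // remaining memory, in bytes
	FreeCPU    int64 // remaining CPU, in millicores
}

// prefilterByMemoryAndCPU keeps only the nodes whose remaining memory
// exceeds the first threshold and whose remaining CPU exceeds the second
// threshold; the target node is then chosen among these preselected nodes.
func prefilterByMemoryAndCPU(nodes []NodeResources, memThreshold, cpuThreshold int64) []NodeResources {
	var preselected []NodeResources
	for _, n := range nodes {
		if n.FreeMemory > memThreshold && n.FreeCPU > cpuThreshold {
			preselected = append(preselected, n)
		}
	}
	return preselected
}
```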
It can be understood that the above ways of determining the target node according to the local volume quota information and the occupation amount of local storage resources when the Pod to be scheduled runs are merely examples. In practical applications, the scheduler may determine the target node from each node based on this information in other manners, which is not limited by the embodiments of the present application.
And S204, the scheduler schedules the Pod to be scheduled to the target node.
After the Pod to be scheduled is scheduled to the target node by the scheduler, the Pod to be scheduled can use sufficient local resources of the target node, so that the local storage requirements of stateful applications such as message middleware and databases can be met.
In the Pod scheduling method provided by the embodiment of the application, because the local volume quota information of each node can represent the remaining situation of the local volume resources of each node, a target node with sufficient local volume remaining resources can be determined according to the local volume quota information and the occupied amount of the local storage resources when the Pod to be scheduled runs. In this way, after the Pod to be scheduled is scheduled to the target node, the sufficient local resources of the target node can be used, so that the local storage requirements of stateful applications such as message middleware and databases can be met. It can be seen that, in the scheduling method for Pod provided in the embodiment of the present application, by analyzing the local volume resources of each node in the k8s platform, a situation that Pod cannot be started normally due to insufficient storage resources of the local volume of the node to which the Pod is scheduled can be avoided, so as to implement dynamic supply of the local volume resources.
Referring to fig. 3, a method for scheduling Pod provided in the embodiment of the present application includes S301 to S303:
S301, the storage plug-in sends a first request to the container cluster where the Pod to be scheduled is located.
The first request comprises the local volume quota information of each node in the Kubernetes platform, and is used for instructing each Pod in the container cluster to determine the local storage resource occupation amount of its own runtime.
The storage plug-in may send a first request including the local volume quota information to the container cluster where the Pod to be scheduled is located, where the first request is used to instruct each Pod in the container cluster to determine the local storage resource occupation amount of its own runtime. After each Pod in the container cluster receives the first request, it can determine the occupation amount of local storage resources for its own runtime and write the determined amount into its Pod Annotation. The scheduler may then determine the Pod to be scheduled from the container cluster, and read the occupation amount of local storage resources when the Pod to be scheduled runs from that Pod's Annotation.
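The embodiment does not prescribe an API for writing the Annotation; one plausible Go sketch, using the standard client-go Patch call and the same hypothetical annotation key as above, is:

```go
package plugin

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// writeOccupancyAnnotation records a Pod's runtime local storage occupancy
// in its Annotation via a strategic merge patch. The annotation key is a
// hypothetical choice, not taken from the embodiment.
func writeOccupancyAnnotation(ctx context.Context, cs kubernetes.Interface, namespace, podName string, occupiedBytes int64) error {
	patch := []byte(fmt.Sprintf(
		`{"metadata":{"annotations":{"example.com/local-storage-bytes":"%d"}}}`, occupiedBytes))
	_, err := cs.CoreV1().Pods(namespace).Patch(ctx, podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```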
S302, the storage plug-in receives the local storage resource occupation amount that each Pod determined for its own runtime.
And S303, the storage plug-in sends the local storage resource occupation amount of the self running determined by each Pod to the scheduler.
Optionally, after the Pod to be scheduled is scheduled to the target node by the scheduler, the storage plug-in may create a target PV at the target node, and use the target PV for persistent storage of the Pod to be scheduled.
In the prior art, the scheduling of the Pod and the creation of the PV are two independent processes. In order to schedule the target PV and the Pod to the same node, so that the target PV can be used for the persistent storage of the Pod to be scheduled, optionally, a binding policy between the target PV and a target persistent volume claim (PVC) may be specified in a storage class, belonging to a local volume, created in advance in the container cluster. The binding policy is WaitForFirstConsumer. Under the WaitForFirstConsumer binding policy, the target PV can be created after the Pod to be scheduled has been scheduled, which avoids the situation that the Pod to be scheduled cannot be started because the target PV and the Pod were not scheduled to the same node.
In a possible implementation manner, under the WaitForFirstConsumer binding policy, the target PV and the target PVC may be bound first; after the target node of the Pod to be scheduled is determined, the target PV is created at the target node, and the Pod to be scheduled may then use the local volume resources corresponding to the target PV through the target PVC.
In another possible implementation manner, under the WaitForFirstConsumer binding policy, the creation of the target PV may wait until the target node of the Pod to be scheduled is determined; the target PV is then created at the target node and bound to the target PVC. In this way, the Pod to be scheduled may use the local volume resources corresponding to the target PV through the target PVC.
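Concretely, the pre-created StorageClass could be built as in the following Go sketch using the standard Kubernetes storage API types; the class name and provisioner (CSI driver) name are illustrative assumptions:

```go
package plugin

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localVolumeStorageClass sketches a StorageClass that declares a local
// volume CSI driver as provisioner and uses the WaitForFirstConsumer
// binding mode, so that PV creation waits until the Pod's target node is
// known.
func localVolumeStorageClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "local-lvm"},
		Provisioner:       "local.csi.example.com", // hypothetical CSI driver
		VolumeBindingMode: &mode,
	}
}
```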
The Logical Volume Manager (LVM) mechanism is a mechanism for managing disk partitions in a Linux environment: several disks (physical volumes) may be combined to form a storage pool, or volume group. Each time a logical volume (LV) of some size is partitioned from a volume group, the LVM creates a new logical device. The underlying raw disks are no longer controlled directly by the kernel but by the LVM layer, and for upper-layer applications the volume group, instead of the disk block, becomes the basic unit of data storage. The LVM manages the physical extents of all physical volumes and maintains the mapping between logical extents and physical extents. An LVM logical device provides upper-layer applications with the same functions as a physical disk, such as file system creation and data access, but it is not limited by physical constraints: a logical volume need not be a contiguous space, it can span many physical volumes, and it can be resized at will at any time, which makes disk space easier to manage than with physical disks. Through the LVM mechanism, physical disk resources can be integrated, a logical volume of a specified size can be created, and capacity expansion operations are allowed. Therefore, optionally, in the Pod scheduling method provided in the embodiment of the present application, the storage plug-in may create the target PV at the target node and then invoke the LVM mechanism to create the local volume (LV) corresponding to the target PV at the target node, so as to implement quota management of local volume resources and create a local volume of the size the user requires.
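The LV creation itself can be delegated to the stock LVM2 command-line tools; a minimal Go sketch, with the volume group and LV names supplied by the caller and purely illustrative, is:

```go
package plugin

import (
	"fmt"
	"os/exec"
)

// createLV carves a logical volume of the requested size out of an existing
// volume group by invoking lvcreate, the standard LVM2 tool. LVM rounds the
// LV up to a whole number of physical extents.
func createLV(vgName, lvName, size string) error { // size e.g. "10G"
	out, err := exec.Command("lvcreate", "-L", size, "-n", lvName, vgName).CombinedOutput()
	if err != nil {
		return fmt.Errorf("lvcreate failed: %v: %s", err, out)
	}
	return nil
}
```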
Since the PVC states the storage size and access mode required by the user, optionally, in one possible implementation, the storage plug-in may obtain the request information in the PVC, then invoke the LVM mechanism and create the LV corresponding to the target PV at the target node according to the request information.
Wherein the request information includes at least a storage size and an access mode. Certainly, in practical applications, the request information in the PVC may further include other types of information for characterizing the user requirements, which is not limited in this application embodiment.
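Pulling those two pieces of request information out of a PVC object is straightforward with the Kubernetes core API types; a sketch:

```go
package plugin

import (
	corev1 "k8s.io/api/core/v1"
)

// requestFromPVC extracts the storage size and access modes that the user
// requested in the persistent volume claim.
func requestFromPVC(pvc *corev1.PersistentVolumeClaim) (sizeBytes int64, modes []corev1.PersistentVolumeAccessMode) {
	size := pvc.Spec.Resources.Requests[corev1.ResourceStorage]
	return size.Value(), pvc.Spec.AccessModes
}
```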
Optionally, in a possible implementation, the storage plug-in is a local volume storage plug-in declared in a storage class created by the container cluster. For example, an administrator may pre-create a StorageClass belonging to a local volume in the container cluster, in which the storage plug-in is declared as a local volume storage plug-in.
Optionally, in a possible implementation manner, the storage plug-in may be a CSI plug-in. Specifically, a CSI plugin for local volume storage may be implemented according to the CSI standard. It is to be understood that, in practical applications, the storage plug-in may also be a local volume storage plug-in implemented based on other types of standards, which is not limited in this embodiment of the present application.
In summary, as shown in fig. 4, step S203 in fig. 2 may be replaced with S2031 to S2032:
S2031, the scheduler determines, from the local volume quota information, the total amount of local volume resources of each node and the occupied amount of local volume resources of each node.
S2032, the scheduler determines the target node from each node according to the total amount of the local volume resources, the occupied amount of the local volume resources and the occupied amount of the local storage resources when the Pod to be scheduled runs.
Alternatively, as shown in fig. 5, step S2032 in fig. 4 may be replaced with S20321 to S20323:
S20321, the scheduler determines the difference between the total amount of local volume resources of the first node and the occupied amount of local volume resources of the first node.
S20322, if the difference between the total amount of the local volume resources of the first node and the occupied amount of the local volume resources of the first node is greater than or equal to the occupied amount of the local storage resources when the Pod to be scheduled is running, the scheduler determines the first node as a preselected node.
S20323, the scheduler randomly determines a target node from the preselected nodes.
Optionally, as shown in fig. 6, step S2032 in fig. 4 may also be replaced by S20324-S20325:
S20324, the scheduler acquires the memory resource information and central processing unit (CPU) resource information of each node.
S20325, the scheduler determines a target node from each node according to the local volume quota information, the local storage resource occupation amount when the Pod to be scheduled runs, the memory resource information and the CPU resource information.
As shown in fig. 7, an embodiment of the present application further provides a scheduler, which includes an obtaining module 11, a determining module 12, and a scheduling module 13. The scheduler may be the scheduler referred to in fig. 1 in the above embodiment.
The obtaining module 11 executes S201 and S202 in the above method embodiment, the determining module 12 executes S203 in the above method embodiment, and the scheduling module 13 executes S204 in the above method embodiment.
The acquisition module 11 is configured to acquire local volume quota information of each node in the Kubernetes platform;
the obtaining module 11 is further configured to obtain a local storage resource occupation amount when the Pod to be scheduled runs;
the determining module 12 is configured to determine a target node from each node according to the local volume quota information and the occupancy of local storage resources when the Pod to be scheduled runs;
and the scheduling module 13 is configured to schedule the Pod to be scheduled to the target node.
Optionally, the determining module 12 is specifically configured to:
determining the total amount of local volume resources of each node in each node and the occupied amount of the local volume resources of each node from the local volume quota information;
and determining a target node from each node according to the total amount of the local volume resources, the occupied amount of the local volume resources and the occupied amount of the local storage resources when the Pod to be scheduled runs.
Optionally, the determining module 12 is specifically configured to:
determining the difference value between the total amount of the local volume resources of the first node and the occupied amount of the local volume resources of the first node; the first node is any one of the nodes;
if the difference is larger than or equal to the occupation amount of the local storage resources when the Pod to be scheduled runs, determining the first node as a preselected node;
and randomly determining a target node from the preselected nodes.
Optionally, the determining module 12 is further specifically configured to:
acquiring memory resource information and central processing unit (CPU) resource information of each node;
and determining a target node from each node according to the local volume quota information, the local storage resource occupation amount when the Pod to be scheduled runs, the memory resource information and the CPU resource information.
Optionally, the determining module 12 is further specifically configured to:
and selecting, according to the running state of each Pod, the occupation amount of local storage resources when the Pod to be scheduled runs from the local storage resource occupation amounts determined by the respective Pods.
Optionally, the scheduler further includes a sending module, and the sending module is configured to: and sending the address information of the target node to the storage plug-in.
Optionally, the scheduler provided in the embodiment of the present application may further include a storage module, where the storage module is configured to store a program code of the scheduler, and the like.
As shown in fig. 8, an embodiment of the present application further provides a storage plug-in, which includes a sending module 21 and a receiving module 22. The storage plug-in may be the storage plug-in referred to in fig. 1 in the above embodiments.
The sending module 21 executes S301 and S303 in the above method embodiment, and the receiving module 22 executes S302 in the above method embodiment.
A sending module 21, configured to send a first request to a container cluster where a Pod to be scheduled is located; the first request comprises local volume quota information of each node in a Kubernetes platform, and the first request is used for indicating each Pod in the container cluster to determine the local storage resource occupancy amount when the Pod operates;
the receiving module 22 is configured to receive the local storage resource occupation amount that each Pod determined for its own runtime;
and the sending module 21 is configured to send the local storage resource occupation amounts determined by the respective Pods to the scheduler.
Optionally, in a possible design manner, the storage plug-in provided by the present application further includes a creation module;
the receiving module 22 is further configured to receive address information of the target node sent by the scheduler;
and the creating module is used for creating the target persistent volume PV at the target node according to the address information and creating the local volume LV corresponding to the target PV at the target node.
Optionally, in a possible design manner, the creating module is specifically configured to: acquire request information in the persistent volume claim PVC; the request information at least comprises a storage size and an access mode;
and calling an LVM mechanism, and creating an LV corresponding to the target PV at the target node according to the request information.
Optionally, in one possible design, the storage plugin is a local volume storage plugin declared in a storage class created by the container cluster.
Optionally, in a possible design, the storage plug-in is a container storage interface CSI plug-in.
Optionally, the storage plug-in provided in the embodiment of the present application may further include a storage module, where the storage module is used to store the program code of the storage plug-in, and the like.
As shown in fig. 9, an embodiment of the present application further provides a Pod scheduling apparatus, which includes a memory 41, a processor 42, a bus 43 and a communication interface 44; the memory 41 is used for storing computer-executable instructions, and the processor 42 is connected with the memory 41 through the bus 43; when the Pod scheduling apparatus is running, the processor 42 executes the computer-executable instructions stored in the memory 41 to cause the Pod scheduling apparatus to perform the Pod scheduling method applied to the storage plug-in as provided in the above embodiments.
In particular implementations, as one example, the processor 42 (42-1 and 42-2) may include one or more central processing units (CPUs), such as CPU0 and CPU1 shown in fig. 9. Also as an example, the Pod scheduling apparatus may include a plurality of processors 42, such as the processor 42-1 and the processor 42-2 shown in fig. 9. Each of these processors 42 may be a single-core processor or a multi-core processor. A processor 42 herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 41 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 41 may be self-contained and coupled to the processor 42 via the bus 43, or may be integrated with the processor 42.
In a specific implementation, the memory 41 is used for storing data in the present application and computer-executable instructions corresponding to software programs for executing the present application. The processor 42 may invoke various functions of the scheduling means of Pod by running or executing software programs stored in the memory 41, and invoking data stored in the memory 41.
The communication interface 44 is any device, such as a transceiver, for communicating with other devices or communication networks, such as a control system, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), and the like. The communication interface 44 may include a receiving unit implementing a receiving function and a transmitting unit implementing a transmitting function.
The bus 43 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 43 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 9, but this does not indicate only one bus or one type of bus.
As an example, in connection with fig. 8, the function implemented by the receiving module in the storage plug-in is the same as the function implemented by the receiving unit in fig. 9, and the function implemented by the storage module in the storage plug-in is the same as the function implemented by the memory in fig. 9.
For the explanation of the related contents in this embodiment, reference may be made to the above method embodiments, which are not described herein again.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
An embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium; when the instructions are executed by a computer, the computer is enabled to execute the Pod scheduling method applied to the storage plug-in as provided in the above embodiments.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a register, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, any suitable combination of the foregoing, or any other form of computer readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). In embodiments of the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. A Pod scheduling method, applied to a scheduler, comprising:
acquiring local volume quota information of each node in a Kubernetes platform;
acquiring the occupation amount of local storage resources when the Pod to be scheduled runs;
determining a target node from each node according to the local volume quota information and the occupancy of local storage resources when the Pod to be scheduled runs;
and scheduling the Pod to be scheduled to the target node.
2. The method for scheduling a Pod according to claim 1, wherein the determining a target node from the nodes according to the local volume quota information and a local storage resource occupancy amount when the Pod to be scheduled runs comprises:
determining the total amount of the local volume resources of each node in each node and the occupied amount of the local volume resources of each node from the local volume quota information;
and determining the target node from each node according to the total amount of the local volume resources, the occupied amount of the local volume resources and the occupied amount of the local storage resources when the Pod to be scheduled runs.
3. The method for scheduling Pod of claim 2, wherein the determining the target node from the nodes according to the total amount of local volume resources, the occupied amount of local volume resources, and the occupied amount of local storage resources when the Pod to be scheduled runs comprises:
determining a difference value between the total amount of the local volume resources of a first node and the occupied amount of the local volume resources of the first node; the first node is any one of the nodes;
if the difference is larger than or equal to the occupation amount of the local storage resources when the Pod to be scheduled runs, determining the first node as a preselected node;
and randomly determining the target node from the preselected nodes.
4. The method for scheduling a Pod according to claim 1, wherein the determining a target node from the nodes according to the local volume quota information and a local storage resource occupancy amount when the Pod to be scheduled runs, further comprises:
acquiring memory resource information and central processing unit (CPU) resource information of each node;
and determining a target node from each node according to the local volume quota information, the local storage resource occupation amount of the Pod to be scheduled in operation, the memory resource information and the CPU resource information.
5. The method for scheduling Pod of claim 1, wherein the obtaining of the local storage resource occupancy when the Pod to be scheduled is running comprises:
acquiring local storage resource occupation amount of each Pod in the container cluster during self operation from the storage plug-in;
and selecting the local storage resource occupation amount when the Pod to be scheduled runs from the local storage resource occupation amounts that the respective Pods determined for their own runtime.
6. The method of claim 5, wherein the selecting the local storage resource occupancy of the Pod runtime to be scheduled from the local storage resource occupancy of the runtime itself determined by each Pod comprises:
and selecting, according to the running state of each Pod, the local storage resource occupation amount when the Pod to be scheduled runs from the local storage resource occupation amounts determined by the respective Pods.
7. The method of scheduling Pod of claim 1, wherein after the determining the target node from the nodes, the method further comprises:
and sending the address information of the target node to a storage plug-in.
8. A Pod scheduling method, applied to a storage plug-in, comprising:
sending a first request to a container cluster where the Pod to be scheduled is located; the first request comprises local volume quota information of each node in a Kubernetes platform, and the first request is used for indicating each Pod in the container cluster to determine the local storage resource occupancy amount when the Pod operates;
receiving the local storage resource occupation amount that each Pod determined for its own runtime;
and sending the local storage resource occupation amount of the self running determined by each Pod to a scheduler.
9. The Pod scheduling method of claim 8, further comprising:
receiving address information of a target node sent by the scheduler;
and creating a target persistent volume (PV) at the target node according to the address information, and creating a logical volume (LV) corresponding to the target PV at the target node.
10. The Pod scheduling method of claim 9, wherein creating the target PV at the target node according to the address information and creating the LV corresponding to the target PV at the target node comprises:
creating the target PV at the target node, and invoking a Logical Volume Management (LVM) mechanism to create the LV corresponding to the target PV at the target node.
11. The Pod scheduling method of claim 10, wherein invoking the LVM mechanism to create the LV corresponding to the target PV at the target node comprises:
acquiring request information from a persistent volume claim (PVC), the request information comprising at least a storage size and an access mode;
and invoking the LVM mechanism to create the LV corresponding to the target PV at the target node according to the request information.
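Claims 10-11 map naturally onto the standard LVM command line: the PVC's requested size becomes the size of the logical volume. A sketch that shells out to `lvcreate`; the volume group name `vg-local` and the function shape are assumptions, and error handling is minimal.

```go
package main

import (
	"fmt"
	"os/exec"
)

// createLV creates, on the target node, the logical volume backing the
// target PV, sized from the PVC's request information, e.g.
//   lvcreate -L 10G -n pv-example vg-local
func createLV(lvName, size string) error {
	out, err := exec.Command("lvcreate", "-L", size, "-n", lvName, "vg-local").CombinedOutput()
	if err != nil {
		return fmt.Errorf("lvcreate failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createLV("pv-example", "10G"); err != nil {
		fmt.Println(err)
	}
}
```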
12. The Pod scheduling method of any one of claims 8 to 11, wherein the storage plug-in is a local volume storage plug-in declared in a storage class created by the container cluster.
13. The Pod scheduling method of claim 12, wherein the storage plug-in is a Container Storage Interface (CSI) plug-in.
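For claims 12-13, such a declaration typically lives in a StorageClass whose provisioner names the CSI plug-in. A sketch using the standard Kubernetes API types; the class name and the provisioner string `example.com/local-lvm` are placeholders. `WaitForFirstConsumer` is the binding mode commonly paired with local volumes, since the LV can only be created once the target node is known.

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localStorageClass builds a StorageClass declaring a local volume CSI
// plug-in as its provisioner; "example.com/local-lvm" is a placeholder.
func localStorageClass() *storagev1.StorageClass {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "local-lvm"},
		Provisioner:       "example.com/local-lvm",
		VolumeBindingMode: &mode,
	}
}

func main() {
	fmt.Println(localStorageClass().Provisioner)
}
```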
14. A scheduler, comprising: an acquisition module, a determination module, and a scheduling module; wherein
the acquisition module is configured to acquire local volume quota information of each node in a Kubernetes platform;
the acquisition module is further configured to acquire the runtime local storage occupancy of the Pod to be scheduled;
the determination module is configured to determine a target node from the nodes according to the local volume quota information and the runtime local storage occupancy of the Pod to be scheduled;
and the scheduling module is configured to schedule the Pod to be scheduled to the target node.
15. A storage plug-in, comprising: a sending module and a receiving module; wherein
the sending module is configured to send a first request to the container cluster in which the Pod to be scheduled is located, the first request comprising local volume quota information of each node in a Kubernetes platform and instructing each Pod in the container cluster to determine its runtime local storage occupancy;
the receiving module is configured to receive the runtime local storage occupancy determined by each Pod;
and the sending module is further configured to send the runtime local storage occupancy determined by each Pod to a scheduler.
16. A Pod scheduling system, comprising a container cluster, a scheduler, and a storage plug-in.
CN202110282885.9A 2021-03-16 2021-03-16 Pod scheduling method, scheduler, memory plug-in and system Pending CN113010265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110282885.9A CN113010265A (en) 2021-03-16 2021-03-16 Pod scheduling method, scheduler, memory plug-in and system


Publications (1)

Publication Number Publication Date
CN113010265A (en) 2021-06-22

Family

ID=76408594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110282885.9A Pending CN113010265A (en) 2021-03-16 2021-03-16 Pod scheduling method, scheduler, memory plug-in and system

Country Status (1)

Country Link
CN (1) CN113010265A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641311A (en) * 2021-10-18 2021-11-12 浩鲸云计算科技股份有限公司 Method and system for dynamically allocating container storage resources based on local disk
CN114090176A (en) * 2021-11-19 2022-02-25 苏州博纳讯动软件有限公司 Kubernetes-based container scheduling method
CN113961314A (en) * 2021-12-16 2022-01-21 苏州浪潮智能科技有限公司 Container application scheduling method and device, electronic equipment and storage medium
WO2023109015A1 (en) * 2021-12-16 2023-06-22 苏州浪潮智能科技有限公司 Container application scheduling method and apparatus, electronic device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination