CN113645300A - Node intelligent scheduling method and system based on Kubernetes cluster - Google Patents

Node intelligent scheduling method and system based on Kubernetes cluster

Info

Publication number
CN113645300A
Authority
CN
China
Prior art keywords
node
container
nodes
image
container image
Prior art date
Legal status
Granted
Application number
CN202110915567.1A
Other languages
Chinese (zh)
Other versions
CN113645300B (en)
Inventor
张潇
潘远航
汪慧
李中军
游洪莲
Current Assignee
Shanghai Daoke Network Technology Co ltd
Original Assignee
Shanghai Daoke Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Daoke Network Technology Co ltd
Priority to CN202110915567.1A
Publication of CN113645300A
Application granted
Publication of CN113645300B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a node intelligent scheduling method and system based on a Kubernetes cluster. The method comprises the following steps: determining the CPU architecture and operating system adapted to a container image according to the configuration file of the container image; acquiring the labels of the nodes in the Kubernetes cluster; screening the nodes in the Kubernetes cluster according to the labels to obtain adapted nodes, where the CPU architecture and operating system of an adapted node are adapted to the container image; and selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node. Intelligent scheduling of nodes is thereby achieved in a Kubernetes cluster with multiple operating systems and heterogeneous architectures: the container image is conveniently and efficiently deployed on the correct node in the Kubernetes cluster, its normal operation after deployment can be 100% guaranteed, and the possibility of container image deployment failure is effectively avoided.

Description

Node intelligent scheduling method and system based on Kubernetes cluster
Technical Field
The application relates to the technical field of cloud computing, in particular to a Kubernetes cluster-based intelligent node scheduling method and system.
Background
At the present stage, the state is vigorously developing an IT application innovation ("Xinchuang") ecosystem in support of its national technology strategy. Localization and unified management of the underlying infrastructure have therefore become very important requirements. In the field of CPUs and operating systems in particular, domestic production and autonomous control of the technology are the general trend, and a large number of domestically designed CPU architectures and domestic operating systems will appear. Meanwhile, the traditional X86-architecture CPU and the Windows operating system still hold a large market share, so a situation in which multiple architectures and multiple operating systems coexist will arise. Based on the Kubernetes system, an enterprise can bring nodes with different CPU architectures and different operating systems into the same cluster for management, building a hybrid cluster of multiple operating systems and heterogeneous architectures. However, each container image can only run on nodes whose CPU architecture and operating system are adapted to it; that is, a container image cannot be made compatible with multiple operating systems and heterogeneous architectures. The reason is that a container image must be built on a base image, and the properties of that base image determine the CPU architecture and operating system on which the container image can run. Consequently, if the Kubernetes system schedules a container image onto an incompatible node, the container image will not run properly.
Therefore, in a hybrid cluster with multiple operating systems and heterogeneous architectures, a technical solution is needed that ensures a container image can be scheduled to the correct node.
Disclosure of Invention
The present application aims to provide a Kubernetes cluster-based node intelligent scheduling method and system, so as to solve or alleviate the problems in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
The application provides a node intelligent scheduling method based on a Kubernetes cluster, which comprises the following steps: determining the CPU architecture and operating system adapted to a container image according to the configuration file of the container image; acquiring the labels of the nodes in the Kubernetes cluster, where a label is used to identify the CPU architecture and operating system of the corresponding node; screening the nodes in the Kubernetes cluster according to the labels to obtain adapted nodes, where the CPU architecture and operating system of an adapted node are adapted to the container image; and selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node.
Preferably, determining the CPU architecture and operating system adapted to the container image according to the configuration file of the container image comprises: parsing the configuration file of the container image to obtain the image repository address of the container image; accessing the image repository of the container image, where the image repository stores the metadata information of the container image; and determining the CPU architecture and operating system adapted to the container image according to the metadata information of the container image.
Preferably, the image repository of the container image provides services externally through an open interface specification, and correspondingly, accessing the image repository of the container image comprises: based on the open interface specification, requesting access to the metadata information storage file of the container image in the image repository through the hypertext transfer protocol.
Preferably, screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes comprises: sorting the CPU architectures and the operating systems respectively according to preset rules; and screening the nodes in the Kubernetes cluster by the corresponding CPU architecture and operating system type, in the order given by the sorting result, to obtain the adapted nodes.
Preferably, screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes further comprises: when no adapted node exists in the Kubernetes cluster, suspending deployment of the container image in the Kubernetes cluster; and incorporating into the Kubernetes cluster a new node whose CPU architecture and operating system are adapted to the container image, to serve as the adapted node.
Preferably, selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node comprises: setting the resource requirements needed for the container image to run; screening the adapted nodes according to the resource requirements needed for the container image to run, to determine the runnable nodes; scoring each runnable node and determining the scheduling node according to the scoring result; and deploying the container image on the scheduling node.
Preferably, setting the resource requirements needed for the container image to run comprises: creating the container group in which the container image runs; and setting resource requirements in the configuration file of the container group as the resource requirements needed for the container image to run.
Preferably, deploying the container image on the scheduling node comprises: writing the identifier information of the scheduling node into the configuration file of the container group; and deploying the container group on the scheduling node according to the identifier information of the scheduling node.
Preferably, screening the adapted nodes according to the resource requirements needed for the container image to run, to determine the runnable nodes, further comprises: when no runnable node exists in the Kubernetes cluster, adding the scheduling task of the container image to a task queue; and binding the scheduling task to the adapted node.
An embodiment of the present application further provides a node intelligent scheduling system based on a Kubernetes cluster, comprising: a determining module configured to determine the CPU architecture and operating system adapted to a container image according to the configuration file of the container image; an acquisition module configured to acquire the labels of the nodes in the Kubernetes cluster, where a label is used to identify the CPU architecture and operating system of the corresponding node; a screening module configured to screen the nodes in the Kubernetes cluster according to the node labels to obtain adapted nodes, where the CPU architecture and operating system of an adapted node are adapted to the container image; and a selection and deployment module configured to select a scheduling node from the adapted nodes and deploy the container image on the scheduling node.
Compared with the closest prior art, the technical scheme of the embodiment of the application has the following beneficial effects:
in the technical solution provided by the application, the CPU architecture and operating system adapted to a container image are determined according to the configuration file of the container image; the nodes in the Kubernetes cluster are then screened according to their labels, and the nodes whose CPU architecture and operating system are adapted to the container image are taken as the adapted nodes; finally, a scheduling node is selected from the adapted nodes in the Kubernetes cluster, and the container image is deployed on that scheduling node. Intelligent scheduling of nodes is thereby achieved in a Kubernetes cluster with multiple operating systems and heterogeneous architectures: the container image is conveniently and efficiently deployed on the correct node in the Kubernetes cluster, its normal operation after deployment can be 100% guaranteed, and the possibility of container image deployment failure is effectively avoided.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. Wherein:
fig. 1 is a schematic flowchart of a Kubernetes cluster-based node intelligent scheduling method according to some embodiments of the present application;
fig. 2 is a schematic diagram illustrating an example of a Kubernetes cluster-based node intelligent scheduling method according to some embodiments of the present application;
fig. 3 is a schematic flow chart of step S101 provided according to some embodiments of the present application;
fig. 4 is a schematic flow chart of step S103 provided according to some embodiments of the present application;
fig. 5 is a schematic flow chart of step S104 provided according to some embodiments of the present application;
fig. 6 is a schematic structural diagram of a Kubernetes cluster-based node intelligent scheduling system according to some embodiments of the present application;
fig. 7 is a schematic diagram of the operation of a Kubernetes cluster-based node intelligent scheduling system according to some embodiments of the present application;
FIG. 8 is a schematic block diagram of a determination module provided in accordance with some embodiments of the present application;
FIG. 9 is a schematic structural diagram of a screening module provided in accordance with some embodiments of the present application;
fig. 10 is a block diagram of a selection and deployment module according to some embodiments of the present application.
Detailed Description
The present application will be described in detail below with reference to the embodiments and the attached drawings. The various examples are provided by way of explanation of the application and are not limiting of the application. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present application without departing from the scope or spirit of the application. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. It is therefore intended that the present application cover such modifications and variations as come within the scope of the appended claims and their equivalents.
In order to deploy a container image on the correct node, enterprises currently implement application scheduling and management through the node selector (nodeSelector) or node affinity (nodeAffinity) of the native Kubernetes system. In this process the number of container images involved is large, and scheduling is implemented by manually editing the yaml of the container group (Pod), so the workload is huge, the efficiency is low, and errors easily occur. Moreover, with the explosive growth of an enterprise's container images and changes in external factors, development and operation and maintenance personnel have to maintain a huge table of container image version information when scheduling and managing containers; when creating an application, they have to look up the architecture and operating system information of the container image and manually specify the correct node affinity. Furthermore, when a container image undergoes a version iteration, its supported CPU architectures and operating systems may change, which may cause the specified node affinity to become invalid and the scheme to fail. Thus, in an enterprise-level scenario, the existing Kubernetes system has no way to ensure that an application can be conveniently and efficiently deployed to the correct node in a container cluster.
Fig. 1 is a schematic flowchart of a Kubernetes cluster-based node intelligent scheduling method according to some embodiments of the present application; fig. 2 is a schematic diagram illustrating an example of a Kubernetes cluster-based node intelligent scheduling method according to some embodiments of the present application; as shown in figs. 1 and 2, the Kubernetes cluster-based node intelligent scheduling method includes:
step S101, determining a CPU architecture and an operating system adapted to the container mirror image according to the configuration file of the container mirror image.
In the embodiment of the present application, information of the CPU architecture and the operating system of the container mirror image is stored in a mirror image repository of the container mirror image, and information of the CPU architecture and the operating system of the container mirror image is acquired by accessing the mirror image repository using yaml configuration files of the container mirror image, so as to determine the CPU architecture and the operating system adapted to the container mirror image.
Fig. 3 is a schematic flow chart of step S101 provided according to some embodiments of the present application; as shown in fig. 3, determining the CPU architecture and operating system adapted to the container image according to the configuration file of the container image includes:
step S111, analyzing the configuration file of the container mirror image to obtain a mirror image warehouse address of the container; the container mirror image warehouse stores metadata information of the container mirror image.
In the embodiment of the present application, the address of the mirror repository of the container mirror is obtained by parsing the yaml configuration file of the container mirror. Specifically, relevant information of the container image, such as developer information of the container image, information of an image warehouse storing the container image, version iteration information of the container image, and the like, is recorded in the yaml configuration file of the container image.
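As a minimal illustration (the names, registry address and tag below are examples introduced for this sketch, not values defined by this application), the image repository address that step S111 parses out typically appears in the image field of the container group's yaml configuration file, for example:

apiVersion: v1
kind: Pod
metadata:
  name: myapp                      # hypothetical Pod name
spec:
  containers:
    - name: myapp
      # "registry.example.com" is the image repository address obtained in step S111;
      # "library/myapp:v1" identifies the container image and its version tag
      image: registry.example.com/library/myapp:v1
  imagePullSecrets:
    - name: registry-credential    # authorization information for a private image repository, if needed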
Step S121, accessing the image repository address of the container image; the image repository of the container image stores the metadata information of the container image.
It should be noted that the image repository may be a public image repository or a private image repository, and correspondingly the container image is a public image or a private image. When the container image is a public image stored in a public image repository, any user can access the public image repository and obtain the files stored in it without authorization from the developer of the container image or the owner of the image repository. When the container image is a private image stored in a private image repository, the address of the image repository can only be accessed with the authorization of the developer of the container image or the owner of the image repository; specifically, the authorization information can be obtained from the yaml configuration file of the private image.
In the embodiment of the application, container images are stored in image repositories and pulled by users for use. Each image repository provides an open interface specification (OpenAPI) to offer services externally, so that a user can operate on the files of the container images stored in the image repository based on the OpenAPI.
In a specific example, the image repository of the container image provides services externally through an open interface specification; correspondingly, accessing the image repository of the container image includes: based on the open interface specification, requesting access to the metadata information storage file of the container image in the image repository through the hypertext transfer protocol.
In the embodiment of the present application, the address of the image repository is accessed through a Hypertext Transfer Protocol (HTTP) request, and the metadata information of the container image stored in the repository is obtained. For example, to obtain the metadata information of a container image stored in an image repository (Docker Hub), an HTTP GET request can be sent to the path in Docker Hub that stores the metadata information of the container image (/images/{name}/json, where {name} is the name of the container image); the metadata information storage file of the container image is then received in JSON format, for example:
{
  "Id": "sha256:digest",
  "Container": "********",
  "OS": "linux",
  "Architecture": "amd64",
  ...
}
It can be seen that the metadata information includes the operating system and CPU architecture supported by the container image.
Step S131, determining the CPU architecture and operating system adapted to the container image according to the metadata information of the container image.
In the embodiment of the application, after the address of the image repository is accessed through the HTTP request and the metadata information of the container image stored in the repository is obtained, the CPU architecture and operating system adapted to the container image can be determined automatically from the metadata information.
In a native Kubernetes system, by comparison, node scheduling relies on the node selector (nodeSelector) or node affinity (nodeAffinity): when the container group in which the container image runs is created, the yaml file of the container group must be edited manually to specify the node affinity and to set the CPU architecture and operating system required for the container image to run.
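For reference, such a manually edited node-affinity section typically looks as follows in the container group's yaml file. This is a sketch only; kubernetes.io/arch and kubernetes.io/os are the well-known Kubernetes node labels, and the names and values are examples:

apiVersion: v1
kind: Pod
metadata:
  name: myapp            # hypothetical Pod name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # only schedule onto nodes whose architecture and operating system
              # match what the container image supports
              - key: kubernetes.io/arch
                operator: In
                values: ["amd64"]
              - key: kubernetes.io/os
                operator: In
                values: ["linux"]
  containers:
    - name: myapp
      image: registry.example.com/library/myapp:v1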
Step S102, acquiring the labels of the nodes in the Kubernetes cluster; the labels are used to identify the CPU architecture and operating system of the corresponding nodes.
In the embodiment of the application, Kubernetes automatically learns the CPU architecture and operating system of each node that joins the cluster, and automatically attaches the corresponding label to each node. That is, a Kubernetes cluster may contain nodes running the Linux operating system on the amd64 architecture, nodes running Linux on the ppc64le architecture, nodes running Linux on the s390x architecture, and nodes running the Windows Server 2019 operating system on the X86 architecture; Kubernetes automatically learns the CPU architecture and operating system of each node and attaches to it the label corresponding to its CPU architecture and operating system.
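As an illustrative sketch (the node name is an example; kubernetes.io/arch and kubernetes.io/os are the well-known labels that current Kubernetes versions set automatically from information reported by the kubelet), the label section of such a node object looks roughly like this:

apiVersion: v1
kind: Node
metadata:
  name: worker-01                  # hypothetical node name
  labels:
    kubernetes.io/arch: amd64      # CPU architecture of the node
    kubernetes.io/os: linux        # operating system of the node
    kubernetes.io/hostname: worker-01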
Step S103, screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes; the CPU architecture and operating system of an adapted node are adapted to the container image.
In the embodiment of the present application, the nodes in the Kubernetes cluster are screened according to the labels corresponding to the CPU architecture and operating system adapted to the container image, so as to screen out the correct nodes (i.e. the adapted nodes). Because the nodes in the Kubernetes cluster are screened, on the basis of preset screening rules, according to the CPU architecture and operating system supported by the container image, and the container image is deployed on a correct node, the container image is 100% able to work normally after deployment, which fundamentally solves the problem of container image deployment failure.
Fig. 4 is a schematic flow chart of step S103 provided according to some embodiments of the present application; as shown in fig. 4, screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes includes:
Step S113, sorting the CPU architectures and the operating systems respectively according to preset rules.
In one possible implementation, the preset rules refer to the number of container images in the image repository that support each CPU architecture and operating system.
It should be understood that in the embodiment of the present application the nodes in the Kubernetes cluster need to be screened according to the CPU architecture and operating system supported by the container image, so the screening criteria for the nodes must be determined according to the CPU architecture and operating system supported by the container image to be deployed.
For a container image that is about to be deployed, the Kubernetes cluster cannot know in advance which CPU architectures and operating systems it supports, so the screening criteria must be determined before the nodes are screened.
That is, the CPU architectures and operating systems supported by the container image need to be determined first, and the nodes are then screened according to the result. Probabilistically, the larger the number of container images in the image repository that support a given type of CPU architecture and operating system, the higher the probability that a randomly selected container image supports that type of CPU architecture and operating system.
Step S123, screening the nodes in the Kubernetes cluster by the corresponding CPU architecture and operating system type, in the order given by the sorting result, to obtain the adapted nodes.
For example, in the image repository the number of container images supporting the Linux operating system is much larger than the number supporting the Windows operating system, so it is determined first whether the container image supports the Linux operating system, and then in turn whether it supports the Windows Server 2016, Windows Server 2019, Windows Server 1703 and Windows Server 1809 operating systems, which improves the efficiency of determining the operating systems supported by the container image. Once the operating system type supported by the container image has been determined, it is used as a screening criterion to screen the nodes in the Kubernetes cluster.
Similarly, in the image repository the number of container images supporting the X86 architecture is the largest, so it is determined first whether the container image supports the X86 architecture, and then in turn whether it supports the arm, arm64, amd, s390x and ppc64le architectures, which improves the efficiency of determining the CPU architectures supported by the container image. Once the CPU architecture type supported by the container image has been determined, it is used as a screening criterion to screen the nodes in the Kubernetes cluster. In this way, the speed of screening out the nodes whose operating system and CPU architecture are adapted to the container image can be effectively increased, and the screening efficiency is improved.
In the embodiment of the present application, two screening methods may be adopted when screening the nodes in the Kubernetes cluster according to the labels corresponding to the CPU architecture and operating system adapted to the container image. In the first screening method, the nodes whose CPU architecture and operating system are not adapted to the container image are filtered out, and the remaining nodes are the correct nodes. In the second screening method, the nodes whose CPU architecture and operating system are both adapted to the container image are selected, and the selected nodes are the correct nodes. The first screening method processes less data than the second and is therefore faster.
As shown in Table 1, the nodes in the Kubernetes cluster are screened based on the preset screening rules to obtain the correct nodes.
Table 1: Screening of nodes in the Kubernetes cluster based on preset screening rules
In some optional embodiments, screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes further includes:
Step S133, when no adapted node exists in the Kubernetes cluster, suspending deployment of the container image in the Kubernetes cluster.
In the embodiment of the application, after the nodes in the Kubernetes cluster have been screened based on the preset screening rules, if no correct node exists in the cluster, the deployment process of the container image in the Kubernetes cluster is suspended and the user is reminded that the container image cannot be deployed and run normally in the current Kubernetes cluster.
Step S143, incorporating into the Kubernetes cluster a new node whose CPU architecture and operating system are adapted to the container image, to serve as the adapted node.
From the foregoing description it can be seen that a container image can only run on nodes whose CPU architecture and operating system are adapted to it. If no adapted node exists in the current Kubernetes cluster and the container image must nevertheless be deployed in that cluster, the only option is to incorporate a new node adapted to the container image into the current Kubernetes cluster as the adapted node. At this point the new node is the only adapted node in the Kubernetes cluster and no other applications or services are deployed on it, so no further screening and scoring work is needed and it can directly serve as the scheduling node on which the container image is deployed.
It should be noted that, in order to enable the Kubernetes cluster to incorporate new nodes automatically, in the embodiment of the present application a node backup repository is established for the Kubernetes cluster. Nodes with less common operating systems and CPU architectures are kept in the node backup repository; no applications or services are deployed on them and they remain in a dormant state.
Further, to reduce the number of backup nodes, multiple Kubernetes clusters may share one node backup repository.
Step S104, selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node.
In the embodiment of the application, the nodes in the Kubernetes cluster are screened according to the labels, and the operating system and CPU architecture of the resulting adapted nodes are ones that the container image is adapted to; however, the container image can only be deployed on an adapted node if that node's hardware resources can also meet the resource requirements of the container image.
Fig. 5 is a schematic flow chart of step S104 provided according to some embodiments of the present application; as shown in fig. 5, selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node includes:
Step S114, setting the resource requirements needed for the container image to run.
Specifically, the container group in which the container image runs is created, and resource requirements are set in the configuration file of the container group as the resource requirements needed for the container image to run.
In the embodiment of the application, a static container image needs to run inside a container group in order to provide services externally. In the process of deploying a container image in the Kubernetes cluster, the cluster automatically creates a container group for the container image and then deploys that container group on the scheduling node to complete the deployment of the container image. It should be understood that the scheduling node interacts directly with the container group: the resources the scheduling node provides to the container group are in fact the resources provided for running the container image, and the scheduling node allocates resources to the container group according to the resource requirements set in the container group's yaml configuration file. The resource requirements needed for the container image to run therefore have to be set in the yaml configuration file of the container group.
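A minimal sketch of such a resource section in the container group's yaml configuration file is shown below (the quantities are example values, not requirements defined by this application); the requests block is what a scheduler checks when deciding whether an adapted node can run the container group:

apiVersion: v1
kind: Pod
metadata:
  name: myapp            # hypothetical Pod name
spec:
  containers:
    - name: myapp
      image: registry.example.com/library/myapp:v1
      resources:
        requests:                     # minimum resources the node must be able to provide
          cpu: "500m"
          memory: "512Mi"
          ephemeral-storage: "1Gi"
        limits:                       # upper bound the container group may consume
          cpu: "1"
          memory: "1Gi"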
Step S124, screening the adapted nodes according to the resource requirements needed for the container image to run, to determine the runnable nodes.
In the embodiment of the application, the adapted nodes are screened according to the resource requirements set for the container group, so as to screen out the nodes that are both correct and suitable (i.e. the runnable nodes). On such nodes the resource requirements for running the container image can be satisfied, so the container image can be deployed there.
Step S134, scoring each runnable node and determining the scheduling node according to the scoring result.
In the embodiment of the application, the runnable nodes screened out of the adapted nodes are scored, and the runnable node with the highest score is selected as the scheduling node. If several runnable nodes share the highest score, one of them is selected at random as the scheduling node.
In the embodiment of the application, the runnable nodes can be scored using a multi-dimensional weighted scoring method, that is, each runnable node is weighted and scored according to its CPU computing power, memory capacity and hard-disk capacity. Specifically, the remaining hardware resources of a runnable node can be calculated, and the absolute values of the remaining CPU computing power, memory capacity and hard-disk capacity converted into corresponding scores as one or more dimensions of the weighted score; the percentages of the remaining CPU computing power, memory capacity and hard-disk capacity can likewise be calculated and converted into corresponding scores as one or more dimensions of the weighted score. The supply-demand fit between the hardware resources of a runnable node and the container image can also be calculated. For example, if a container image already deployed on a runnable node occupies a large amount of hard-disk space but little CPU computing power and memory, the remaining hardware resources on that node become unbalanced, and the node may end up with its hard-disk capacity fully occupied while large amounts of CPU computing power and memory remain idle. Deploying on such a node a container image that needs little hard-disk capacity but a large amount of CPU computing power and memory makes full use of the node's hardware resources; the supply-demand fit is then very high, and this fit can be used as a further dimension of the weighted score.
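Purely as an illustration of such a multi-dimensional weighted score (the weights and dimension functions below are assumptions for the sketch, not values defined by this application), the score of a runnable node n could take the form:

score(n) = w_cpu * S_cpu(n) + w_mem * S_mem(n) + w_disk * S_disk(n) + w_fit * S_fit(n)

where S_cpu, S_mem and S_disk map the remaining CPU computing power, memory capacity and hard-disk capacity of node n (as absolute values or percentages) to scores, S_fit measures the supply-demand fit described above, and the weights w_cpu, w_mem, w_disk and w_fit are chosen by the operator.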
Step S144, deploying the container image on the scheduling node.
Specifically, the identifier information of the scheduling node is written into the configuration file of the container group, and the container group is deployed on the scheduling node according to that identifier information.
In the embodiment of the application, to deploy the container image it is only necessary to write identifier information such as the name or ID of the scheduling node into the configuration file, and the Kubernetes cluster automatically completes the scheduling onto that node and the deployment of the container image according to the configuration file. When a container image undergoes a version iteration and its supported CPU architectures and operating systems change, the Kubernetes cluster can automatically screen out a correct and suitable node (i.e. the scheduling node) for deployment according to the CPU architecture and operating system supported by the container image, so that the container image is 100% able to work normally after deployment, which fundamentally solves the problem of container image deployment failure.
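A minimal sketch of this step, assuming the scheduling result is recorded through the container group's spec.nodeName field (one common way of binding a container group to a specific node; the node name is an example), is:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  # identifier of the scheduling node chosen in step S134; the kubelet on this
  # node then picks the container group up and runs the container image
  nodeName: worker-01
  containers:
    - name: myapp
      image: registry.example.com/library/myapp:v1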
In some optional embodiments, screening the adapted nodes according to the resource requirements needed for the container image to run, to determine the runnable nodes, further includes: when no runnable node exists in the Kubernetes cluster, adding the scheduling task of the container image to a task queue; and binding the scheduling task to the adapted node.
In the embodiment of the application, if screening the adapted nodes shows that no suitable node (i.e. no runnable node) exists in the current Kubernetes cluster on which to deploy the container image, the scheduling task of the container image is moved to the tail of the task queue. Kubernetes processes the scheduling tasks in the task queue in order from head to tail. Further, in order to determine the adapted node quickly the next time the deployment task of the container image is processed, before the deployment task is moved to the tail of the task queue, identifier information such as the name or ID of the corresponding adapted node in the current Kubernetes cluster is marked in the deployment task of the container image, so that it can be used directly next time.
In the embodiment of the application, the CPU architecture and operating system adapted to a container image are determined according to the configuration file of the container image; the nodes in the Kubernetes cluster are then screened according to their labels, and the nodes whose CPU architecture and operating system are adapted to the container image are taken as the adapted nodes; finally, a scheduling node is selected from the adapted nodes in the Kubernetes cluster, and the container image is deployed on that scheduling node. Intelligent scheduling of nodes is thereby achieved in a Kubernetes cluster with multiple operating systems and heterogeneous architectures: the container image is conveniently and efficiently deployed on the correct node in the Kubernetes cluster, its normal operation after deployment can be 100% guaranteed, and the possibility of container image deployment failure is effectively avoided.
In addition, when a container image undergoes a version iteration and its supported CPU architectures and operating systems change, the embodiment of the application can automatically screen out correct and suitable nodes according to the CPU architecture and operating system supported by the new version of the container image, ensuring that the application is deployed on the correct node.
The Kubernetes cluster-based node intelligent scheduling method provided by the embodiment of the application can ensure that, in a Kubernetes-based hybrid cluster with multiple operating systems and heterogeneous architectures, a container image can be conveniently and efficiently deployed on the correct node in the cluster. It actively responds to the coexistence of multiple architectures and multiple operating systems brought about by the localization trend in CPU architectures and operating systems, and prepares for the large-scale appearance of domestic CPU architectures and operating systems.
Among container images, the number supporting the Linux system is far larger than the number supporting the Windows system, so the number of Linux nodes in the market is far higher than the number of Windows nodes. However, a large number of applications that support the Windows system have not yet been containerized; once these applications are containerized in large batches, corresponding container images will be generated, a container image ecosystem for the Windows system will be built, the number of Windows nodes will increase greatly, and a large number of hybrid clusters with multiple operating systems and heterogeneous architectures will inevitably appear.
Fig. 6 is a schematic structural diagram of a Kubernetes cluster-based node intelligent scheduling system according to some embodiments of the present application; fig. 7 is a schematic diagram of the operation of a Kubernetes cluster-based node intelligent scheduling system according to some embodiments of the present application. As shown in figs. 6 and 7, the Kubernetes cluster-based node intelligent scheduling system includes: a determining module 601, an acquisition module 602, a screening module 603, and a selection and deployment module 604. The determining module 601 is configured to determine the CPU architecture and operating system adapted to a container image according to the configuration file of the container image; the acquisition module 602 is configured to acquire the labels of the nodes in the Kubernetes cluster, where a label is used to identify the CPU architecture and operating system of the corresponding node; the screening module 603 is configured to screen the nodes in the Kubernetes cluster according to the node labels to obtain the adapted nodes, where the CPU architecture and operating system of an adapted node are adapted to the container image; and the selection and deployment module 604 is configured to select a scheduling node from the adapted nodes and deploy the container image on the scheduling node.
The Kubernetes cluster comprises a plurality of nodes, which can be divided by function into control nodes and worker nodes. A control node runs the scheduler component, the API-Server component and the ETCD component; a worker node runs the Pods (container groups) and the kubelet component. The Kubernetes cluster automatically generates, for each node it manages, the label identifying that node's operating system and CPU architecture. The scheduler component assigns nodes in the cluster to Pods, the API-Server component distributes Kubernetes cluster resources, the ETCD component records the data of the Kubernetes cluster, and the kubelet component manages the Pods.
The embodiment of the application extends the functions of the scheduler component to form an intelligent scheduler. The determining module 601 in the intelligent scheduler parses the yaml configuration file of the container image to obtain the address of the image repository, accesses the image repository, obtains the metadata information of the container image from it, and thereby automatically determines the CPU architecture and operating system supported by the container image.
The screening module 603 in the intelligent scheduler screens out of the Kubernetes cluster the nodes whose CPU architecture and operating system are adapted to the container image (the adapted nodes), following the order given by the sorting results for the different CPU architectures and operating systems.
The selection and deployment module 604 in the intelligent scheduler screens the runnable nodes for the container image out of the adapted nodes according to the resource requirements needed for the container image to run, scores the runnable nodes, determines the scheduling node according to the scoring result, and binds the scheduling node to the container group (Pod). The kubelet running on the worker node watches the API-Server continuously; when it observes that the container group has been bound to its node, it calls on the node's resources to run the container, completing the deployment of the container image on the scheduling node.
The embodiment of the application implements these functions by extending the scheduler component into the intelligent scheduler; it does not greatly change the conventional node scheduling mechanism of the Kubernetes cluster, so the technical solution is easy to implement.
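Purely as an illustration of how such a scheduler extension could be wired up in current Kubernetes versions (this application does not prescribe the scheduler framework's plugin mechanism, and the scheduler and plugin names below are hypothetical), a scheduler configuration of the following shape registers an extra filter step and an extra score step:

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: intelligent-scheduler      # hypothetical scheduler name
    plugins:
      filter:
        enabled:
          - name: ImagePlatformFilter         # hypothetical plugin: keeps only the adapted nodes
      score:
        enabled:
          - name: ResourceFitScore            # hypothetical plugin: multi-dimensional weighted score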
FIG. 8 is a schematic block diagram of the determining module provided in accordance with some embodiments of the present application; as shown in fig. 8, the determining module 601 includes: a parsing submodule 611 configured to parse the configuration file of the container image to obtain the image repository address of the container image; an access submodule 621 configured to access the image repository of the container image, where the image repository stores the metadata information of the container image; and a determination submodule 631 configured to determine the CPU architecture and operating system adapted to the container image according to the metadata information of the container image.
In a specific example, the image repository of the container image provides services externally through an open interface specification; correspondingly, the access submodule 621 is further configured to request, based on the open interface specification, access to the metadata information storage file of the container image in the image repository through the hypertext transfer protocol.
FIG. 9 is a schematic structural diagram of the screening module provided in accordance with some embodiments of the present application; as shown in fig. 9, the screening module 603 includes: a sorting submodule 613 configured to sort the CPU architectures and operating systems respectively according to preset rules; and a first screening submodule 623 configured to screen the nodes in the Kubernetes cluster by the corresponding CPU architecture and operating system type, in the order given by the sorting result, to obtain the adapted nodes.
In some optional embodiments, the screening module 603 further comprises: a suspension submodule 633 configured to suspend deployment of the container image in the Kubernetes cluster when no adapted node exists in the cluster; and an update submodule 643 configured to incorporate into the Kubernetes cluster a new node whose CPU architecture and operating system are adapted to the container image, to serve as the adapted node.
FIG. 10 is a block diagram of the selection and deployment module provided in accordance with some embodiments of the present application; as shown in fig. 10, the selection and deployment module 604 includes: a resource setting submodule 614 configured to set the resource requirements needed for the container image to run; a second screening submodule 624 configured to screen the adapted nodes according to the resource requirements needed for the container image to run, to determine the runnable nodes; a scoring submodule 634 configured to score each runnable node and determine the scheduling node according to the scoring result; and a deployment submodule 644 configured to deploy the container image on the scheduling node.
In some optional embodiments, the resource setting submodule 614 is further configured to create the container group in which the container image runs, and to set resource requirements in the configuration file of the container group as the resource requirements needed for the container image to run.
In some optional embodiments, the deployment submodule 644 is further configured to write the identifier information of the scheduling node into the configuration file of the container group, and to deploy the container group on the scheduling node according to the identifier information of the scheduling node.
In some optional embodiments, the second screening submodule 624 is further configured to add the scheduling task of the container image to the task queue when no runnable node exists in the Kubernetes cluster, and to bind the scheduling task to the adapted node.
The Kubernetes cluster-based node intelligent scheduling system provided in the embodiment of the present application can carry out the steps and processes of any of the above embodiments of the Kubernetes cluster-based node intelligent scheduling method and achieve the same beneficial effects, which are not described again here.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A node intelligent scheduling method based on a Kubernetes cluster, characterized by comprising the following steps:
determining the CPU architecture and operating system adapted to a container image according to the configuration file of the container image;
acquiring labels of nodes in the Kubernetes cluster; the labels are used to identify the CPU architecture and operating system corresponding to the nodes;
screening the nodes in the Kubernetes cluster according to the labels to obtain adapted nodes; the CPU architecture and operating system corresponding to the adapted nodes are adapted to the container image; and
selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node.
2. The Kubernetes cluster-based node intelligent scheduling method according to claim 1, wherein determining the CPU architecture and operating system adapted to the container image according to the configuration file of the container image comprises:
parsing the configuration file of the container image to obtain an image repository address of the container image;
accessing the image repository of the container image; the image repository of the container image stores metadata information of the container image; and
determining the CPU architecture and operating system adapted to the container image according to the metadata information of the container image.
3. The Kubernetes cluster-based node intelligent scheduling method according to claim 2, wherein the image repository of the container image provides services externally through an open interface specification, and
correspondingly,
accessing the image repository of the container image comprises:
based on the open interface specification, requesting access to the metadata information storage file of the container image in the image repository of the container image through a hypertext transfer protocol.
4. The Kubernetes cluster-based node intelligent scheduling method according to any one of claims 1-3, wherein screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes comprises:
sorting the CPU architectures and the operating systems respectively according to preset rules; and
screening the nodes in the Kubernetes cluster by the corresponding CPU architecture and operating system type, in the order given by the sorting result, to obtain the adapted nodes.
5. The Kubernetes cluster-based node intelligent scheduling method according to claim 4, wherein screening the nodes in the Kubernetes cluster according to the labels to obtain the adapted nodes further comprises:
when no adapted node exists in the Kubernetes cluster, suspending deployment of the container image in the Kubernetes cluster; and
incorporating into the Kubernetes cluster a new node whose CPU architecture and operating system are adapted to the container image, to serve as the adapted node.
6. The Kubernetes cluster-based node intelligent scheduling method according to claim 1, wherein selecting a scheduling node from the adapted nodes and deploying the container image on the scheduling node comprises:
setting resource requirements needed for the container image to run;
screening the adapted nodes according to the resource requirements needed for the container image to run, to determine runnable nodes;
scoring each runnable node and determining the scheduling node according to the scoring result; and
deploying the container image on the scheduling node.
7. The Kubernetes cluster-based intelligent node scheduling method according to claim 6, wherein the setting of the resource requirement required for the container mirroring operation comprises:
creating a container group of the container mirror operation;
and setting resource requirements in the configuration file of the container group as the resource requirements required by the container mirror image operation.
8. The Kubernetes cluster-based intelligent node scheduling method according to claim 7, wherein said deploying said container mirror on said scheduling node comprises:
writing the mark information of the scheduling node in the configuration file of the container group;
and deploying the container group on the scheduling node according to the marking information of the scheduling node.
9. The Kubernetes cluster-based intelligent node scheduling method according to any one of claims 6-8, wherein the screening the adaptation nodes according to the resource requirements required for running the container mirror image to determine the runnable nodes further comprises:
when no runnable node exists in the Kubernetes cluster, adding a scheduling task of the container mirror image to a task queue;
and binding the scheduling task with the adaptation node.
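Claim 9's fallback can be sketched as a pending-task queue that records the adaptation nodes each scheduling task is bound to, so the task can be retried once resources free up; the data structures and the capacity check are illustrative assumptions.

```python
from collections import deque

# Each entry binds a scheduling task (the image to place) to its adaptation nodes.
pending_tasks: deque = deque()

def enqueue_task(image_ref: str, adaptation_nodes: list[str]) -> None:
    """No runnable node exists: queue the task, bound to its adaptation nodes."""
    pending_tasks.append({"image": image_ref, "bound_nodes": adaptation_nodes})

def retry_pending(node_has_capacity) -> None:
    """Re-attempt queued tasks; node_has_capacity(node) is a caller-supplied check."""
    for _ in range(len(pending_tasks)):
        task = pending_tasks.popleft()
        target = next((n for n in task["bound_nodes"] if node_has_capacity(n)), None)
        if target is None:
            pending_tasks.append(task)   # still no capacity, keep waiting
        # else: hand the task to the deployment step for node `target`
```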
10. A node intelligent scheduling system based on a Kubernetes cluster, characterized by comprising:
a determining module configured to determine the CPU architecture and the operating system adapted to a container mirror image according to the configuration file of the container mirror image;
an acquisition module configured to acquire labels of nodes in a Kubernetes cluster, wherein the labels are used for identifying the CPU architecture and the operating system corresponding to each node;
a screening module configured to screen the nodes in the Kubernetes cluster according to the node labels to obtain adaptation nodes, wherein the CPU architecture and the operating system corresponding to the adaptation nodes are adapted to the container mirror image;
and a selection and deployment module configured to select a scheduling node from the adaptation nodes and deploy the container mirror image on the scheduling node.
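The system of claim 10 mirrors the method as cooperating modules. Below is a schematic composition in Python, with each module supplied as a callable; all names are illustrative, not the patent's implementation.

```python
class NodeIntelligentScheduler:
    """Illustrative composition of the four modules of claim 10."""

    def __init__(self, determine, acquire_labels, screen, select_and_deploy):
        self.determine = determine                   # determining module
        self.acquire_labels = acquire_labels         # acquisition module
        self.screen = screen                         # screening module
        self.select_and_deploy = select_and_deploy   # selection and deployment module

    def schedule(self, image_ref: str) -> None:
        platforms = self.determine(image_ref)        # (arch, os) pairs the image adapts to
        labels = self.acquire_labels()               # node -> {kubernetes.io/arch, kubernetes.io/os}
        nodes = self.screen(platforms, labels)       # adaptation nodes
        self.select_and_deploy(image_ref, nodes)     # pick the scheduling node and deploy
```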
CN202110915567.1A 2021-08-10 2021-08-10 Intelligent node scheduling method and system based on Kubernetes cluster Active CN113645300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110915567.1A CN113645300B (en) 2021-08-10 2021-08-10 Intelligent node scheduling method and system based on Kubernetes cluster

Publications (2)

Publication Number Publication Date
CN113645300A (en) 2021-11-12
CN113645300B (en) 2023-11-28

Family

ID=78420590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110915567.1A Active CN113645300B (en) 2021-08-10 2021-08-10 Intelligent node scheduling method and system based on Kubernetes cluster

Country Status (1)

Country Link
CN (1) CN113645300B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105867955A (en) * 2015-09-18 2016-08-17 乐视云计算有限公司 Deployment system and deployment method of application program
CN106878385A (en) * 2016-12-30 2017-06-20 新华三技术有限公司 Private clound dispositions method and device
CN110275761A (en) * 2018-03-16 2019-09-24 华为技术有限公司 Dispatching method, device and host node
CN110221915A (en) * 2019-05-21 2019-09-10 新华三大数据技术有限公司 Node scheduling method and apparatus
CN111045786A (en) * 2019-11-28 2020-04-21 北京大学 Container creation system and method based on mirror image layering technology in cloud environment
CN111522639A (en) * 2020-04-16 2020-08-11 南京邮电大学 Multidimensional resource scheduling method under Kubernetes cluster architecture system
CN112650478A (en) * 2021-01-04 2021-04-13 中车青岛四方车辆研究所有限公司 Dynamic construction method, system and equipment for embedded software development platform
CN112835714A (en) * 2021-01-29 2021-05-25 中国人民解放军国防科技大学 Container arrangement method, system and medium for CPU heterogeneous cluster in cloud edge environment
CN112965819A (en) * 2021-03-04 2021-06-15 山东英信计算机技术有限公司 Method and device for mixed scheduling of container resources across processor architectures

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116546439A (en) * 2022-01-26 2023-08-04 汉朔科技股份有限公司 Method and system for quickly waking up electronic shelf label and sending group message
CN116546439B (en) * 2022-01-26 2024-01-30 汉朔科技股份有限公司 Method and system for quickly waking up electronic shelf label and sending group message
CN115454580A (en) * 2022-11-10 2022-12-09 统信软件技术有限公司 Node host resource management method and device and computing equipment
CN115562843A (en) * 2022-12-06 2023-01-03 苏州浪潮智能科技有限公司 Container cluster computational power scheduling method and related device
CN116016438A (en) * 2022-12-12 2023-04-25 上海道客网络科技有限公司 Method and system for uniformly distributing IP addresses by multiple subnets based on container cloud platform
CN116016438B (en) * 2022-12-12 2023-08-15 上海道客网络科技有限公司 Method and system for uniformly distributing IP addresses by multiple subnets based on container cloud platform
CN116155681A (en) * 2022-12-23 2023-05-23 博上(山东)网络科技有限公司 Terminal management and control method and system for Internet of things
CN116155681B (en) * 2022-12-23 2024-03-26 博上(山东)网络科技有限公司 Terminal management and control method and system for Internet of things
CN117349035A (en) * 2023-12-05 2024-01-05 中电云计算技术有限公司 Workload scheduling method, device, equipment and storage medium
CN117349035B (en) * 2023-12-05 2024-03-15 中电云计算技术有限公司 Workload scheduling method, device, equipment and storage medium
CN117369952A (en) * 2023-12-08 2024-01-09 中电云计算技术有限公司 Cluster processing method, device, equipment and storage medium
CN117369952B (en) * 2023-12-08 2024-03-15 中电云计算技术有限公司 Cluster processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113645300B (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN113645300B (en) Intelligent node scheduling method and system based on Kubernetes cluster
US11625274B1 (en) Hyper-convergence with scheduler extensions for software-defined container storage solutions
US7861246B2 (en) Job-centric scheduling in a grid environment
CN109885316B (en) Hdfs-hbase deployment method and device based on kubernetes
US9886260B2 (en) Managing software version upgrades in a multiple computer system environment
US9438665B1 (en) Scheduling and tracking control plane operations for distributed storage systems
US11334372B2 (en) Distributed job manager for stateful microservices
CN1645330A (en) Method and system for grid-enabled virtual machines with distributed management of applications
US7721289B2 (en) System and method for dynamic allocation of computers in response to requests
CN1906580A (en) Method and system for a grid-enabled virtual machine with movable objects
CN103049334A (en) Task processing method and virtual machine
CN101159596B (en) Method and apparatus for deploying servers
CN112835714B (en) Container arrangement method, system and medium for CPU heterogeneous clusters in cloud edge environment
CN111045786B (en) Container creation system and method based on mirror image layering technology in cloud environment
Ibryam et al. Kubernetes patterns
CN112433823A (en) Apparatus and method for dynamically virtualizing physical card
CN102790788A (en) Grid resource management system
US11630834B2 (en) Label-based data representation I/O process and system
CN113687935A (en) Cloud native storage scheduling mode based on super-fusion design
Hanif et al. Jargon of Hadoop MapReduce scheduling techniques: a scientific categorization
Chandra Effective memory utilization using custom scheduler in kubernetes
US20230305876A1 (en) Managing storage domains, service tiers, and failed servers
woon Ahn et al. Mirra: Rule-based resource management for heterogeneous real-time applications running in cloud computing infrastructures
Xiang et al. Gödel: Unified Large-Scale Resource Management and Scheduling at ByteDance
Yilmaz Scheduling Extensions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant