CN114625535A - Service deployment method and device for multiple Kubernetes clusters - Google Patents

Info

Publication number
CN114625535A
Authority
CN
China
Prior art keywords
target
cluster
kubernetes
service
service resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210227920.1A
Other languages
Chinese (zh)
Inventor
贾永鹏
揭震
马超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sina Technology China Co Ltd
Original Assignee
Sina Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sina Technology China Co Ltd filed Critical Sina Technology China Co Ltd
Priority to CN202210227920.1A priority Critical patent/CN114625535A/en
Publication of CN114625535A publication Critical patent/CN114625535A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network

Abstract

An embodiment of the present application provides a service deployment method and apparatus for multiple Kubernetes clusters. The method includes: selecting a master Kubernetes cluster from a plurality of Kubernetes clusters; determining, through the master Kubernetes cluster, a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster in which the target service resource needs to be deployed; and deploying, through the master Kubernetes cluster, the target service resource to the target namespace of the target Kubernetes cluster at one time.

Description

Service deployment method and device for multiple Kubernetes clusters
Technical Field
The invention relates to the technical field of cloud services, in particular to a service deployment method and device for multiple Kubernetes clusters.
Background
Kubernetes is a container orchestration engine that supports automated deployment, large-scale scaling, and management of containerized applications.
In some scenarios, in an environment with multiple namespaces across multiple Kubernetes clusters, the same service resources need to be deployed into multiple namespaces of multiple Kubernetes clusters. A cluster administrator then has to deploy the service resources into each namespace of each Kubernetes cluster one by one, which produces a large amount of repetitive work and makes service deployment inefficient.
Disclosure of Invention
An object of the embodiments of the present application is to provide a service deployment method and apparatus for multiple Kubernetes clusters, so as to solve the problem of low service deployment efficiency.
In order to solve the above technical problem, the embodiment of the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a service deployment method for multiple Kubernetes clusters, where the method includes: selecting a master Kubernetes cluster from a plurality of Kubernetes clusters; determining, through the master Kubernetes cluster, a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster in which the target service resource needs to be deployed; and deploying, through the master Kubernetes cluster, the target service resource to the target namespace of the target Kubernetes cluster at one time.
In a second aspect, an embodiment of the present application provides a service deployment apparatus for multiple Kubernetes clusters, where the apparatus includes: a selecting module, configured to select a master Kubernetes cluster from a plurality of Kubernetes clusters; a determining module, configured to determine, through the master Kubernetes cluster, a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster in which the target service resource needs to be deployed; and a deployment module, configured to deploy, through the master Kubernetes cluster, the target service resource to the target namespace of the target Kubernetes cluster at one time.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus; the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is used to store a computer program; and the processor is configured to execute the program stored in the memory, implementing the steps of the service deployment method for multiple Kubernetes clusters according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the service deployment method for multiple Kubernetes clusters according to the first aspect are implemented.
According to the technical solution provided by the embodiments of the present application, a master Kubernetes cluster is selected from a plurality of Kubernetes clusters; a target service resource pre-configured by a user, and the target namespace of the target Kubernetes cluster in which that resource needs to be deployed, are determined through the master Kubernetes cluster; and the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster. The user can pre-configure the target service resource and the target namespace of the target Kubernetes cluster in which it needs to be deployed, and the master Kubernetes cluster then deploys the target service resource into that target namespace at one time, so that cluster administrators do not need to deploy the service resources one by one. This avoids a large amount of repetitive work and improves the efficiency of service deployment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a first flowchart of a service deployment method for multiple Kubernetes clusters according to an embodiment of the present application;
fig. 2 is a second flowchart of a service deployment method for multiple Kubernetes clusters according to an embodiment of the present application;
fig. 3 is a third flowchart of a service deployment method for multiple Kubernetes clusters according to an embodiment of the present application;
fig. 4 is a schematic module diagram of a service deployment apparatus for multiple Kubernetes clusters according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a service deployment method and apparatus for multiple Kubernetes clusters, and an electronic device, solving the problem of low service deployment efficiency.
To help those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art without inventive effort, based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
As shown in fig. 1, the execution subject of the method may be a server, which may be an independent server or a server cluster composed of multiple servers. The service deployment method for multiple Kubernetes clusters specifically includes the following steps S101 to S105:
In step S101, a master Kubernetes cluster is selected from a plurality of Kubernetes clusters.
Specifically, any one of the plurality of Kubernetes clusters may be selected as the master Kubernetes cluster. The master Kubernetes cluster is used to deploy a service resource deployment module and to run a coordinator; the service resource deployment module deploys resources into different namespaces of a target Kubernetes cluster, and the coordinator coordinates the distribution of resources across multiple clusters and multiple namespaces, as well as the creation, modification, and deletion of clusters, and the like.
Further, deploying the service resource deployment module and running the coordinator on the master Kubernetes cluster require Role-Based Access Control (RBAC) on the master Kubernetes cluster. Specifically, three resources, namely a ServiceAccount, a ClusterRole, and a ClusterRoleBinding, are created on the master Kubernetes cluster to authenticate resource access, thereby ensuring the security of resources on the master Kubernetes cluster.
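The three RBAC objects described above can be sketched as plain manifest dictionaries. This is a hedged illustration only; the object names (such as `deployer-sa`) and the broad rule set are assumptions, not details taken from the patent.

```python
# Hedged sketch of the three RBAC manifests the text describes; all names
# (service account, role, binding) are illustrative assumptions.
def build_rbac_manifests(namespace="kube-system", name="deployer-sa"):
    service_account = {
        "apiVersion": "v1",
        "kind": "ServiceAccount",
        "metadata": {"name": name, "namespace": namespace},
    }
    cluster_role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRole",
        "metadata": {"name": f"{name}-role"},
        # Broad rule for illustration only; a real deployment would scope this down.
        "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}],
    }
    cluster_role_binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": f"{name}-binding"},
        "subjects": [{"kind": "ServiceAccount", "name": name, "namespace": namespace}],
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": f"{name}-role",
        },
    }
    return [service_account, cluster_role, cluster_role_binding]
```

The binding grants the service account the role's permissions, which is what lets the deployment module and coordinator access resources on the master cluster.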
Further, to facilitate server calls, a service resource deployment module is deployed on the running master Kubernetes cluster. After the service resource deployment module is deployed, client methods for resource distribution can be generated with the code-generator tool, and the client methods are added to the interface service (API Server) of the master Kubernetes cluster for server calls; the API Server serves as the unique entry point of the master Kubernetes cluster for internal and external services.
In step S103, a target service resource pre-configured by the user and a target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed are determined through the master Kubernetes cluster.
Specifically, a user may operate, view, and manage the master Kubernetes cluster through a World Wide Web (Web) front end. The Kubernetes management module may provide, through the API Server, various interfaces for operating the master Kubernetes cluster for the Web front end to call; the cluster management module provides the Web front end with interfaces for operating and viewing cluster management, and the user may manage clusters through the cluster management module, for example, adding a cluster, generating a cluster ID, and the like. Through the Web front end, the user may pre-configure the target service resources to be distributed (SyncTargetRef) and the destinations of the distributed resources (ClusterSpec). The configuration of a target service resource includes, but is not limited to, the resource type (Kind), the resource version (ApiVersion), and the resource name (Name); at least one resource may be configured. The configuration of a destination (a target namespace of a target Kubernetes cluster) includes a cluster name (ClusterName), a cluster tag, a namespace (Namespace) in the cluster, and the like. The cluster tag identifies the Kubernetes cluster; each Kubernetes cluster may have a unique corresponding tag, which may be a sequence number, an ID number (a unique ID may be generated for the cluster by the cluster management module), a custom literal tag, and so on. A namespace in a cluster may be configured by ID, name, and the like. There may be multiple target Kubernetes clusters, and there may also be multiple target namespaces.
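The pre-configured distribution described above can be pictured as a small configuration record. The sketch below follows the terms used in the text (SyncTargetRef, ClusterSpec, Kind, ApiVersion, Name, ClusterName); the concrete schema and the sample values are illustrative assumptions, not the patent's actual data model.

```python
# Hedged sketch of the pre-configured distribution spec described above.
# Field names mirror the terms in the text; the schema itself is assumed.
def make_distribution_config():
    sync_target_refs = [  # the target service resources to distribute
        {"Kind": "ConfigMap", "ApiVersion": "v1", "Name": "app-config"},
        {"Kind": "Secret", "ApiVersion": "v1", "Name": "app-secret"},
    ]
    cluster_specs = [  # the destinations of the distributed resources
        {"ClusterName": "cluster-b", "NamespaceSelector": {"app": "book"}},
        {"ClusterName": "cluster-c", "NamespaceSelector": {"app": "foo"}},
    ]
    return {"SyncTargetRef": sync_target_refs, "ClusterSpec": cluster_specs}
```

A record like this is what the Web front end would send to the master cluster and what the database would store for backup.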
After the user configures the target service resource and the target namespace of the target Kubernetes cluster, the configuration is sent to the master Kubernetes cluster and stored in a database for backup.
After the target service resource pre-configured by the user is determined, the Kubernetes cluster and namespace into which the user has configured that resource to be deployed are determined. For example, suppose the target service resources that the user configures for distribution are a "ConfigMap" and a "Secret", and the two resources are determined to reside under the "default" namespace of the cluster named "cluster-a". If the target clusters configured by the user for deployment are the "cluster-b" and "cluster-c" clusters, then "cluster-b" and "cluster-c" are the target Kubernetes clusters, and the target namespaces configured by the user for the resources are the namespaces with the labels "app: book" and "app: post" in "cluster-b" and the namespace with the label "app: foo" in "cluster-c".
Furthermore, a namespace selector (Selector) can be used to configure which namespaces under a target cluster need to be synchronized, locating the target namespaces by namespace label. For example, if the namespace selector is configured as "app: foo", then the namespaces in the corresponding destination cluster that carry a label with key "app" and value "foo" are found and used as the target namespaces.
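The label lookup described above can be sketched as a small matching function: a namespace is selected when its labels contain every key/value pair of the selector. The namespace names below are made up for illustration.

```python
# Minimal sketch of the namespace-selector lookup described above: given the
# labels of each namespace in a destination cluster, return the namespaces
# whose labels match every key/value pair of the selector.
def select_namespaces(namespace_labels, selector):
    """namespace_labels: dict of namespace name -> label dict."""
    return sorted(
        name
        for name, labels in namespace_labels.items()
        if all(labels.get(k) == v for k, v in selector.items())
    )

# With selector {"app": "foo"}, only namespaces labelled app=foo match;
# extra labels on a namespace (like "tier" below) do not prevent a match.
cluster_namespaces = {
    "ns-a": {"app": "foo"},
    "ns-b": {"app": "bar"},
    "ns-c": {"app": "foo", "tier": "web"},
}
```

Matching on a subset of labels, rather than exact equality, is what lets one selector address several namespaces at once.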
It should be noted that after a target service resource on a Kubernetes cluster changes, the cluster management module generates a message. Messages can be stored and transferred as events through message queue middleware; the coordinator consumes the messages in the message queue middleware and performs different operations for different messages. For example, after a Kubernetes cluster is newly added, the coordinator needs to be restarted to reload the configuration information: the cluster management module generates an event for restarting the coordinator, and the coordinator restarts its service after consuming the message, thereby updating the cluster configuration.
In step S105, the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster.
Specifically, a service resource deployment module is deployed in the master Kubernetes cluster, and the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through this module. When the target service resources comprise multiple resources and there are multiple target Kubernetes clusters and multiple target namespaces, the multiple resources are deployed simultaneously, at one time, to the multiple target namespaces of the multiple target Kubernetes clusters through the service resource deployment module in the master Kubernetes cluster. That is, the target service resources deployed to multiple target Kubernetes clusters and multiple target namespaces may be the same.
For example, suppose a user configures the target service resources to be distributed as a "ConfigMap" and a "Secret", the two resources residing under the "default" namespace of the cluster named "cluster-a". If the target clusters configured for deployment are the "cluster-b" and "cluster-c" clusters, then "cluster-b" and "cluster-c" are the target Kubernetes clusters; the target namespaces configured by the user are the namespaces with the labels "app: book" and "app: test" in "cluster-b" and the namespace with the label "app: foo" in "cluster-c". The service resource deployment module in the master Kubernetes cluster then deploys the two resources, the "ConfigMap" and the "Secret", simultaneously into the namespaces labeled "app: book" and "app: test" in "cluster-b" and the namespace labeled "app: foo" in "cluster-c".
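The one-shot fan-out in the example above amounts to pairing every configured resource with every selected target namespace so that all copies can be applied in a single pass. The sketch below shows that expansion; the namespace names (`ns-book`, `ns-test`, `ns-foo`) are illustrative stand-ins for the labeled namespaces in the example.

```python
# Sketch of the one-shot fan-out described above: the deployment plan is the
# cross product of target resources and selected target namespaces per cluster.
from itertools import product

def fan_out(resources, targets):
    """targets: dict of cluster name -> list of target namespaces."""
    return [
        (cluster, namespace, resource)
        for cluster, namespaces in sorted(targets.items())
        for namespace, resource in product(namespaces, resources)
    ]

resources = ["ConfigMap", "Secret"]
targets = {
    "cluster-b": ["ns-book", "ns-test"],  # namespaces labelled app=book / app=test
    "cluster-c": ["ns-foo"],              # namespace labelled app=foo
}
```

For the example's two resources and three target namespaces, the plan contains six deployments, all executed in one pass instead of six manual operations.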
Further, after the target service resource has been deployed at one time to the target namespace of the target Kubernetes cluster, the distribution state of the resources corresponding to the target namespaces under each target Kubernetes cluster can be displayed by cluster dimension. The distribution state includes, but is not limited to: the resource type (Type), the distribution status of the resource (Status), the last update time of the resource (LastUpdateTime), the last time the distribution state transitioned to another state (LastTransitionTime), the reason for the distribution (Reason), and the content of the resource (Message).
According to the technical solution provided by the embodiment of the present application, a user can pre-configure the target service resource and the target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed, and the master Kubernetes cluster then deploys the target service resource into the target namespace of the target Kubernetes cluster at one time. Cluster administrators no longer need to deploy the service resources one by one, which avoids a large amount of repetitive work and improves the efficiency of service deployment.
In a possible implementation, after the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster, the service deployment method further includes: writing the target service resource into the database, and, when the target service resource in the master Kubernetes cluster is unavailable, reading the target service resource from the database and restoring it to the master Kubernetes cluster. Specifically, the database stores information such as the configuration of the target service resource (SyncTargetRef) and the configuration of the destinations of the distributed resources (ClusterSpec). The target service resource is written into the database as a data backup; once the target service resource in the master Kubernetes cluster is locked, tampered with, or inaccessible due to missing permissions, it can be read from the database and restored to the master Kubernetes cluster. This improves the reliability of the master Kubernetes cluster as the subject of resource deployment and ensures the reliability of the target service resource. In addition, when the target service resource in a target namespace changes, the master Kubernetes cluster can be guaranteed to redeploy the target service resource in that namespace in time, keeping the target service resource in the target namespace consistent with the target service resource pre-configured by the user.
In one possible implementation, after the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster, the method further includes: upon receiving a delete instruction, deleting the target service resources deployed by the service resource deployment module while deleting the service resource deployment module itself. That is, the user can also configure a "cascade deletion" option through the Web front end. If the user selects the "cascade deletion" option, then when the service resource deployment module is deleted from the master Kubernetes cluster, all resources it deployed on the target Kubernetes clusters are cleaned up at the same time. This avoids excessive cluster space occupancy caused by too many resources deployed on the target Kubernetes clusters, and improves the stability and space utilization of the clusters. If the user does not select the "cascade deletion" option, the resources deployed by the service resource deployment module on the target Kubernetes clusters are retained when the module is deleted from the master Kubernetes cluster.
As shown in fig. 2, the execution subject of the method may be a server, which may be an independent server or a server cluster composed of multiple servers. The service deployment method for multiple Kubernetes clusters may specifically include the following steps S201 to S207:
In step S201, a master Kubernetes cluster is selected from a plurality of Kubernetes clusters.
In step S203, a target service resource pre-configured by the user and a target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed are determined through the master Kubernetes cluster.
In step S205, the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster.
In step S207, the target namespace is monitored, and when the target service resource in the target namespace changes, the user's pre-configured target service resource is redeployed in the target namespace through the master Kubernetes cluster.
Specifically, after a target service resource is deployed on a target Kubernetes cluster, the deployment may be modified manually, which causes the resource deployed in the target namespace on the target Kubernetes cluster to become inconsistent with the target service resource configured by the user. In the case of such an inconsistency, the coordinator may call the API Server to redeploy the user's pre-configured target service resource in the target namespace, so that the target service resource in the target namespace always remains consistent with the target service resource configured by the user. Therefore, when the target service resource in a target namespace changes, only that resource needs to be modified, that is, redeployed, to achieve uniform and consistent configuration. Further, upon redeployment, the same target service resource in different target namespaces on different target Kubernetes clusters can be redeployed at one time and simultaneously, ensuring consistency and efficient modification of the resource deployment.
Further, redeploying the user's pre-configured target service resource in the target namespace through the master Kubernetes cluster includes: adding a first event, indicating that the target service resource in the target namespace has changed, to a rate-limiting queue; and, after the first event in the rate-limiting queue is consumed by a target coordinator deployed on the master Kubernetes cluster, redeploying the user's pre-configured target service resource in the target namespace through the master Kubernetes cluster, where the consumption frequency of the rate-limiting queue is below a threshold. The target coordinator is the active coordinator that has established a heartbeat with a ConfigMap. The ConfigMap is created through the ConfigMap mechanism of the master Kubernetes cluster and is used to configure a distributed resource lock, which allows only one coordinator to establish a heartbeat with the ConfigMap.
Specifically, when a target service resource changes, an event is generated that can be monitored by an event collector. The event collector mainly uses the Kubernetes Informer mechanism to monitor the Kubernetes clusters (such as the master Kubernetes cluster and the target Kubernetes clusters) and the target service resources. After monitoring an event, the event collector adds it to the rate-limiting queue and starts the target coordinator. After starting, the target coordinator reads the authentication configuration file of the master Kubernetes cluster stored in the database to obtain the permissions to operate the master Kubernetes cluster and the target Kubernetes clusters. With those permissions, the target coordinator performs unified consumption: the user's pre-configured target service resource is redeployed in the target namespace through the master Kubernetes cluster, and the changed target service resource is modified back into the target service resource that the user configured for deployment. For example, suppose the configuration of a ConfigMap resource deployed in namespace A of Kubernetes cluster B is modified manually. An event is generated when the ConfigMap is modified; after monitoring the event, the event collector adds the ConfigMap-modification event to the rate-limiting queue, and the target coordinator performs unified consumption. The target coordinator compares the ConfigMap with the ConfigMap resource configured by the user and, upon finding an inconsistency, calls the API Server to coordinate the modification of the inconsistent ConfigMap, that is, to redeploy it, so that the ConfigMap in the target namespace is consistent with the ConfigMap resource configured by the user.
The rate-limiting queue limits the consumption frequency of the target coordinator to a certain extent; its consumption frequency is kept below a threshold, which can be determined according to actual conditions. This prevents the target coordinator from consuming resources too frequently, protects the coordinator, and gives it high security and reliability, thereby improving the stability and reliability of the master Kubernetes cluster.
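The queue behavior described above can be sketched as a buffer that refuses to hand out the next event until a minimum interval has passed since the last consumption. Time is passed in explicitly to keep the sketch deterministic; a real implementation would use a clock, and the interval value below is an illustrative assumption.

```python
# Minimal sketch of the rate-limiting queue described above: events are
# buffered, and consumption is refused whenever it would exceed the
# configured frequency threshold.
from collections import deque

class RateLimitedQueue:
    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.events = deque()
        self.last_consumed_at = None

    def add(self, event):
        self.events.append(event)

    def consume(self, now):
        """Return the next event, or None if consuming now would be too frequent."""
        if not self.events:
            return None
        if self.last_consumed_at is not None and now - self.last_consumed_at < self.min_interval:
            return None  # too soon: stay below the consumption-frequency threshold
        self.last_consumed_at = now
        return self.events.popleft()
```

Rejected consumption attempts leave the event in the buffer, so nothing is lost; the coordinator simply retries later.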
Further, the distributed resource lock ensures that only one coordinator can operate normally on the master Kubernetes cluster, avoiding unnecessary repeated coordination and modification and saving resources and space on the master Kubernetes cluster.
In addition, if the target coordinator repeatedly fails to establish a heartbeat with the ConfigMap, then, to ensure that service deployment can continue and to improve its reliability, other coordinators may compete for the ConfigMap resource: a coordinator that successfully establishes a heartbeat with the ConfigMap is selected as the new coordinator, and the new coordinator consumes the first event in the rate-limiting queue.
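The heartbeat-based takeover described above can be sketched as a shared lock record that only one coordinator holds at a time, with a rival allowed to take over once the holder's heartbeat has expired. The lease duration and field names below are assumptions for illustration, not details from the patent.

```python
# Hedged sketch of the ConfigMap-style distributed lock described above: a
# coordinator acquires or renews the lock by recording a heartbeat, and a
# rival may take over only after the current holder's heartbeat expires.
def try_acquire(lock, coordinator_id, now, lease_seconds=10):
    holder = lock.get("holder")
    expired = holder is None or now - lock.get("heartbeat_at", 0) > lease_seconds
    if holder == coordinator_id or expired:
        lock["holder"] = coordinator_id   # become (or remain) the target coordinator
        lock["heartbeat_at"] = now
        return True
    return False                          # someone else holds a live heartbeat
```

Because a takeover requires the previous heartbeat to lapse, at most one coordinator is ever active, which is what prevents duplicate coordination work.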
It should be noted that steps S201 to S205 have the same or similar implementations as steps S101 to S105 and may be referred to mutually; details are not repeated here.
According to the technical solution provided by the embodiment of the present application, a user can pre-configure the target service resource and the target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed, and the master Kubernetes cluster then deploys the target service resource into the target namespace of the target Kubernetes cluster at one time. Cluster administrators no longer need to deploy the service resources one by one, which avoids a large amount of repetitive work and improves the efficiency of service deployment.
As shown in fig. 3, the execution subject of the method may be a server, which may be an independent server or a server cluster composed of multiple servers. The service deployment method for multiple Kubernetes clusters specifically includes the following steps S301 to S307:
In step S301, a master Kubernetes cluster is selected from a plurality of Kubernetes clusters.
In step S303, a target service resource pre-configured by the user and a target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed are determined through the master Kubernetes cluster.
In step S305, the target service resource is deployed at one time to the target namespace of the target Kubernetes cluster through the master Kubernetes cluster.
In step S307, the master Kubernetes cluster is monitored. When the master Kubernetes cluster executes a new event, the resources in the target namespace are checked; if the resources in the target namespace do not match the target service resources, the target service resources are redeployed in the target namespace through the master Kubernetes cluster, or the resources in the target namespace are modified through the master Kubernetes cluster. A new event includes the modification, deletion, or addition of any resource in the target Kubernetes cluster.
Specifically, the event collector monitors the service resource deployment module deployed on the master Kubernetes cluster through the Kubernetes Informer mechanism. When the service resource deployment module adds, deletes, or modifies any resource in a target Kubernetes cluster, the event collector places the event in the rate-limiting queue, and the target coordinator processes the events uniformly. After receiving an event, the target coordinator compares the resources deployed in the target namespace of the target Kubernetes cluster with the target service resources configured by the user. If the target service resource has not been successfully deployed in the target namespace, the API Server is called so that the service resource deployment module of the master Kubernetes cluster deploys the target service resource into the corresponding Kubernetes cluster; if the target service resource deployed in the target namespace is inconsistent with the target service resource configured by the user, the API Server is called so that the service resource deployment module of the master Kubernetes cluster modifies the deployed target service resource in the corresponding Kubernetes cluster, achieving consistency of the final deployment.
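The check described above boils down to a three-way decision: deploy when nothing was deployed, modify when the deployed copy has drifted, and do nothing when the two match. A minimal sketch of that decision, with made-up action names:

```python
# Sketch of the consistency check described above: compare what is actually
# deployed in a target namespace with the user-configured resource and decide
# which of the three actions the coordinator should trigger.
def reconcile_action(deployed, desired):
    """deployed: the resource found in the target namespace, or None."""
    if deployed is None:
        return "deploy"   # never successfully deployed: deploy it now
    if deployed != desired:
        return "modify"   # drifted from the user's configuration: modify it
    return "noop"         # already consistent: nothing to do
```

Running this decision for every event keeps the final deployment convergent: whatever the event was, the target namespace ends up matching the user's configuration.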
It should be noted that steps S301 to S305 have the same or similar implementations as steps S101 to S105 and may be referred to mutually; details are not repeated here.
According to the technical solution provided by the embodiment of the present application, a user can pre-configure the target service resource and the target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed, and the master Kubernetes cluster then deploys the target service resource into the target namespace of the target Kubernetes cluster at one time. Cluster administrators no longer need to deploy the service resources one by one, which avoids a large amount of repetitive work and improves the efficiency of service deployment.
Corresponding to the service deployment method for multiple Kubernetes clusters provided in the foregoing embodiments, and based on the same technical concept, an embodiment of the present application further provides a service deployment apparatus for multiple Kubernetes clusters. Fig. 4 is a schematic module diagram of the service deployment apparatus for multiple Kubernetes clusters provided in the embodiment of the present application. The apparatus is used to execute the service deployment method for multiple Kubernetes clusters described with reference to Figs. 1 to 3. As shown in Fig. 4, the service deployment apparatus 400 for multiple Kubernetes clusters includes: a selecting module 401, configured to select a main Kubernetes cluster from multiple Kubernetes clusters; a determining module 402, configured to determine, through the main Kubernetes cluster, a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster in which the target service resource needs to be deployed; and a deployment module 403, configured to deploy the target service resource to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster.
According to the technical solution provided by the embodiments of the present application, a user can pre-configure the target service resource and the target namespace of the target Kubernetes cluster in which the target service resource needs to be deployed, and the main Kubernetes cluster then deploys the target service resource to the target namespace of the target Kubernetes cluster at one time. Cluster administrators therefore do not need to deploy the service resource cluster by cluster, which avoids a large amount of repetitive work and improves the efficiency of service deployment.
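The one-time fan-out performed by the deployment module can be sketched as a single pass over all configured targets. This is a hedged illustration only: `apply_fn` stands in for whatever per-cluster API Server call the deployment module would make, and the target tuples below are hypothetical.

```python
def deploy_once(resource, targets, apply_fn):
    """Fan a user-configured resource out to every (cluster, namespace)
    target in one pass -- the 'at one time' deployment performed through
    the main cluster, so operators need not deploy cluster by cluster.

    apply_fn(cluster, namespace, resource) stands in for the call that
    actually creates the resource in one target cluster.
    """
    results = {}
    for cluster, namespace in targets:
        results[(cluster, namespace)] = apply_fn(cluster, namespace, resource)
    return results
```

A usage example: deploying one Deployment spec to the `prod` namespace of two clusters produces one apply call per target, driven entirely by the user's pre-configured target list.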
Optionally, the deployment module 403 is further configured to monitor the target namespace and, in a case that the target service resource in the target namespace changes, redeploy, through the main Kubernetes cluster, the target service resource pre-configured by the user in the target namespace.
Optionally, the deployment module 403 is further configured to add a first event, indicating that the target service resource in the target namespace has changed, to the speed-limit queue and, after the first event in the speed-limit queue is consumed by the target coordinator deployed on the main Kubernetes cluster, redeploy, through the main Kubernetes cluster, the target service resource pre-configured by the user in the target namespace, where the consumption frequency of the speed-limit queue is lower than a threshold. The target coordinator is the activated coordinator that has established a heartbeat with a ConfigMap; the ConfigMap is created through the ConfigMap mechanism of the main Kubernetes cluster and is used to configure a distributed resource lock, and the distributed resource lock allows only one coordinator to establish a heartbeat with the ConfigMap.
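The ConfigMap-based distributed resource lock can be illustrated with a toy lease: whichever coordinator last renewed its heartbeat within the lease window is the single active coordinator, and all others stand by until the lease expires. This is a minimal sketch of the idea only — not the actual Kubernetes leader-election implementation — and the class and field names are assumptions.

```python
import time


class ConfigMapLock:
    """Toy model of a ConfigMap-backed distributed resource lock: only
    one coordinator at a time may hold (and heartbeat) the lease."""

    def __init__(self, lease_seconds):
        self.lease_seconds = lease_seconds
        self.holder = None       # identity of the active coordinator
        self.renew_time = 0.0    # when the lease was last renewed

    def try_acquire(self, candidate, now=None):
        # A candidate wins the lock if it is free, already held by the
        # candidate, or the previous holder's lease has expired.
        now = time.monotonic() if now is None else now
        expired = now - self.renew_time > self.lease_seconds
        if self.holder is None or self.holder == candidate or expired:
            self.holder = candidate
            self.renew_time = now
            return True   # candidate becomes / remains the target coordinator
        return False      # another coordinator holds a live lease

    def heartbeat(self, candidate, now=None):
        # Only the current holder may renew the lease; everyone else
        # keeps standing by.
        if self.holder != candidate:
            return False
        return self.try_acquire(candidate, now)
```

Under this model, a standby coordinator that keeps calling `try_acquire` takes over automatically once the active coordinator stops heartbeating and its lease lapses — which is exactly the failover behavior the single-active-coordinator design is after.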
Optionally, the deployment module 403 is further configured to monitor the main Kubernetes cluster, check the resources in the target namespace when the main Kubernetes cluster executes a new event, and, if the resources in the target namespace do not match the target service resource, redeploy the target service resource in the target namespace through the main Kubernetes cluster or modify the resources in the target namespace through the main Kubernetes cluster, where the new event includes modification, deletion, or addition of any resource in the target Kubernetes cluster.
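The check performed on each new event — create the target service resource if it is missing from the target namespace, modify it if it no longer matches the user's configuration — amounts to a small reconcile step. The sketch below is illustrative only: resources are modeled as plain dictionaries and `apply_fn` stands in for the API Server call the deployment module would make.

```python
def reconcile(desired, deployed, apply_fn):
    """Bring the target namespace (`deployed`) in line with the user's
    configuration (`desired`): apply anything missing or mismatched and
    report which resources were changed."""
    changed = []
    for name, spec in desired.items():
        if deployed.get(name) != spec:
            apply_fn(name, spec)   # create if absent, modify if inconsistent
            deployed[name] = spec
            changed.append(name)
    return changed
```

Because the comparison is against the desired state rather than against the event itself, running the same check repeatedly is harmless — a second pass over an already-consistent namespace makes no API calls, which is what gives the eventual consistency described above.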
Optionally, the deployment module 403 is further configured to deploy the target service resource to the target namespace of the target Kubernetes cluster at one time through the service resource deployment module in the main Kubernetes cluster.
Optionally, the deployment module 403 is further configured to, in a case that a deletion instruction is received, delete the target service resource deployed by the service resource deployment module while deleting the service resource deployment module.
Optionally, the deployment module 403 is further configured to write the target service resource into a database and, in a case that the target service resource in the main Kubernetes cluster is unavailable, read the target service resource from the database and restore it to the main Kubernetes cluster.
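This database-backed recovery path can be sketched with an upsert-and-read pair. The patent does not name a database engine, so SQLite and the table layout below are assumptions made purely for illustration.

```python
import json
import sqlite3


def open_store():
    """Create an in-memory store for target service resource specs
    (a real deployment would use a persistent database)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE resources(name TEXT PRIMARY KEY, spec TEXT)")
    return db


def save_resource(db, name, spec):
    # Write-through backup: every (re)configuration of the target
    # service resource overwrites the stored copy.
    db.execute(
        "INSERT INTO resources(name, spec) VALUES(?, ?) "
        "ON CONFLICT(name) DO UPDATE SET spec = excluded.spec",
        (name, json.dumps(spec)),
    )


def restore_resource(db, name):
    # Read the spec back so it can be re-applied to the main cluster
    # when the in-cluster copy is unavailable.
    row = db.execute(
        "SELECT spec FROM resources WHERE name = ?", (name,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```

The upsert keeps exactly one row per resource, so restoring always yields the most recently configured spec.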
The service deployment apparatus for multiple Kubernetes clusters provided in the embodiment of the present application can implement each process in the embodiments corresponding to the service deployment method for multiple Kubernetes clusters; to avoid repetition, details are not described here again.
It should be noted that the service deployment apparatus for multiple Kubernetes clusters and the service deployment method for multiple Kubernetes clusters provided in the embodiments of the present application are based on the same inventive concept; therefore, for specific implementation of this embodiment, reference may be made to the implementation of the service deployment method for multiple Kubernetes clusters, and repeated details are not described here.
Corresponding to the service deployment method for multiple Kubernetes clusters provided in the foregoing embodiments, and based on the same technical concept, an embodiment of the present application further provides an electronic device configured to execute the service deployment method for multiple Kubernetes clusters. Fig. 5 is a schematic structural diagram of an electronic device implementing the embodiments of the present invention. As shown in Fig. 5, electronic devices may vary widely in configuration or performance and may include one or more processors 501 and a memory 502, where the memory 502 may store one or more applications or data. The memory 502 may be transient or persistent storage. An application stored in the memory 502 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the electronic device. Further, the processor 501 may be arranged to communicate with the memory 502 and execute, on the electronic device, the series of computer-executable instructions in the memory 502. The electronic device may also include one or more power supplies 503, one or more wired or wireless network interfaces 504, one or more input/output interfaces 505, and one or more keyboards 506.
Specifically, in this embodiment, the electronic device includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory to implement the steps in the above method embodiments, with the beneficial effects of those embodiments; to avoid repetition, details are not described here again.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented, with the beneficial effects of those embodiments; to avoid repetition, details are not described here again.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, an electronic device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present application shall be included in the scope of the claims of the present application.

Claims (10)

1. A service deployment method of multiple Kubernetes clusters is characterized in that the service deployment method comprises the following steps:
selecting a main Kubernetes cluster from a plurality of Kubernetes clusters;
determining a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster needing to deploy the target service resource through the main Kubernetes cluster;
and deploying the target service resource to a target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster.
2. The service deployment method according to claim 1, wherein after the target service resource is deployed to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster, the service deployment method further comprises:
monitoring the target namespace, and in a case that the target service resource in the target namespace changes, redeploying, through the main Kubernetes cluster, the target service resource pre-configured by the user in the target namespace.
3. The service deployment method according to claim 2, wherein said redeploying, through the main Kubernetes cluster, the target service resource pre-configured by the user in the target namespace comprises:
adding a first event, indicating that the target service resource in the target namespace has changed, to a speed-limit queue, and after the first event in the speed-limit queue is consumed by a target coordinator deployed on the main Kubernetes cluster, redeploying, through the main Kubernetes cluster, the target service resource pre-configured by the user in the target namespace, wherein the consumption frequency of the speed-limit queue is lower than a threshold.
4. The service deployment method according to claim 3, wherein the target coordinator is an activated coordinator that establishes a heartbeat with a ConfigMap, the ConfigMap is created through the ConfigMap mechanism of the main Kubernetes cluster, a distributed resource lock is configured using the ConfigMap, and the distributed resource lock allows only one coordinator to establish a heartbeat with the ConfigMap.
5. The service deployment method according to claim 1, wherein after the target service resource is deployed to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster, the service deployment method further comprises:
monitoring the main Kubernetes cluster, checking resources in the target namespace in a case that the main Kubernetes cluster executes a new event, and, if the resources in the target namespace do not match the target service resource, redeploying the target service resource in the target namespace through the main Kubernetes cluster, or modifying the resources in the target namespace through the main Kubernetes cluster, wherein the new event comprises modification, deletion, or addition of any resource in the target Kubernetes cluster.
6. The service deployment method according to claim 1, wherein said deploying the target service resource to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster comprises:
deploying the target service resource to the target namespace of the target Kubernetes cluster at one time through a service resource deployment module in the main Kubernetes cluster;
and after said deploying the target service resource to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster, the service deployment method further comprises:
in a case that a deletion instruction is received, deleting the target service resource deployed by the service resource deployment module while deleting the service resource deployment module.
7. The service deployment method according to claim 1, wherein after the target service resource is deployed to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster, the service deployment method further comprises:
writing the target service resource into a database, and in a case that the target service resource in the main Kubernetes cluster is unavailable, reading the target service resource from the database and restoring it to the main Kubernetes cluster.
8. A service deployment apparatus for multiple Kubernetes clusters, the apparatus comprising:
a selecting module, configured to select a main Kubernetes cluster from multiple Kubernetes clusters;
a determining module, configured to determine, through the main Kubernetes cluster, a target service resource pre-configured by a user and a target namespace of a target Kubernetes cluster in which the target service resource needs to be deployed; and
a deployment module, configured to deploy the target service resource to the target namespace of the target Kubernetes cluster at one time through the main Kubernetes cluster.
9. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory to implement the steps of the service deployment method for multiple Kubernetes clusters according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the service deployment method for multiple Kubernetes clusters according to any one of claims 1 to 7.
CN202210227920.1A 2022-03-08 2022-03-08 Service deployment method and device for multiple Kubernetes clusters Pending CN114625535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210227920.1A CN114625535A (en) 2022-03-08 2022-03-08 Service deployment method and device for multiple Kubernetes clusters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210227920.1A CN114625535A (en) 2022-03-08 2022-03-08 Service deployment method and device for multiple Kubernetes clusters

Publications (1)

Publication Number Publication Date
CN114625535A true CN114625535A (en) 2022-06-14

Family

ID=81900939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210227920.1A Pending CN114625535A (en) 2022-03-08 2022-03-08 Service deployment method and device for multiple Kubernetes clusters

Country Status (1)

Country Link
CN (1) CN114625535A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115981847A (en) * 2022-12-14 2023-04-18 北京百度网讯科技有限公司 Service grid deployment method and device, electronic equipment and storage medium
CN117135050A (en) * 2023-10-26 2023-11-28 建信金融科技有限责任公司 Application deployment method and device
CN117135050B (en) * 2023-10-26 2024-02-09 建信金融科技有限责任公司 Application deployment method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230313

Address after: Room 501-502, 5/F, Sina Headquarters Scientific Research Building, Block N-1 and N-2, Zhongguancun Software Park, Dongbei Wangxi Road, Haidian District, Beijing, 100193

Applicant after: Sina Technology (China) Co.,Ltd.

Address before: 100080 7th floor, Sina headquarters scientific research building, plot n-1 and n-2, Zhongguancun Software Park Phase II (West Expansion), Dongbeiwang West Road, Haidian District, Beijing

Applicant before: Sina.com Technology (China) Co.,Ltd.