WO2019184164A1 - Method for automatically deploying kubernetes worker node, device, terminal apparatus, and readable storage medium - Google Patents

Method for automatically deploying kubernetes worker node, device, terminal apparatus, and readable storage medium

Info

Publication number
WO2019184164A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
deployed
kubernetes
identifier
slave
Prior art date
Application number
PCT/CN2018/097564
Other languages
French (fr)
Chinese (zh)
Inventor
刘俊杰
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2019184164A1 publication Critical patent/WO2019184164A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/60: Software deployment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Definitions

  • The present application belongs to the field of data processing technologies, and in particular, to a method, an apparatus, a terminal device, and a computer readable storage medium for automatically deploying a Kubernetes slave node.
  • Traditional virtualization technologies fall short in performance and resource utilization; the container technology provided by Docker divides the resources managed by a single operating system into isolated groups, improving resource utilization, and has gradually become a research hotspot.
  • This container technology allows several containers to run on the same host or virtual machine, each of which is a separate virtual environment or application.
  • Kubernetes, as an open-source container operation platform, can combine several containers into one service and dynamically allocate the hosts that run the containers, which provides great convenience for users of containers.
  • In a Kubernetes cluster, there are two types of nodes: the master node and the slave node.
  • The master node is responsible for the management and scheduling of resources in the Kubernetes cluster, while the slave node is responsible for running containers and serves as the container host.
  • In the prior art, if a new slave node is to be deployed in a Kubernetes cluster, a large number of components need to be manually installed on the corresponding host or virtual machine, and a large amount of configuration is required. The existing method of deploying a Kubernetes slave node therefore relies mainly on manual operations; the deployment process is cumbersome, configuration errors are likely to occur during deployment, and the error rate is high.
  • In view of this, the embodiments of the present application provide a method, an apparatus, a terminal device, and a computer readable storage medium for automatically deploying a Kubernetes slave node, so as to solve the prior-art problem that deploying a Kubernetes slave node is cumbersome and error-prone.
  • A first aspect of the embodiments of the present application provides a method for automatically deploying a Kubernetes slave node, including: acquiring request information of a user, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
  • searching for a Kubernetes cluster corresponding to the primary node identifier, and acquiring, from a database, a to-be-deployed node identifier corresponding to the node parameter; and
  • invoking a control server to find the node to be deployed corresponding to the to-be-deployed node identifier, and deploying the node to be deployed as a Kubernetes slave node of the Kubernetes cluster.
  • a second aspect of an embodiment of the present application provides an apparatus for automatically deploying a Kubernetes slave node, which may include means for implementing the steps of the method of automatically deploying a Kubernetes slave node as described above.
  • A third aspect of the embodiments of the present application provides a terminal device, including a memory and a processor, where the memory stores computer readable instructions executable on the processor, and the processor implements the steps of the above method for automatically deploying a Kubernetes slave node when executing the computer readable instructions.
  • A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the steps of the above method for automatically deploying a Kubernetes slave node.
  • In the embodiments of the present application, the request information of the user is acquired, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node; the Kubernetes master node is found by means of the primary node identifier, thereby determining the Kubernetes cluster to which the master node belongs, and the to-be-deployed node identifier corresponding to the node parameter is looked up in the database; finally, the control server is invoked to find the node to be deployed corresponding to the to-be-deployed node identifier and deploy it as a Kubernetes slave node of the Kubernetes cluster.
  • In this way, after the request information of the user is obtained, the Kubernetes slave node is deployed automatically, which reduces the possibility of errors during deployment, saves manpower, and improves deployment efficiency.
  • FIG. 1 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in the first embodiment of the present application
  • FIG. 2 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 2 of the present application;
  • FIG. 3 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 3 of the present application;
  • FIG. 4 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 4 of the present application;
  • FIG. 5 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 5 of the present application;
  • FIG. 6 is an implementation structural diagram of a method for automatically deploying a Kubernetes slave node in Embodiment 6 of the present application;
  • FIG. 7 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 7 of the present application;
  • FIG. 8 is a structural block diagram of an apparatus for automatically deploying a Kubernetes slave node in Embodiment 8 of the present application;
  • FIG. 9 is a schematic diagram of a terminal device in Embodiment 9 of the present application.
  • FIG. 1 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node according to an embodiment of the present application. As shown in Figure 1, the method includes the following steps:
  • S101: Acquire request information of the user, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node.
  • the automatic deployment of the new Kubernetes slave node is implemented on the basis of the existing Kubernetes cluster.
  • Kubernetes is an open source platform for automated container operations. It can implement the functions of container deployment, scheduling, and inter-cluster expansion.
  • the physical machine nodes or virtual machines configured with Kubernetes environment are called Kubernetes nodes.
  • A Kubernetes cluster is composed of multiple Kubernetes nodes and can implement the deployment and management of containers.
  • Within a Kubernetes cluster, there is one and only one control unit, the Kubernetes master node (Kubernetes Master), which is responsible for scheduling and managing Kubernetes services, for example allocating a container of a service to a slave node of the Kubernetes cluster.
  • The Kubernetes master node contains four subcomponents: the database (Etcd) component, the interface service (Kube ApiServer) component, the scheduling (Kube Scheduler) component, and the control (Kube Controller Manager) component. The embodiment of the present application mainly relates to the interface service component of the Kubernetes master node, which is configured to receive and process requests for the Kubernetes cluster.
  • In addition to the master node, the Kubernetes cluster also includes Kubernetes slave nodes (Kubernetes Node), which actually run the containers allocated by the Kubernetes master node.
  • In the embodiment of the present application, on the basis of an established Kubernetes master node and Kubernetes cluster, the node to be added to the Kubernetes cluster is determined from the multiple nodes of the slave node resource pool, where the slave node resource pool refers to a collection of runnable physical nodes or virtual machines.
  • For ease of explanation, only the case where the slave node resource pool stores multiple cloud host virtual machines is described; a cloud host virtual machine is a part virtualized on a cluster host that behaves like an independent host. It should be understood, however, that the slave node resource pool may also store other physical nodes, such as physical servers, for use in the embodiments of the present application.
  • To determine the node to be added to the Kubernetes cluster, the user's request information is first obtained, where the request information includes the primary node identifier and the node parameter.
  • Generally, the master node identifier is the Internet Protocol (IP) address exposed by the interface service component of the Kubernetes master node; the corresponding Kubernetes master node can be found automatically through the master node identifier, and thereby the corresponding Kubernetes cluster.
  • the node parameter is related to the physical node or the virtual machine node, and is used to find a specific physical node or virtual machine node to be added to the Kubernetes cluster.
  • Specifically, each physical node or virtual machine node has a node identifier, and the node parameter has a unique correspondence with the node identifier; the node parameter can therefore be used to indicate the node identifier and thus the physical node or virtual machine node.
  • the front end page based on the Kubernetes cluster and the slave node resource pool is provided to the user, and the request information sent by the user through the front end page is received.
  • Since the Kubernetes cluster has been established, the cluster information of the Kubernetes cluster, such as the name of the Kubernetes cluster and the names of the existing Kubernetes slave nodes, can be obtained, together with the feature information of the multiple cloud host virtual machines in the slave node resource pool, such as the names of the cloud host virtual machines; through the user interface provided by the Kubernetes cluster, the cluster information and feature information are integrated into the front-end page for display.
  • The user can view the front-end page on the user equipment, for example by logging in through a domain name, and select the cloud host virtual machine to be added to the Kubernetes cluster; after the selection is completed, the request information corresponding to the user's selection result is automatically generated and sent.
  • Sending the request information to the Kubernetes cluster in this way replaces the operations of the user querying nodes and manually sending the request information, which improves the convenience of generating the request information. It is worth mentioning that, in order to improve the security of the request information, the request information is sent in the Hypertext Transfer Protocol over Secure Socket Layer (HTTPS) format.
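  • As a minimal, hedged sketch of how such an HTTPS request might be issued from the user equipment (the endpoint path, JSON field names, and bearer token below are illustrative assumptions and are not specified in this application):

```python
# Hypothetical sketch: send the request information (master node identifier and
# node parameter) to the Kubernetes hypervisor over HTTPS. The URL, JSON field
# names and Authorization header are assumptions, not part of this application.
import json
import urllib.request

def send_request_info(manager_url, master_node_ip, node_param, token):
    payload = json.dumps({
        "master_node_id": master_node_ip,  # IP exposed by the Kube ApiServer component
        "node_param": node_param,          # indicates the selected cloud host VM
    }).encode("utf-8")
    req = urllib.request.Request(
        manager_url + "/api/v1/slave-nodes",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # HTTPS when manager_url is https://
        return json.loads(resp.read().decode("utf-8"))
```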
  • the cluster information of the Kubernetes cluster and the feature information of the plurality of cloud host virtual machines related to the user equipment are provided to the user.
  • On the basis of providing the front-end page to the user, a front-end page containing the cluster information and feature information related to the user equipment is provided according to the authority of the user equipment. For example, there may be multiple Kubernetes clusters; after the Kubernetes cluster to which the user equipment belongs is determined, only the cluster information and feature information related to that Kubernetes cluster are provided to the user, which improves the security of the process of deploying the Kubernetes slave node.
  • S102: Search for a Kubernetes cluster corresponding to the primary node identifier, and obtain a node identifier to be deployed corresponding to the node parameter from a database.
  • the request information is parsed to obtain the primary node identifier and the node parameter therein.
  • the corresponding Kubernetes primary node is found according to the primary node identifier, thereby determining the Kubernetes cluster corresponding to the Kubernetes primary node, and searching for the to-be-deployed node identifier corresponding to the node parameter from the database.
  • In the embodiment of the present application, in order to improve the accuracy of deploying the Kubernetes slave node, a hypervisor (Kubernetes Manager) may be set to manage the Kubernetes cluster and to perform the operations of step S101 and step S102.
  • Specifically, after acquiring the request information of the user, the hypervisor parses the request information and acquires the primary node identifier and the node parameter in it. Since the hypervisor may manage multiple Kubernetes clusters at the same time, after obtaining the primary node identifier it determines the Kubernetes cluster corresponding to that identifier, calls the data interface provided by the database, and searches, among the multiple node identifiers in the database, for the to-be-deployed node identifier corresponding to the node parameter. It is worth mentioning that, if there are multiple Kubernetes clusters, the database is shared by the multiple Kubernetes clusters.
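  • A minimal sketch of how the hypervisor might resolve such a request (the table and column names and the in-memory cluster registry are illustrative assumptions; the application only states that the database maps node parameters to node identifiers):

```python
# Hypothetical sketch: determine the Kubernetes cluster from the master node
# identifier and look up the to-be-deployed node identifier in the shared
# database. Table and column names are illustrative assumptions.
import sqlite3

def resolve_request(request, clusters, db_path="nodes.db"):
    master_id = request["master_node_id"]
    node_param = request["node_param"]
    cluster = clusters.get(master_id)        # clusters: {apiserver_ip: cluster_info}
    if cluster is None:
        raise ValueError("no Kubernetes cluster for master node %s" % master_id)
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT node_id FROM node_pool WHERE node_param = ?", (node_param,)
        ).fetchone()
    finally:
        conn.close()
    if row is None:
        raise ValueError("no node identifier for parameter %s" % node_param)
    return cluster, row[0]                   # (cluster, to-be-deployed node identifier)
```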
  • S103: Call a control server to find a node to be deployed corresponding to the node identifier to be deployed, and deploy the node to be deployed as a Kubernetes slave node of the Kubernetes cluster.
  • After obtaining the to-be-deployed node identifier, the hypervisor calls the control server, specifically the add-node interface of the control server. It is worth mentioning that the control server is an independent server that is not controlled by the Kubernetes cluster and is mainly configured to control the cloud host virtual machines in the slave node resource pool.
  • The multiple cloud host virtual machines in the slave node resource pool each have a corresponding node identifier. Therefore, after obtaining the to-be-deployed node identifier, the control server searches the slave node resource pool for the cloud host virtual machine corresponding to that identifier, uses it as the node to be deployed, and controls the node to be deployed so as to deploy it as a Kubernetes slave node of the Kubernetes cluster.
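  • A hedged sketch of the lookup the control server performs on the slave node resource pool (the dictionary-based representation of the pool is an assumption for illustration only):

```python
# Hypothetical sketch: on the control server, find the cloud host virtual machine
# in the slave node resource pool whose identifier matches the to-be-deployed
# node identifier. The resource-pool representation is an illustrative assumption.
def find_node_to_deploy(resource_pool, node_id):
    """resource_pool: iterable of dicts such as {"id": ..., "address": ..., "state": ...}."""
    for node in resource_pool:
        if node["id"] == node_id:
            return node  # this VM will be deployed as the Kubernetes slave node
    raise LookupError("node identifier %s not found in the slave node resource pool" % node_id)
```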
  • FIG. 6 shows an implementation architecture diagram of a method for automatically deploying a Kubernetes slave node.
  • In the architecture diagram, a hypervisor is configured to manage the Kubernetes cluster, and a slave node resource pool stores multiple cloud host virtual machines.
  • The diagram reflects the entire process of automatically deploying a Kubernetes slave node.
  • The hypervisor receives the request information of the user, parses the primary node identifier and the node parameter in the request information, finds the corresponding Kubernetes cluster according to the primary node identifier, and finds the corresponding to-be-deployed node identifier from the database according to the node parameter.
  • The hypervisor then invokes the control server; the control server determines, from the multiple cloud host virtual machines of the slave node resource pool, the cloud host virtual machine corresponding to the to-be-deployed node identifier, and deploys that cloud host virtual machine as a Kubernetes slave node added to the Kubernetes cluster.
  • As shown in the embodiment of FIG. 1, in the embodiment of the present application, the request information of the user is acquired, and the request information includes the primary node identifier and the node parameter; the target of adding the Kubernetes slave node, that is, the Kubernetes cluster corresponding to the master node identifier, is first determined, the to-be-deployed node identifier corresponding to the node parameter is obtained from the database, and finally the node to be deployed corresponding to the to-be-deployed node identifier is found through the control server and deployed as a Kubernetes slave node of the Kubernetes cluster. By building an automated deployment, manual operations are reduced and the deployment efficiency of Kubernetes slave nodes is improved.
  • FIG. 2 is a flowchart of an implementation method of automatically deploying a Kubernetes slave node according to Embodiment 2 of the present application.
  • This embodiment refines the process after S102 to obtain S201 to S203, which are as follows:
  • S201: Acquire multiple available node identifiers of multiple available nodes in the slave node resource pool, and compare the to-be-deployed node identifier with the multiple available node identifiers.
  • In this embodiment, the nodes stored in the slave node resource pool are cloud host virtual machines, although the slave node resource pool may also store other nodes such as physical nodes, which may likewise be used in the embodiments of the present application.
  • The node parameter is included in the request information, and the request information is automatically generated according to the cloud host virtual machine selected by the user. The user could specify the cloud host virtual machine to select by writing code, but in the embodiment of the present application the user selects a cloud host virtual machine displayed on the front-end page of the Kubernetes cluster and the slave node resource pool by clicking on it; the selection is thereby completed, and the corresponding request information is automatically generated in the background.
  • However, when the user makes the selection, the state of the cloud host virtual machine cannot be known.
  • That is, because the front-end page is not updated in real time according to the status of the Kubernetes cluster and the slave node resource pool, at the time the request information is generated, the cloud host virtual machine corresponding to the to-be-deployed node identifier indicated by the node parameter may already have joined the Kubernetes cluster corresponding to the master node identifier, or another Kubernetes cluster.
  • Therefore, the multiple available node identifiers of the multiple available nodes in the slave node resource pool are obtained, and the to-be-deployed node identifier is compared with the multiple available node identifiers.
  • The multiple available nodes refer to the cloud host virtual machines in the slave node resource pool that have not joined a Kubernetes cluster and are in a running state.
  • Optionally, the multiple available node identifiers of the slave node resource pool are updated, and the multiple available node identifiers are stored in the database.
  • Specifically, the status of the nodes in the slave node resource pool is updated, and the multiple available nodes and their corresponding available node identifiers are updated accordingly.
  • The update may be performed in real time, or at set time intervals.
  • After each update, the multiple available node identifiers are stored in the database; the storage method may be to create, in the database, a state data table dedicated to storing the available node identifiers.
  • When the comparison is performed, the multiple available node identifiers are obtained directly from the database, which reduces operational complexity and improves the accuracy of selecting the node to be deployed.
  • The to-be-deployed node identifier is compared with the multiple available node identifiers. If the to-be-deployed node identifier matches one of the multiple available node identifiers, it is proved that the cloud host virtual machine corresponding to the to-be-deployed node identifier is available, and the node (cloud host virtual machine) corresponding to the matched available node identifier is used as the node to be deployed.
  • Otherwise, an error prompt is output to the user; the error prompt may include the cloud host virtual machine corresponding to the to-be-deployed node identifier, so that the user can check that cloud host virtual machine.
  • In summary, multiple available node identifiers of the multiple available nodes in the slave node resource pool are obtained, the to-be-deployed node identifier is compared with the multiple available node identifiers, and the validity of the to-be-deployed node identifier is thereby determined. If the to-be-deployed node identifier matches one of the multiple available node identifiers, the to-be-deployed node identifier is valid, and the node corresponding to the matched available node identifier is used as the node to be deployed.
  • If the comparison of the to-be-deployed node identifier against all of the available node identifiers fails, the to-be-deployed node identifier is invalid, and an error prompt is sent to the user.
  • In this embodiment, the validity of the to-be-deployed node identifier is verified before the node to be deployed is determined, which avoids wasted deployment effort; after the validity verification passes, the node corresponding to the to-be-deployed node identifier is guaranteed to be available.
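  • A minimal sketch of this validity check (the data shapes are illustrative assumptions; the application only requires comparing the to-be-deployed node identifier against the available node identifiers read from the database):

```python
# Hypothetical sketch of the S201-S203 validity check: compare the to-be-deployed
# node identifier with the available node identifiers and either accept the node
# or report an error prompt for the user.
def check_node_validity(node_id, available_ids):
    """available_ids: identifiers of VMs that are running and not yet in a cluster."""
    if node_id in set(available_ids):
        return True, node_id  # valid: this node can be deployed
    return False, ("node %s is unavailable; it may already belong to a "
                   "Kubernetes cluster or may not be running" % node_id)
```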
  • FIG. 3 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node according to Embodiment 3 of the present application.
  • In this embodiment, S103 is refined to obtain S301 to S302, which are as follows:
  • S301: Automatically send a deployment instruction corresponding to the request information to the control server.
  • On the node to be deployed (the cloud host virtual machine) corresponding to the to-be-deployed node identifier, an agent client (Agent) is pre-installed to operate the node to be deployed. Since the node to be deployed needs to be deployed as a Kubernetes slave node of the Kubernetes cluster, after receiving the request information of the user, the management program of the Kubernetes cluster parses the content of the request information, calls the control interface of the control server, and sends to the control server a deployment instruction corresponding to the request information; preferably, the deployment instruction is sent in the HTTPS format.
  • S302: Establish a socket connection between the control server and the proxy client, so that after receiving the deployment instruction, the control server causes the proxy client to deploy, according to the deployment instruction, the node to be deployed as the Kubernetes slave node of the Kubernetes cluster.
  • In this embodiment, the control server establishes a socket connection with the proxy client. A socket essentially provides an endpoint of communication, so before communication can take place, each party first creates its endpoint, that is, the control server socket and the proxy client socket. The establishment of a socket connection is mainly divided into three steps: control server listening, proxy client request, and connection confirmation.
  • During control server listening, the control server socket is in a waiting state, monitoring connection requests in the network in real time without looking for a specific proxy client socket. During the proxy client request, the proxy client socket installed on the node to be deployed first obtains the information of the control server socket, including the address and port number of the control server socket, and then sends a connection request to the control server socket. During connection confirmation, the control server socket, which successfully listens for the connection request of the proxy client socket, responds to the connection request by sending its own socket information to the proxy client socket; after the proxy client socket confirms this information, the establishment of the socket connection is completed.
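  • A hedged sketch of this three-step handshake using plain TCP sockets (the port number and the direction of subsequent messages are illustrative assumptions):

```python
# Hypothetical sketch of the socket connection: the control server listens and
# waits for any agent client; the agent client on the node to be deployed
# connects using the control server's address and port; accept() completes the
# connection confirmation. Port and addresses are illustrative assumptions.
import socket

def control_server_listen(host="0.0.0.0", port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()                       # waiting state: not tied to a specific client
    conn, addr = srv.accept()          # connection confirmation
    return srv, conn, addr

def agent_client_connect(server_addr, server_port=9000):
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((server_addr, server_port))   # proxy client request
    return cli
```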
  • After the control server establishes the socket connection with the proxy client, the proxy client is controlled based on the received deployment instruction, so that the proxy client performs the deployment operation on the node to be deployed according to the deployment instruction, and the node to be deployed is added to the Kubernetes cluster as a Kubernetes slave node.
  • In this embodiment, the deployment instruction corresponding to the request information is automatically sent to the control server, and a socket connection is established between the control server and the proxy client, so that after receiving the deployment instruction, the control server causes the proxy client to deploy the node to be deployed as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
  • Performing the deployment operation through the pre-installed proxy client further improves the degree of automation of deploying the Kubernetes slave node, and the socket connection established between the control server and the proxy client improves the stability of the deployment operation.
  • FIG. 4 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node according to Embodiment 4 of the present application.
  • In this embodiment, the process before S302 is refined to obtain S401 to S403, which are as follows:
  • S401: Extract the parameters other than the authentication signature value from the deployment instruction, and calculate a calculated signature value of the parameters by using a parameter signature algorithm.
  • In this embodiment, before the socket connection is established, the control server performs an authentication operation.
  • Specifically, when generating the deployment instruction, the hypervisor extracts the parameters of the deployment instruction and sorts them, and calculates the authentication signature value through the parameter signature algorithm based on the sorted parameters and a customized string token.
  • Optionally, the parameter signature algorithm combines a Hash-based Message Authentication Code (HMAC) with the Secure Hash Algorithm (SHA), that is, the HMAC-SHA1 signature authentication algorithm.
  • The calculated authentication signature value is thus a signature digest based on the sorted parameters and the token, and is carried in the deployment instruction.
  • After receiving the deployment instruction, the control server extracts the parameters other than the authentication signature value, sorts them, and, through the same HMAC-SHA1 signature authentication algorithm, calculates the calculated signature value based on the sorted parameters and the same string token.
  • The control server then compares the calculated signature value with the authentication signature value. If the calculated signature value is equal to the authentication signature value, the authentication passes, and the control server continues to perform the operation of establishing the socket connection with the proxy client.
  • Otherwise, the authentication fails, the control server does not have the authority to execute the deployment instruction, and the subsequent operations are stopped.
  • the embodiment of the present application performs authentication before establishing a socket connection, thereby improving the security of the connection.
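  • A minimal sketch of such an HMAC-SHA1 parameter signature check (the canonicalization of the sorted parameters and the shared token are illustrative assumptions; the application only specifies sorting the parameters and signing them with a customized string token):

```python
# Hypothetical sketch: sort the deployment-instruction parameters (excluding the
# signature itself), sign them with the shared string token using HMAC-SHA1, and
# accept the instruction only if the digests match.
import hmac
import hashlib

def compute_signature(params, token):
    """params: dict of deployment-instruction parameters, excluding the signature."""
    canonical = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
    return hmac.new(token.encode(), canonical.encode(), hashlib.sha1).hexdigest()

def authenticate(params, received_signature, token):
    calculated = compute_signature(params, token)
    return hmac.compare_digest(calculated, received_signature)
```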
  • FIG. 5 is a flowchart of an implementation method for automatically deploying a Kubernetes slave node according to Embodiment 5 of the present application.
  • In this embodiment, S103 is refined to obtain S501 to S503, which are as follows:
  • S501: Obtain a binary file related to the Kubernetes cluster from a file server.
  • S502: Acquire the slave node service files in the binary file, and obtain the node address of the node to be deployed.
  • the binary file obtained from the file server includes a master node service file related to the Kubernetes master node and a slave node service file related to the Kubernetes slave node, wherein the master node service file includes an interface service component file and a scheduling component file, and the like.
  • The slave node service files include the Kubectl file, the Kubelet file, and the Kube-Proxy file. Therefore, in the embodiment of the present application, the slave node service files are obtained from the binary file, and the node address of the node to be deployed is obtained for deployment.
  • Optionally, a master node identifier and a slave node identifier are respectively set for the master node service file and the slave node service file.
  • Since the slave node identifier is set for the slave node service file, which is stored in the file server, when the node to be deployed is deployed later, the slave node service file is obtained directly from the file server according to the slave node identifier, without acquiring the entire binary file containing both the master node service file and the slave node service file, which saves extraction of the binary file and improves deployment efficiency.
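  • A hedged sketch of fetching only the slave node service files from the file server by such an identifier (the URL layout, file names, and destination directory are illustrative assumptions):

```python
# Hypothetical sketch: download only the slave node service files (Kubectl,
# Kubelet, Kube-Proxy) from the file server under a slave-node tag, instead of
# extracting them from the full binary package. Paths are illustrative assumptions.
import os
import urllib.request

SLAVE_FILES = ["kubectl", "kubelet", "kube-proxy"]

def fetch_slave_service_files(file_server_url, slave_tag="slave",
                              dest_dir="/opt/kubernetes/bin"):
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for name in SLAVE_FILES:
        url = "%s/%s/%s" % (file_server_url, slave_tag, name)
        dest = os.path.join(dest_dir, name)
        urllib.request.urlretrieve(url, dest)  # pull just this service file
        paths.append(dest)
    return paths
```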
  • S503: Automatically configure the slave node service based on the slave node service file and the node address.
  • First, the node to be deployed is cleaned, that is, the original configuration of the node to be deployed is cleared.
  • Then, certificates related to the Kubernetes cluster are generated, mainly including a certification authority (CA) certificate, a kubernetes certificate, an admin certificate, and a proxy certificate.
  • Key files corresponding to the above certificates are also generated and subsequently distributed; the specific distribution process will be described later.
  • Next, the flannel service is configured and started based on the slave node service file.
  • On the one hand, flannel is the network planning service for Kubernetes, which ensures that the Docker containers created on different Kubernetes slave nodes in the cluster have virtual IP addresses that are unique across the entire Kubernetes cluster; on the other hand, flannel is essentially an overlay network, that is, Transmission Control Protocol (TCP) packets are encapsulated in another type of network packet for routing and communication.
  • Kubectl is a console tool for Kubernetes cluster management, which provides users with a large number of commands for viewing and managing the Kubernetes cluster.
  • Kubelet is the container management tool on the Kubernetes slave node, which is used to process the tasks that the Kubernetes master node delivers to the Kubernetes slave node.
  • Kubelet registers the Kubernetes slave node information with the interface service component of the Kubernetes master node, periodically reports the resource usage of the Kubernetes slave node to the Kubernetes master node, and monitors the container and node resources inside the Kubernetes slave node.
  • Kube-Proxy is the entry component of the Kubernetes service and is used to manage access to the Kubernetes service. Based on the slave node service files and the node address of the node to be deployed, the Kubectl service, the Kubelet service, and the Kube-Proxy service are deployed and started in sequence. It is worth mentioning that the key files corresponding to the CA certificate and the admin certificate are distributed to the Kubectl service, the key file corresponding to the CA certificate is distributed to the Kubelet service, and the key files corresponding to the CA certificate and the proxy certificate are distributed to the Kube-Proxy service. Finally, when the Kube-Proxy service is successfully deployed and started, the node to be deployed has been successfully deployed as a Kubernetes slave node.
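  • A hedged sketch of how the agent client might start these services in sequence via systemd (the unit names and the use of systemctl are illustrative assumptions; the application does not specify a service manager):

```python
# Hypothetical sketch: enable and start the slave node services in order and
# verify each one is active before moving on. Unit names are assumptions;
# writing the kubelet/kube-proxy configuration and distributing the certificate
# key files are omitted here.
import subprocess

SERVICE_UNITS = ["flanneld", "docker", "kubelet", "kube-proxy"]

def start_slave_services():
    for unit in SERVICE_UNITS:
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)
        # raises CalledProcessError if the unit did not come up
        subprocess.run(["systemctl", "is-active", "--quiet", unit], check=True)
    return True
```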
  • FIG. 7 is a flowchart showing an implementation of deploying a node to be deployed after setting a retry and recording mechanism.
  • Each of x, y, and z is an integer greater than zero.
  • As shown in FIG. 7, the deployment is divided into a sequence of links: the state of the node to be deployed is checked, the binary files are obtained, the node to be deployed is cleaned, certificate-related files are generated, the flannel service is configured and started, the container is installed and started, the Kubectl service is configured and started, the Kubelet service is configured and started, and the Kube-Proxy service is configured and started.
  • A retry and record mechanism is set for each link, together with a threshold on the number of attempts; when a link succeeds, the next link is entered.
  • For example, the threshold on the number of checks is y, with y greater than x; if the number of times the node to be deployed is found not ready has not yet reached the check threshold, the link of checking the state of the node to be deployed is re-entered. After all the links are passed, the deployment of the node to be deployed is completed.
  • the above method improves the fault tolerance of the node to be deployed through the retry and record mechanism, and facilitates the developer to locate the problem by recording the failed link. It is worth mentioning that the flowchart shown in FIG. 7 is only an example. In the actual application scenario, the threshold of the number of times and the waiting time of each link can be freely set according to actual conditions.
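  • A minimal sketch of such a retry-and-record mechanism (the threshold, wait time, and logging sink are illustrative assumptions; FIG. 7 only requires per-link retries with recorded failures):

```python
# Hypothetical sketch: run each deployment link in order, retrying up to a
# per-link threshold and recording failures so the failed link can be located.
import time
import logging

def run_links(links, retry_threshold=3, wait_seconds=5):
    """links: ordered list of (name, callable) pairs; each callable returns True on success."""
    for name, step in links:
        for attempt in range(1, retry_threshold + 1):
            if step():
                logging.info("link %s succeeded on attempt %d", name, attempt)
                break
            logging.warning("link %s failed (attempt %d)", name, attempt)
            time.sleep(wait_seconds)
        else:
            logging.error("link %s failed %d times, aborting", name, retry_threshold)
            return False
    return True
```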
  • FIG. 8 is a structural block diagram of an apparatus for automatically deploying a Kubernetes slave node according to an embodiment of the present application.
  • Referring to FIG. 8, the apparatus includes:
  • the obtaining unit 81 is configured to acquire request information of the user, where the request information includes a primary node identifier and a node parameter, where the node parameter is used to indicate a physical node or a virtual machine node;
  • the searching unit 82 is configured to search for a Kubernetes cluster corresponding to the primary node identifier, and obtain, from the database, a node identifier to be deployed corresponding to the node parameter;
  • the deployment unit 83 is configured to invoke a control server to find a node to be deployed corresponding to the node identifier to be deployed, and deploy the node to be deployed as a Kubernetes slave node of the Kubernetes cluster.
  • the searching unit 82 further includes:
  • An identifier obtaining unit configured to acquire a plurality of available node identifiers of the plurality of available nodes in the node resource pool, and compare the node identifier to be deployed with the plurality of available node identifiers;
  • a node determining unit, configured to use, as the node to be deployed, the node corresponding to the matched available node identifier if the comparison between the to-be-deployed node identifier and one of the multiple available node identifiers succeeds;
  • an output unit configured to output an error prompt to the user if the comparison between the node identifier to be deployed and the multiple available node identifiers fails.
  • the deployment unit 83 includes:
  • an instruction sending unit, configured to automatically send a deployment instruction corresponding to the request information to the control server; and
  • a connection establishing unit, configured to establish a socket connection between the control server and the proxy client, so that after receiving the deployment instruction, the control server causes the proxy client to deploy, according to the deployment instruction, the node to be deployed as the Kubernetes slave node of the Kubernetes cluster.
  • Optionally, the connection establishing unit further includes:
  • a calculating unit, configured to extract the parameters other than the authentication signature value from the deployment instruction, and calculate a calculated signature value of the parameters by using a parameter signature algorithm;
  • an execution unit, configured to perform the operation of establishing the socket connection between the control server and the proxy client if the authentication signature value is equal to the calculated signature value; and
  • a stopping unit, configured to stop performing the subsequent operations if the authentication signature value is not equal to the calculated signature value.
  • the deployment unit 83 includes:
  • a file obtaining unit, configured to acquire a binary file related to the Kubernetes cluster from a file server;
  • An address obtaining unit configured to acquire a slave node service file in the binary file, and obtain a node address of the node to be deployed;
  • a service configuration unit configured to automatically configure a slave node service based on the slave node service file and the node address.
  • FIG. 9 is a schematic diagram of a terminal device according to an embodiment of the present application.
  • The terminal device 9 of this embodiment includes a processor 90 and a memory 91, where the memory 91 stores computer readable instructions 92 executable on the processor 90, for example, a program for automatically deploying a Kubernetes slave node.
  • the processor 90 when executing the computer readable instructions 92, implements the functions of the various units of the apparatus embodiments described above, such as the functions of units 81 through 83 of FIG.
  • The computer readable instructions 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to complete the present application.
  • the one or more modules/units may be a series of computer readable instruction segments capable of performing a particular function, the instruction segments being used to describe the execution of the computer readable instructions 92 in the terminal device 9.
  • the computer readable instructions 92 can be partitioned into an acquisition unit, a lookup unit, and a deployment unit, each unit having a specific function as described above.
  • The terminal device may include, but is not limited to, the processor 90 and the memory 91. It will be understood by those skilled in the art that FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9; the terminal device may include more or fewer components than those illustrated, combine some components, or have different components.
  • the terminal device may further include an input/output device, a network access device, a bus, and the like.
  • The processor 90 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9.
  • The memory 91 may also be an external storage device of the terminal device 9, for example, a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 9. Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
  • the memory 91 is configured to store the computer readable instructions and other programs and data required by the terminal device.
  • the memory 91 can also be used to temporarily store data that has been output or is about to be output.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The computer readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present solution is applicable to the technical field of data processing and provides a method for automatically deploying a Kubernetes worker node, a terminal apparatus, and a computer readable storage medium. The method comprises: acquiring request information of a user, the request information comprising a master node identifier and a node parameter, and the node parameter indicating a physical node or a virtual machine node; searching for a Kubernetes cluster corresponding to the master node identifier, and acquiring, from a database, an identifier of a node to be deployed corresponding to the node parameter; and calling a control server, so as to search for the node to be deployed corresponding to the identifier of the node to be deployed, and deploying the node as a Kubernetes worker node of the Kubernetes cluster. The present solution realizes automatic deployment of Kubernetes worker nodes, reduces the possibility of error during a deployment process, and significantly improves deployment efficiency.

Description

Method, device, terminal device and readable storage medium for automatically deploying Kubernetes slave nodes
This application claims priority to Chinese Patent Application No. 201810277483.8, entitled "Method and terminal device for automatically deploying a Kubernetes slave node", filed with the Chinese Patent Office on March 31, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present application belongs to the field of data processing technologies, and in particular, to a method, an apparatus, a terminal device, and a computer readable storage medium for automatically deploying a Kubernetes slave node.
Background
Traditional virtualization technologies have shortcomings in terms of performance and resource utilization, whereas the container technology provided by Docker divides the resources managed by a single operating system into isolated groups, improving resource utilization, and has gradually become a research hotspot. This container technology allows several containers to run on the same host or virtual machine, each of which is a separate virtual environment or application. Kubernetes, as an open-source container operation platform, can combine several containers into one service and dynamically allocate the hosts that run the containers, which provides great convenience for users of containers.
In a Kubernetes cluster, there are two types of nodes: the master node and the slave node. The master node is responsible for the management and scheduling of resources in the Kubernetes cluster, while the slave node is responsible for running containers and serves as the container host. In the prior art, if a new slave node is to be deployed in a Kubernetes cluster, a large number of components need to be manually installed on the corresponding host or virtual machine, and a large amount of configuration is required. The existing method of deploying a Kubernetes slave node therefore relies mainly on manual operations; the deployment process is cumbersome, configuration errors are likely to occur during deployment, and the error rate is high.
Technical problem
In view of this, the embodiments of the present application provide a method, an apparatus, a terminal device, and a computer readable storage medium for automatically deploying a Kubernetes slave node, so as to solve the prior-art problem that deploying a Kubernetes slave node is cumbersome and error-prone.
Technical solution
A first aspect of the embodiments of the present application provides a method for automatically deploying a Kubernetes slave node, including:
acquiring request information of a user, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
searching for a Kubernetes cluster corresponding to the primary node identifier, and acquiring, from a database, a to-be-deployed node identifier corresponding to the node parameter; and
invoking a control server to find the node to be deployed corresponding to the to-be-deployed node identifier, and deploying the node to be deployed as a Kubernetes slave node of the Kubernetes cluster.
A second aspect of the embodiments of the present application provides an apparatus for automatically deploying a Kubernetes slave node, which may include units for implementing the steps of the above method for automatically deploying a Kubernetes slave node.
A third aspect of the embodiments of the present application provides a terminal device, including a memory and a processor, where the memory stores computer readable instructions executable on the processor, and the processor implements the steps of the above method for automatically deploying a Kubernetes slave node when executing the computer readable instructions.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing computer readable instructions that, when executed by a processor, implement the steps of the above method for automatically deploying a Kubernetes slave node.
Beneficial effects
In the embodiments of the present application, the request information of the user is acquired, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node; the Kubernetes master node is found by means of the primary node identifier, thereby determining the Kubernetes cluster to which the master node belongs, and the to-be-deployed node identifier corresponding to the node parameter is looked up in the database; finally, the control server is invoked to find the node to be deployed corresponding to the to-be-deployed node identifier and deploy it as a Kubernetes slave node of the Kubernetes cluster. In this way, after the request information of the user is obtained, the Kubernetes slave node is deployed automatically, which reduces the possibility of errors during deployment, saves manpower, and improves deployment efficiency.
Brief description of the drawings
FIG. 1 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 1 of the present application;
FIG. 2 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 2 of the present application;
FIG. 3 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 3 of the present application;
FIG. 4 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 4 of the present application;
FIG. 5 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 5 of the present application;
FIG. 6 is an implementation architecture diagram of a method for automatically deploying a Kubernetes slave node in Embodiment 6 of the present application;
FIG. 7 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node in Embodiment 7 of the present application;
FIG. 8 is a structural block diagram of an apparatus for automatically deploying a Kubernetes slave node in Embodiment 8 of the present application;
FIG. 9 is a schematic diagram of a terminal device in Embodiment 9 of the present application.
Embodiments of the invention
In order to provide a clearer understanding of the technical features, objects and effects of the present application, the specific embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of an implementation of a method for automatically deploying a Kubernetes slave node according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
S101: Acquire request information of the user, where the request information includes a primary node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node.
在本申请实施例中,在已有Kubernetes集群的基础上,实现新的Kubernetes从节点的自动部署。为了便于说明,首先介绍Kubernetes及Kubernetes集群的相关内容。Kubernetes为一款自动化容器操作的开源平台,能够实现对容器的部署、调度以及节点集群间扩展等功能,将配置有Kubernetes环境的物理机节点或虚拟机称作Kubernetes节点。通常来说,Kubernetes集群(Kubernetes Cluster)由多个Kubernetes节点组建而成,可实现对容器的部署和管理。在一个Kubernetes集群内,有且只有一套控制单元,即Kubernetes主节点(Kubernetes Master),主要负责调度和管理Kubernetes服务,如分配某个服务的某个容器到Kubernetes集群的某个从节点上,Kubernetes主节点包含四个子组件,分别为数据库(Etcd)组件、接口服务(Kube ApiServer)组件、调度(Kube Scheduler)组件和控制(Kube Controller Manager)组件,在本申请实施例中,主要涉及到Kubernetes主节点的接口服务组件,接口服务组件用于接收及处理对Kubernetes集群的请求。除了Kubernetes主节点之外,Kubernetes集群内还包括Kubernetes从节点(Kubernetes Node),用于实际运行由Kubernetes主节点分配的容器。In the embodiment of the present application, the automatic deployment of the new Kubernetes slave node is implemented on the basis of the existing Kubernetes cluster. For the sake of explanation, first introduce the relevant content of Kubernetes and Kubernetes cluster. Kubernetes is an open source platform for automated container operations. It can implement the functions of container deployment, scheduling, and inter-cluster expansion. The physical machine nodes or virtual machines configured with Kubernetes environment are called Kubernetes nodes. In general, Kubernetes Cluster (Kubernetes Cluster) is composed of multiple Kubernetes nodes, which can realize the deployment and management of containers. Within a Kubernetes cluster, there is one and only one control unit, the Kubernetes Master, which is responsible for scheduling and managing Kubernetes services, such as allocating a container of a service to a slave node of the Kubernetes cluster. The Kubernetes master node contains four subcomponents, namely the database (Etcd) component and the interface service (Kube). The ApiServer) component, the Kube Scheduler component and the Kube Controller Manager component, in the embodiment of the present application, mainly relate to an interface service component of the Kubernetes master node, and the interface service component is configured to receive and process the request for the Kubernetes cluster. . In addition to the Kubernetes master node, the Kubernetes cluster also includes a Kubernetes Node (Kubernetes Node) for actually running the container allocated by the Kubernetes master node.
本申请实施例在已建立Kubernetes主节点和Kubernetes集群的基础上,从从节点资源池的多个节点中确定待加入Kubernetes集群的节点,其中,从节点资源池是指可运行的物理节点或虚拟机的集合。为了便于解释,只针对从节点资源池内存放多个云主机虚拟机的情况进行说明,其中,云主机虚拟机是集群主机上虚拟出的类似于独立主机的部分,但应获知的是,从节点资源池也可存放其他如物理服务器的物理节点,以应用于本申请实施例中。为了确定待加入Kubernetes集群的节点,首先获取用户的请求信息,其中,请求信息包括主节点标识和节点参数。一般来说,主节点标识为Kubernetes主节点的接口服务组件向外提供的互联网协议地址(Internet Protocol Address,IP),通过主节点标识即可自动查找到对应的Kubernetes主节点,从而查找到对应的Kubernetes集群。而节点参数与物理节点或虚拟机节点相关,用于查找具体的待加入Kubernetes集群的物理节点或虚拟机节点,具体来说,物理节点或虚拟机节点都具备节点标识,而节点参数与节点标识具有唯一的对应关系,故节点参数可用于指示节点标识,从而指示物理节点或虚拟机节点。In the embodiment of the present application, on the basis that the Kubernetes master node and the Kubernetes cluster have been established, the nodes to be joined to the Kubernetes cluster are determined from the plurality of nodes of the slave node resource pool, wherein the slave node resource pool refers to a runnable physical node or virtual Machine collection. For ease of explanation, only the case where multiple cloud host virtual machines are stored in the node resource pool is described. The cloud host virtual machine is a virtual host similar to the independent host on the cluster host, but it should be known that the slave node The resource pool can also be used to store other physical nodes, such as physical servers, for use in the embodiments of the present application. In order to determine the node to be added to the Kubernetes cluster, the user's request information is first obtained, wherein the request information includes the primary node identifier and the node parameter. Generally, the master node identifier is an Internet Protocol Address (IP) provided by the interface service component of the Kubernetes master node, and the corresponding Kubernetes master node can be automatically found through the master node identifier, thereby finding the corresponding Kubernetes cluster. The node parameter is related to the physical node or the virtual machine node, and is used to find a specific physical node or virtual machine node to be added to the Kubernetes cluster. Specifically, the physical node or the virtual machine node has the node identifier, and the node parameter and the node identifier. There is a unique correspondence, so the node parameters can be used to indicate the node identification, thereby indicating the physical node or the virtual machine node.
可选地,向用户提供基于Kubernetes集群和从节点资源池的前端页面,并接收用户通过前端页面发送的请求信息。由于Kubernetes集群已建立,故可获取Kubernetes集群的集群信息,如Kubernetes集群的名称和已有的Kubernetes从节点名称等,并获取从节点资源池内多个云主机虚拟机的特征信息,比如多个云主机虚拟机的名称等,通过Kubernetes集群提供的用户接口,将集群信息和特征信息整合至前端页面进行展示。用户可在用户设备上通过域名登录等方式查看该前端页面,并选择需要添加入Kubernetes集群的云主机虚拟机,选择完成后,自动生成与用户的选择结果对应的请求信息,并将请求信息发送至Kubernetes集群,取代了用户查询节点及手动发送请求信息的操作,提升了请求信息生成的便利性。值得一提的是,为了提升请求信息的安全性,请求信息以基于安全套接层的超文本传输协议(Hyper Text Transfer Protocol over Secure Socket Layer,HTTPS)的格式进行发送。Optionally, the front end page based on the Kubernetes cluster and the slave node resource pool is provided to the user, and the request information sent by the user through the front end page is received. Since the Kubernetes cluster is established, you can obtain the cluster information of the Kubernetes cluster, such as the name of the Kubernetes cluster and the existing Kubernetes slave node name, and obtain the feature information of multiple cloud host virtual machines in the node resource pool, such as multiple clouds. The name of the host virtual machine, etc., through the user interface provided by the Kubernetes cluster, the cluster information and feature information are integrated into the front-end page for display. You can view the front-end page by using the domain name login method on the user device, and select the cloud host virtual machine that needs to be added to the Kubernetes cluster. After the selection is complete, the request information corresponding to the user's selection result is automatically generated, and the request information is sent. The Kubernetes cluster replaces the user query node and manually sends the request information, which improves the convenience of request information generation. It is worth mentioning that in order to improve the security of the request information, the request information is based on the Secure Sockets Layer-based hypertext transfer protocol (Hyper Text Transfer Protocol over Secure Socket Layer (HTTPS) format is sent.
Further, cluster information of the Kubernetes cluster related to the user device and feature information of the multiple cloud host virtual machines are provided to the user. On the basis of providing the front-end page to the user, a front-end page containing only the cluster information and feature information related to the user device is provided according to the permissions of the user device. For example, multiple Kubernetes clusters may exist; after the Kubernetes cluster to which the user device belongs is determined, only the cluster information and feature information related to that Kubernetes cluster are provided to the user, which improves the security of the process of deploying Kubernetes slave nodes.
S102: Find the Kubernetes cluster corresponding to the master node identifier, and obtain, from a database, the to-be-deployed node identifier corresponding to the node parameter.
After the user's request information is received, it is parsed to obtain the master node identifier and the node parameter. The corresponding Kubernetes master node is found according to the master node identifier, thereby determining the Kubernetes cluster corresponding to the Kubernetes master node, and the to-be-deployed node identifier corresponding to the node parameter is looked up in the database. In this embodiment, to improve the accuracy of deploying Kubernetes slave nodes, a management program (Kubernetes Manager) may be set up to manage the Kubernetes cluster and to perform the operations of steps S101 and S102. Specifically, after obtaining the user's request information, the management program parses it to obtain the master node identifier and the node parameter. Since the management program may manage multiple Kubernetes clusters at the same time, after the master node identifier is obtained, the Kubernetes cluster corresponding to the master node identifier is determined, and the data interface provided by the database is called to look up the to-be-deployed node identifier corresponding to the node parameter among the multiple node identifiers in the database. It is worth mentioning that, if multiple Kubernetes clusters exist, the database is shared by all of them.
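The lookup performed by the management program can be pictured as follows. This is a minimal sketch, assuming a simple nodes table that maps node parameters to node identifiers and an in-memory mapping from master node identifiers to cluster handles; neither name is dictated by the embodiment.

```python
import sqlite3

def resolve_request(request_info: dict, clusters: dict, db: sqlite3.Connection):
    """Map parsed request information to a cluster and a to-be-deployed node identifier.

    `clusters` maps master node identifiers to the cluster handles managed by the
    manager; the `nodes` table (node_param, node_id) stands in for the shared database.
    """
    master_id = request_info["master_node_id"]
    node_param = request_info["node_parameter"]

    cluster = clusters[master_id]  # the manager may manage several clusters at once
    row = db.execute(
        "SELECT node_id FROM nodes WHERE node_param = ?", (node_param,)
    ).fetchone()
    if row is None:
        raise LookupError(f"no node identifier recorded for parameter {node_param!r}")
    return cluster, row[0]
```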
S103: Call a control server to find the to-be-deployed node corresponding to the to-be-deployed node identifier, and deploy the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster.
After the to-be-deployed node identifier is obtained, the management program calls the control server, specifically the add interface through which the control server adds nodes. It is worth mentioning that the control server is an independent server that is not controlled by the Kubernetes cluster and is mainly arranged to control the cloud host virtual machines in the slave node resource pool. Each of the multiple cloud host virtual machines in the slave node resource pool has a corresponding node identifier; therefore, after obtaining the to-be-deployed node identifier, the control server looks up, in the slave node resource pool, the cloud host virtual machine corresponding to the to-be-deployed node identifier as the to-be-deployed node, and controls the to-be-deployed node so as to deploy it as a Kubernetes slave node of the Kubernetes cluster.
FIG. 6 shows an implementation architecture diagram of the method for automatically deploying a Kubernetes slave node. As shown in FIG. 6, on the premise that a management program has been set up to manage the Kubernetes cluster and that the slave node resource pool stores multiple cloud host virtual machines, the architecture reflects the entire process of automatically deploying a Kubernetes slave node. First, the management program receives the user's request information and parses out the master node identifier and the node parameter; it finds the corresponding Kubernetes cluster according to the master node identifier and looks up the corresponding to-be-deployed node identifier in the database according to the node parameter. The management program then calls the control server, which determines, from the multiple cloud host virtual machines in the slave node resource pool, the cloud host virtual machine identified by the to-be-deployed node identifier, deploys that cloud host virtual machine, and adds it to the Kubernetes cluster as a Kubernetes slave node.
It can be seen from the embodiment shown in FIG. 1 that, in this embodiment of the application, the user's request information, which contains the master node identifier and the node parameter, is obtained; the object to which the Kubernetes slave node is to be added, that is, the Kubernetes cluster corresponding to the master node identifier, is determined first; the to-be-deployed node identifier corresponding to the node parameter is then obtained from the database; and finally the control server finds the to-be-deployed node corresponding to the to-be-deployed node identifier and deploys it as a Kubernetes slave node of the Kubernetes cluster. This reduces manual operations and, by building an automated deployment, improves the efficiency of deploying Kubernetes slave nodes.
Referring to FIG. 2, FIG. 2 is an implementation flowchart of a method for automatically deploying a Kubernetes slave node provided by Embodiment 2 of the present application. Compared with the embodiment corresponding to FIG. 1, this embodiment refines the process after S102 into S201 to S203, detailed as follows:
S201: Obtain multiple available node identifiers of multiple available nodes in the slave node resource pool, and compare the to-be-deployed node identifier with the multiple available node identifiers.
In this embodiment of the application, for ease of explanation, only the case where the nodes stored in the slave node resource pool are cloud host virtual machines is described, but it should be understood that the slave node resource pool may also store other nodes, such as physical nodes, which may likewise be applied in the embodiments of the present application. The node parameter is contained in the request information, and the request information is generated automatically according to the user's selection of a cloud host virtual machine. The user may specify a cloud host virtual machine by writing code, but in this embodiment it is preferable that the user completes the selection by clicking on a cloud host virtual machine displayed on the aforementioned front-end page based on the Kubernetes cluster and the slave node resource pool, after which the backend automatically generates the corresponding request information. When the user selects a cloud host virtual machine, it is possible that the state of the cloud host virtual machine cannot be known; for example, the front-end page may not be updated in real time according to the states of the Kubernetes cluster and the slave node resource pool, so that by the time the request information is generated, the cloud host virtual machine identified by the to-be-deployed node identifier corresponding to the node parameter may already have joined the Kubernetes cluster corresponding to the master node identifier or another Kubernetes cluster. Therefore, after the to-be-deployed node identifier corresponding to the node parameter is obtained from the database, multiple available node identifiers of multiple available nodes in the slave node resource pool are obtained, and the to-be-deployed node identifier is compared with the multiple available node identifiers, where the multiple available nodes are the cloud host virtual machines in the slave node resource pool that have not joined any Kubernetes cluster and are in a running state.
Optionally, the multiple available node identifiers of the slave node resource pool are updated and stored in the database. In this embodiment, the node states in the slave node resource pool are updated, and the multiple available nodes and their corresponding available node identifiers are updated accordingly; depending on actual requirements, the update may be performed in real time or at a set time interval. After the update, the multiple available node identifiers are stored in the database, for example by creating in the database a status data table dedicated to storing available node identifiers. After the to-be-deployed node identifier is obtained from the database, the multiple available node identifiers are also obtained from the database, which reduces operational complexity and improves the accuracy of selecting the to-be-deployed node.
S202: If the to-be-deployed node identifier matches one of the multiple available node identifiers, take the node corresponding to the matched available node identifier as the to-be-deployed node.
After the multiple available node identifiers are obtained, the to-be-deployed node identifier is compared with them. If the to-be-deployed node identifier matches one of the multiple available node identifiers, the cloud host virtual machine corresponding to the to-be-deployed node identifier is proven to be available, and the node (cloud host virtual machine) corresponding to the matched available node identifier is taken as the to-be-deployed node.
S203: If the to-be-deployed node identifier matches none of the multiple available node identifiers, output an error prompt to the user.
If the to-be-deployed node identifier matches none of the multiple available node identifiers, the cloud host virtual machine corresponding to the to-be-deployed node identifier is proven to be unavailable; an error prompt is output to the user, and the operation of calling the control server to find the to-be-deployed node corresponding to the to-be-deployed node identifier, as well as the subsequent operations, are not performed. The error prompt may identify the cloud host virtual machine corresponding to the to-be-deployed node identifier, so that the user can conveniently inspect that cloud host virtual machine.
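A minimal sketch of this validity check is given below; it assumes the available node identifiers have already been read into a set (for example from the status data table mentioned above) and only illustrates the match and mismatch branches of S202 and S203.

```python
def pick_deploy_node(deploy_node_id: str, available_node_ids: set):
    """Return the node identifier to deploy, or None after reporting an error.

    `available_node_ids` is assumed to hold the identifiers of pool nodes that are
    running and have not yet joined any Kubernetes cluster.
    """
    if deploy_node_id in available_node_ids:
        # Comparison succeeded: the identified cloud host VM is still available.
        return deploy_node_id
    # Comparison failed: the VM may already have joined a cluster or be offline.
    print(f"error: node {deploy_node_id} is not available for deployment")
    return None
```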
It can be seen from the embodiment shown in FIG. 2 that, in this embodiment of the application, multiple available node identifiers of multiple available nodes in the slave node resource pool are obtained, and the to-be-deployed node identifier is compared with them to judge the validity of the to-be-deployed node identifier. If the to-be-deployed node identifier matches one of the multiple available node identifiers, it is valid, and the node corresponding to the matched available node identifier is taken as the to-be-deployed node; if it matches none of them, it is invalid, and an error prompt is output to the user. Verifying the validity of the to-be-deployed node identifier before determining the to-be-deployed node prevents a waste of resources, and once the validity verification passes, the node corresponding to the to-be-deployed node identifier is guaranteed to be in an available state.
Referring to FIG. 3, FIG. 3 is an implementation flowchart of a method for automatically deploying a Kubernetes slave node provided by Embodiment 3 of the present application. Compared with the embodiment corresponding to FIG. 1, this embodiment refines S103 into S301 to S302 on the basis that an agent client is installed on the to-be-deployed node, detailed as follows:
S301: Automatically send a deployment instruction corresponding to the request information to the control server.
An agent client (Agent) is pre-installed on the to-be-deployed node (cloud host virtual machine) corresponding to the to-be-deployed node identifier in the slave node resource pool and is used to operate that node. Since the to-be-deployed node needs to be deployed as a Kubernetes slave node of the Kubernetes cluster, after receiving the request information from the user, the management program of the Kubernetes cluster parses its content and calls the control interface of the control server to send a deployment instruction corresponding to the request information to the control server; preferably, the deployment instruction is sent in the HTTPS format.
S302: Establish a socket connection between the control server and the agent client, so that after receiving the deployment instruction, the control server causes the agent client to deploy the to-be-deployed node as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
In this embodiment of the application, a socket connection is established between the control server and the agent client. A socket essentially provides an endpoint of communication, so before communication takes place, the two parties first each create an endpoint, namely a control server socket and an agent client socket. Establishing the socket connection consists of three main steps: control server listening, agent client request, and connection confirmation. During control server listening, the control server socket is in a waiting state, monitoring connection requests on the network in real time rather than looking for a specific agent client socket. During the agent client request, the agent client socket installed on the to-be-deployed node first obtains the relevant information of the control server socket, including its address and port number, and then sends a connection request to the control server socket. During connection confirmation, the control server socket successfully detects the connection request from the agent client socket and, in response, sends its own relevant information to the agent client socket; finally, after the agent client socket confirms the information of the control server socket, the establishment of the socket connection is complete.
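The three-step establishment described above can be sketched with ordinary TCP sockets; the port number below is an assumption, and the two functions only illustrate the listening, request, and confirmation roles rather than the full control protocol.

```python
import socket

CONTROL_PORT = 8700  # assumed port; the real choice is implementation-specific

def control_server_listen():
    """Control server side: create a socket and wait for the agent's connection request."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", CONTROL_PORT))
    srv.listen()                      # listening state: not tied to any specific agent yet
    conn, agent_addr = srv.accept()   # connection confirmation completes the handshake
    return srv, conn, agent_addr

def agent_connect(control_host: str):
    """Agent client side: use the control server's address and port to request a connection."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((control_host, CONTROL_PORT))
    return cli
```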
After the socket connection between the control server and the agent client is established, the control server, based on the received deployment instruction, controls the agent client so that the agent client performs the deployment operation on the to-be-deployed node under it according to the deployment instruction and adds the to-be-deployed node to the Kubernetes cluster as a Kubernetes slave node; specifically, the agent client executes shell commands to carry out the deployment operation.
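How the agent client might turn a received deployment instruction into shell commands can be sketched as follows; the helper name and the example commands are placeholders introduced for illustration, since the actual commands depend on the target environment.

```python
import shlex
import subprocess

def run_deploy_commands(commands: list) -> None:
    """Execute the shell commands carried by a deployment instruction, stopping on failure."""
    for cmd in commands:
        print(f"running: {cmd}")
        subprocess.run(shlex.split(cmd), check=True)  # raise if any command fails

# Illustrative invocation only; these commands are placeholders, not the patented steps.
# run_deploy_commands(["systemctl daemon-reload", "systemctl start kubelet"])
```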
It can be seen from the embodiment shown in FIG. 3 that, in this embodiment of the application, a deployment instruction corresponding to the request information is automatically sent to the control server, and a socket connection is established between the control server and the agent client, so that after receiving the deployment instruction, the control server controls the agent client to deploy the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster according to the deployment instruction. Performing the deployment operation through the pre-installed agent client further improves the degree of automation of deploying Kubernetes slave nodes, and the socket connection established between the control server and the agent client improves the stability of the deployment operation.
Referring to FIG. 4, FIG. 4 is an implementation flowchart of a method for automatically deploying a Kubernetes slave node provided by Embodiment 4 of the present application. Compared with the embodiment corresponding to FIG. 3, this embodiment refines the process before S302 into S401 to S403 on the basis that the deployment instruction contains an authentication signature value, detailed as follows:
S401: Extract the parameters other than the authentication signature value from the deployment instruction, and calculate a calculated signature value of the parameters using a parameter signature algorithm.
In order to verify whether the control server has the authority to execute the deployment instruction, in this embodiment an authentication operation is performed on the control server. First, before sending the deployment instruction, the management program extracts the parameters in the deployment instruction, sorts them, and, based on the sorted parameters and a self-defined string token, calculates an authentication signature value using a parameter signature algorithm. Preferably, the parameter signature algorithm combines a Hash-based Message Authentication Code (HMAC) with the Secure Hash Algorithm (SHA), namely the HMAC-SHA1 signature authentication algorithm, and the calculated authentication signature value is the signature digest based on the sorted parameters and the token. After the authentication signature value is generated, it is added to the deployment instruction, and the deployment instruction with the signature added is sent to the control server. After receiving it, the control server extracts the parameters other than the authentication signature value, sorts them, and, based on the sorted parameters and the same string token, calculates a calculated signature value using the HMAC-SHA1 signature authentication algorithm.
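A minimal sketch of this signing and verification is shown below. The way the sorted parameters are concatenated into a canonical string is an assumption of the sketch; the embodiment only requires that both sides build it identically from the sorted parameters and the shared token.

```python
import hashlib
import hmac

def sign_parameters(params: dict, token: str) -> str:
    """Sort the instruction parameters, concatenate them, and return an HMAC-SHA1 digest."""
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))  # assumed canonical form
    return hmac.new(token.encode(), canonical.encode(), hashlib.sha1).hexdigest()

def verify_instruction(instruction: dict, token: str) -> bool:
    """Control-server side check: recompute the signature over everything except it."""
    received = instruction.get("signature", "")
    params = {k: v for k, v in instruction.items() if k != "signature"}
    expected = sign_parameters(params, token)
    return hmac.compare_digest(received, expected)
```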
S402: If the authentication signature value is equal to the calculated signature value, perform the operation of establishing the socket connection between the control server and the agent client.
After the calculation is completed, the control server compares the calculated signature value with the authentication signature value. If the two are equal, the control server passes authentication, and the operation of establishing the socket connection between the control server and the agent client continues to be performed.
S403: If the authentication signature value is not equal to the calculated signature value, stop performing the subsequent operations.
If the calculated signature value and the authentication signature value are not equal, authentication fails, which proves that the control server does not have the authority to execute the deployment instruction, and the subsequent operations are not performed.
It can be seen from the embodiment shown in FIG. 4 that this embodiment of the application performs authentication before establishing the socket connection, which improves the security of the connection.
Referring to FIG. 5, FIG. 5 is an implementation flowchart of a method for automatically deploying a Kubernetes slave node provided by Embodiment 5 of the present application. Compared with any of the embodiments corresponding to FIG. 1 to FIG. 4, this embodiment refines S103 into S501 to S503, detailed as follows:
S501: Obtain binary files related to the Kubernetes cluster from a file server.
After the to-be-deployed node is determined, its state is first checked to judge whether it meets the deployment conditions. After the to-be-deployed node is confirmed to be ready, in order to set up its deployment environment, binary files related to the Kubernetes cluster are first obtained from a file server, where the file server is a high-speed download server independent of the Kubernetes cluster and used to store binary files and various scripts.
S502: Obtain the slave node service files in the binary files, and obtain the node address of the to-be-deployed node.
Generally speaking, the binary files obtained from the file server include master node service files related to the Kubernetes master node and slave node service files related to Kubernetes slave nodes, where the master node service files include interface service component files, scheduling component files, and the like, and the slave node service files include Kubectl files, Kubelet files, Kube-Proxy files, and the like. Therefore, in this embodiment, the slave node service files are obtained from the binary files, and the node address of the to-be-deployed node is obtained, so that deployment can be carried out.
Optionally, when the binary files are stored in the file server, a master node identifier and a slave node identifier are set for the master node service files and the slave node service files, respectively. After the slave node identifier is set for the slave node service files and they are stored in the file server, when a to-be-deployed node subsequently needs to be deployed, the slave node service files are obtained directly from the file server according to the slave node identifier, without obtaining the full set of binary files containing both the master node service files and the slave node service files, which saves the extraction operation on the binary files and improves deployment efficiency.
S503: Automatically configure the slave node services based on the slave node service files and the node address.
After the slave node service files and the node address are obtained, the to-be-deployed node is cleaned up, mainly by clearing its original configuration. After the cleanup is completed, certificates related to the Kubernetes cluster are generated, mainly including a Certification Authority (CA) certificate, a kubernetes certificate, an admin certificate, and a proxy certificate; when these certificates are generated, the key files corresponding to them are also generated, and the key files are subsequently distributed, with the specific distribution process described later. After the cleanup of the to-be-deployed node is completed and the certificates are generated, the flannel service is first configured and started based on the slave node service files. Flannel is a network planning service for Kubernetes that enables the docker containers created on different Kubernetes slave nodes in the Kubernetes cluster to have virtual IP addresses that are unique across the entire Kubernetes cluster; on the other hand, flannel is essentially an overlay network, that is, it encapsulates Transmission Control Protocol (TCP) packets inside another kind of network packet for routing, forwarding, and communication. After the configuration of the flannel service is completed according to the node address and the flannel service is started, the container is configured, its startup parameters are modified, and the container is started.
In the Kubernetes cluster, to facilitate control of containers, the Kubectl service, the Kubelet service, and the Kube-Proxy service are further configured automatically. Kubectl is the console tool for Kubernetes cluster management; it provides the user with a large number of commands for inspecting the Kubernetes cluster. Kubelet is the container management tool on a Kubernetes slave node and is used to handle the tasks that the Kubernetes master node delivers to that Kubernetes slave node; Kubelet registers the information of the Kubernetes slave node with the interface service component of the Kubernetes master node, periodically sends the resource usage of the Kubernetes slave node to the Kubernetes master node, and monitors the containers and node resources inside the Kubernetes slave node. Kube-Proxy is the entry component of Kubernetes services and is used to manage the access entry of Kubernetes services. Based on the slave node service files and the node address of the to-be-deployed node, the Kubectl service, the Kubelet service, and the Kube-Proxy service are deployed and started in sequence. It is worth mentioning that the key files corresponding to the CA certificate and the admin certificate are distributed to the Kubectl service, the key file corresponding to the CA certificate is distributed to the Kubelet service, and the key files corresponding to the CA certificate and the proxy certificate are distributed to the Kube-Proxy service. Finally, when the Kube-Proxy service has been deployed successfully and started, the to-be-deployed node has been successfully deployed as a Kubernetes slave node.
Optionally, a retry and record mechanism is set up for the deployment process of steps S501 to S503. FIG. 7 shows an implementation flowchart of deploying the to-be-deployed node after the retry and record mechanism is set up, where x, y, and z are all integers greater than zero. As shown in FIG. 7, the retry and record mechanism and a retry threshold are set for each of the following steps: checking the state of the to-be-deployed node, obtaining the binary files, cleaning up the to-be-deployed node, generating the certificate-related files, configuring and starting the flannel service, installing and starting the container, configuring and starting the Kubectl service, configuring and starting the Kubelet service, and configuring and starting the Kube-Proxy service. When a step succeeds, the next step is performed. When an error occurs in a step, it is judged whether this is the x-th attempt; if the x-th attempt has not yet been reached, the step is carried out again; if it is the x-th attempt, the failure of the current step is recorded, the failure of deploying the to-be-deployed node at that step is recorded, and the cause of the error may also be recorded. For the step of checking the state of the to-be-deployed node, since the to-be-deployed node may still be initializing, in order to improve the effectiveness of the deployment process, a check-count threshold y is set, with y greater than x, and when the number of times the to-be-deployed node has been found not ready has not reached the check-count threshold y, a waiting time is set; in this embodiment, after waiting for z minutes, the step of checking the state of the to-be-deployed node is entered again. After all steps pass, the deployment of the to-be-deployed node is complete. The above method improves the fault tolerance of the deployment process through the retry and record mechanism, and recording the failed step makes it convenient for developers to locate problems. It is worth mentioning that the flowchart shown in FIG. 7 is only an example; in an actual application scenario, the retry threshold and the waiting time of each step can be set freely according to the actual situation.
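The per-step retry and record mechanism might look like the following sketch, where max_attempts plays the role of the threshold x and failed steps are recorded together with the error cause; the readiness check with threshold y and wait time z would wrap the first step in the same way. The helper name and the fixed one-second pause are assumptions made for illustration.

```python
import time

def run_step_with_retry(name: str, step, max_attempts: int, failures: list) -> bool:
    """Run one deployment step up to `max_attempts` times (the threshold x).

    `step` is any callable that raises an exception on error. On the x-th consecutive
    failure the step name and the cause are recorded so developers can locate the problem.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            step()
            return True
        except Exception as err:
            if attempt == max_attempts:
                failures.append((name, repr(err)))  # record the failed step and its cause
                return False
            time.sleep(1)                           # brief pause before retrying
    return False
```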
Corresponding to the method for automatically deploying a Kubernetes slave node described in the above embodiments, FIG. 8 shows a structural block diagram of an apparatus for automatically deploying a Kubernetes slave node provided by an embodiment of the present application. Referring to FIG. 8, the apparatus includes:
an obtaining unit 81, configured to obtain request information of a user, where the request information includes a master node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
a searching unit 82, configured to find the Kubernetes cluster corresponding to the master node identifier, and obtain, from a database, the to-be-deployed node identifier corresponding to the node parameter;
a deployment unit 83, configured to call a control server to find the to-be-deployed node corresponding to the to-be-deployed node identifier, and deploy the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster.
Optionally, the searching unit 82 further includes:
an identifier obtaining unit, configured to obtain multiple available node identifiers of multiple available nodes in the slave node resource pool, and compare the to-be-deployed node identifier with the multiple available node identifiers;
a node determining unit, configured to, if the to-be-deployed node identifier matches one of the multiple available node identifiers, take the node corresponding to the matched available node identifier as the to-be-deployed node;
an output unit, configured to, if the to-be-deployed node identifier matches none of the multiple available node identifiers, output an error prompt to the user.
Optionally, if an agent client is installed on the to-be-deployed node, the deployment unit 83 includes:
an instruction sending unit, configured to automatically send a deployment instruction corresponding to the request information to the control server;
a connection establishing unit, configured to establish a socket connection between the control server and the agent client, so that after receiving the deployment instruction, the control server causes the agent client to deploy the to-be-deployed node as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
Optionally, if the deployment instruction contains an authentication signature value, the connection establishing unit further includes:
a calculating unit, configured to extract the parameters other than the authentication signature value from the deployment instruction, and calculate a calculated signature value of the parameters using a parameter signature algorithm;
an execution unit, configured to, if the authentication signature value is equal to the calculated signature value, perform the operation of establishing the socket connection between the control server and the agent client;
a stopping unit, configured to, if the authentication signature value is not equal to the calculated signature value, stop performing the subsequent operations.
Optionally, the deployment unit 83 includes:
a file obtaining unit, configured to obtain binary files related to the Kubernetes cluster from a file server;
an address obtaining unit, configured to obtain the slave node service files in the binary files, and obtain the node address of the to-be-deployed node;
a service configuration unit, configured to automatically configure the slave node services based on the slave node service files and the node address.
FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 9, the terminal device 9 of this embodiment includes a processor 90 and a memory 91, where the memory 91 stores computer readable instructions 92 executable on the processor 90, for example a program for deploying a Kubernetes slave node. When executing the computer readable instructions 92, the processor 90 implements the steps in the above method embodiments for automatically deploying a Kubernetes slave node, for example steps S101 to S103 shown in FIG. 1; alternatively, when executing the computer readable instructions 92, the processor 90 implements the functions of the units in the above apparatus embodiments, for example the functions of units 81 to 83 shown in FIG. 8.
Exemplarily, the computer readable instructions 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete the present application. The one or more modules/units may be a series of computer readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer readable instructions 92 in the terminal device 9. For example, the computer readable instructions 92 may be divided into an obtaining unit, a searching unit, and a deployment unit, whose specific functions are as described above.
The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will understand that FIG. 9 is merely an example of the terminal device 9 and does not constitute a limitation of the terminal device 9; the terminal device may include more or fewer components than shown, or combine certain components, or include different components. For example, the terminal device may further include input/output devices, network access devices, buses, and the like.
The processor 90 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the terminal device 9, for example a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 9. Further, the memory 91 may include both the internal storage unit and an external storage device of the terminal device 9. The memory 91 is used to store the computer readable instructions and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is about to be output.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A method for automatically deploying a Kubernetes slave node, comprising:
    obtaining request information of a user, wherein the request information comprises a master node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
    finding a Kubernetes cluster corresponding to the master node identifier, and obtaining, from a database, a to-be-deployed node identifier corresponding to the node parameter;
    calling a control server to find a to-be-deployed node corresponding to the to-be-deployed node identifier, and deploying the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster.
  2. The method according to claim 1, wherein after the obtaining, from the database, of the to-be-deployed node identifier corresponding to the node parameter, the method further comprises:
    obtaining multiple available node identifiers of multiple available nodes in a slave node resource pool, and comparing the to-be-deployed node identifier with the multiple available node identifiers;
    if the to-be-deployed node identifier matches one of the multiple available node identifiers, taking the node corresponding to the matched available node identifier as the to-be-deployed node;
    if the to-be-deployed node identifier matches none of the multiple available node identifiers, outputting an error prompt to the user.
  3. The method according to claim 1, wherein if an agent client is installed on the to-be-deployed node, the calling of a control server to find the to-be-deployed node corresponding to the to-be-deployed node identifier and the deploying of the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster comprise:
    automatically sending a deployment instruction corresponding to the request information to the control server;
    establishing a socket connection between the control server and the agent client, so that after receiving the deployment instruction, the control server causes the agent client to deploy the to-be-deployed node as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
  4. The method according to claim 3, wherein if the deployment instruction contains an authentication signature value, before the establishing of the socket connection between the control server and the agent client, the method further comprises:
    extracting parameters other than the authentication signature value from the deployment instruction, and calculating a calculated signature value of the parameters using a parameter signature algorithm;
    if the authentication signature value is equal to the calculated signature value, performing the operation of establishing the socket connection between the control server and the agent client;
    if the authentication signature value is not equal to the calculated signature value, stopping performing the subsequent operations.
  5. The method according to any one of claims 1 to 4, wherein the deploying of the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster comprises:
    obtaining binary files related to the Kubernetes cluster from a file server;
    obtaining slave node service files in the binary files, and obtaining a node address of the to-be-deployed node;
    automatically configuring slave node services based on the slave node service files and the node address.
  6. An apparatus for automatically deploying a Kubernetes slave node, comprising:
    an obtaining unit, configured to obtain request information of a user, wherein the request information comprises a master node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
    a searching unit, configured to find a Kubernetes cluster corresponding to the master node identifier, and obtain, from a database, a to-be-deployed node identifier corresponding to the node parameter;
    a deployment unit, configured to call a control server to find a to-be-deployed node corresponding to the to-be-deployed node identifier, and deploy the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster.
  7. The apparatus according to claim 6, wherein the searching unit further comprises:
    an identifier obtaining unit, configured to obtain multiple available node identifiers of multiple available nodes in a slave node resource pool, and compare the to-be-deployed node identifier with the multiple available node identifiers;
    a node determining unit, configured to, if the to-be-deployed node identifier matches one of the multiple available node identifiers, take the node corresponding to the matched available node identifier as the to-be-deployed node;
    an output unit, configured to, if the to-be-deployed node identifier matches none of the multiple available node identifiers, output an error prompt to the user.
  8. The apparatus according to claim 6, wherein if an agent client is installed on the to-be-deployed node, the deployment unit comprises:
    an instruction sending unit, configured to automatically send a deployment instruction corresponding to the request information to the control server;
    a connection establishing unit, configured to establish a socket connection between the control server and the agent client, so that after receiving the deployment instruction, the control server causes the agent client to deploy the to-be-deployed node as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
  9. The apparatus according to claim 7, wherein if the deployment instruction contains an authentication signature value, the connection establishing unit further comprises:
    a calculating unit, configured to extract parameters other than the authentication signature value from the deployment instruction, and calculate a calculated signature value of the parameters using a parameter signature algorithm;
    an execution unit, configured to, if the authentication signature value is equal to the calculated signature value, perform the operation of establishing the socket connection between the control server and the agent client;
    a stopping unit, configured to, if the authentication signature value is not equal to the calculated signature value, stop performing the subsequent operations.
  10. The apparatus according to any one of claims 6 to 9, wherein the deployment unit comprises:
    a file obtaining unit, configured to obtain binary files related to the Kubernetes cluster from a file server;
    an address obtaining unit, configured to obtain slave node service files in the binary files, and obtain a node address of the to-be-deployed node;
    a service configuration unit, configured to automatically configure slave node services based on the slave node service files and the node address.
  11. A terminal device, comprising a memory and a processor, wherein the memory stores computer readable instructions executable on the processor, and when executing the computer readable instructions, the processor implements the following steps:
    obtaining request information of a user, wherein the request information comprises a master node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
    finding a Kubernetes cluster corresponding to the master node identifier, and obtaining, from a database, a to-be-deployed node identifier corresponding to the node parameter;
    calling a control server to find a to-be-deployed node corresponding to the to-be-deployed node identifier, and deploying the to-be-deployed node as a Kubernetes slave node of the Kubernetes cluster.
  12. The terminal device according to claim 11, wherein after the obtaining, from the database, of the to-be-deployed node identifier corresponding to the node parameter, the following steps are further implemented:
    obtaining multiple available node identifiers of multiple available nodes in a slave node resource pool, and comparing the to-be-deployed node identifier with the multiple available node identifiers;
    if the to-be-deployed node identifier matches one of the multiple available node identifiers, taking the node corresponding to the matched available node identifier as the to-be-deployed node;
    if the to-be-deployed node identifier matches none of the multiple available node identifiers, outputting an error prompt to the user.
  13. The terminal device according to claim 11, wherein, if a proxy client is installed on the node to be deployed, invoking the control server to find the node to be deployed corresponding to the node identifier to be deployed and deploying the node to be deployed as the Kubernetes slave node of the Kubernetes cluster comprises:
    automatically sending a deployment instruction corresponding to the request information to the control server;
    establishing a socket connection between the control server and the proxy client, so that after receiving the deployment instruction, the control server causes the proxy client to deploy the node to be deployed as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
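Claim 13 splits the work between a control server and a proxy client (agent) already installed on the node: a deployment instruction is sent to the control server, and a socket connection carries it to the agent, which performs the local deployment. The Go sketch below models only the control-server side of that socket exchange; the TCP port, message format, and agent reply convention are assumptions, not claimed details.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// forwardInstruction opens a socket connection to the proxy client on the
// node to be deployed and forwards the deployment instruction; the agent is
// assumed to listen on TCP port 9901 and to reply with a single status line.
func forwardInstruction(agentAddr, instruction string) (string, error) {
	conn, err := net.Dial("tcp", agentAddr)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	if _, err := fmt.Fprintln(conn, instruction); err != nil {
		return "", err
	}
	return bufio.NewReader(conn).ReadString('\n')
}

func main() {
	// Placeholder deployment instruction; a real instruction would also carry
	// the cluster identifier, node address and an authentication signature value.
	status, err := forwardInstruction("192.0.2.10:9901", `{"action":"join","cluster":"cluster-01"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("agent reported:", status)
}
```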
  14. The terminal device according to claim 13, wherein, if the deployment instruction contains an authentication signature value, before establishing the socket connection between the control server and the proxy client, the steps further comprise:
    extracting the parameters other than the authentication signature value from the deployment instruction, and calculating a calculated signature value of the parameters by using a parameter signature algorithm;
    if the authentication signature value is equal to the calculated signature value, performing the operation of establishing the socket connection between the control server and the proxy client;
    if the authentication signature value is not equal to the calculated signature value, stopping performing subsequent operations.
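The check in claim 14 is a standard request-signing pattern: recompute a signature over every parameter except the carried signature value, then compare the two before opening the socket connection. The sketch below uses HMAC-SHA256 over the sorted parameters as one plausible "parameter signature algorithm"; the claim itself does not fix the algorithm, the key handling, or the parameter encoding, so these are assumptions.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// computeSignature is one possible parameter signature algorithm: sort the
// parameters (excluding the signature itself), join them, and HMAC the result.
func computeSignature(params map[string]string, key []byte) string {
	keys := make([]string, 0, len(params))
	for k := range params {
		if k == "sign" { // skip the carried authentication signature value
			continue
		}
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var sb strings.Builder
	for _, k := range keys {
		sb.WriteString(k + "=" + params[k] + "&")
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(sb.String()))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	key := []byte("shared-secret") // placeholder shared key
	instruction := map[string]string{
		"cluster": "cluster-01",
		"node":    "node-102",
	}
	instruction["sign"] = computeSignature(instruction, key) // sender side

	// Receiver side: recompute and compare before establishing the socket connection.
	if hmac.Equal([]byte(instruction["sign"]), []byte(computeSignature(instruction, key))) {
		fmt.Println("signatures match: proceed to establish the socket connection")
	} else {
		fmt.Println("signatures differ: stop performing subsequent operations")
	}
}
```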
  15. The terminal device according to any one of claims 11 to 14, wherein deploying the node to be deployed as the Kubernetes slave node of the Kubernetes cluster comprises:
    obtaining binary files related to the Kubernetes cluster from a file server;
    obtaining a slave node service file from the binary files, and obtaining a node address of the node to be deployed;
    automatically configuring a slave node service based on the slave node service file and the node address.
  16. A computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by at least one processor, implement the following steps:
    obtaining request information of a user, wherein the request information comprises a master node identifier and a node parameter, and the node parameter is used to indicate a physical node or a virtual machine node;
    finding a Kubernetes cluster corresponding to the master node identifier, and obtaining, from a database, a node identifier to be deployed that corresponds to the node parameter;
    invoking a control server to find a node to be deployed that corresponds to the node identifier to be deployed, and deploying the node to be deployed as a Kubernetes slave node of the Kubernetes cluster.
  17. The computer readable storage medium according to claim 16, wherein the computer readable instructions, when executed by the at least one processor, further implement the following steps:
    obtaining a plurality of available node identifiers of a plurality of available nodes in a slave node resource pool, and comparing the node identifier to be deployed with the plurality of available node identifiers;
    if the node identifier to be deployed matches one of the plurality of available node identifiers, taking the node corresponding to the matched available node identifier as the node to be deployed;
    if the node identifier to be deployed matches none of the plurality of available node identifiers, outputting an error prompt to the user.
  18. The computer readable storage medium according to claim 16, wherein, if a proxy client is installed on the node to be deployed, the computer readable instructions, when executed by the at least one processor, implement the following steps:
    automatically sending a deployment instruction corresponding to the request information to the control server;
    establishing a socket connection between the control server and the proxy client, so that after receiving the deployment instruction, the control server causes the proxy client to deploy the node to be deployed as the Kubernetes slave node of the Kubernetes cluster according to the deployment instruction.
  19. The computer readable storage medium according to claim 18, wherein, if the deployment instruction contains an authentication signature value, the computer readable instructions, when executed by the at least one processor, further implement the following steps:
    extracting the parameters other than the authentication signature value from the deployment instruction, and calculating a calculated signature value of the parameters by using a parameter signature algorithm;
    if the authentication signature value is equal to the calculated signature value, performing the operation of establishing the socket connection between the control server and the proxy client;
    if the authentication signature value is not equal to the calculated signature value, stopping performing subsequent operations.
  20. The computer readable storage medium according to any one of claims 16 to 19, wherein the computer readable instructions, when executed by the at least one processor, implement the following steps:
    obtaining binary files related to the Kubernetes cluster from a file server;
    obtaining a slave node service file from the binary files, and obtaining a node address of the node to be deployed;
    automatically configuring a slave node service based on the slave node service file and the node address.
PCT/CN2018/097564 2018-03-30 2018-07-27 Method for automatically deploying kubernetes worker node, device, terminal apparatus, and readable storage medium WO2019184164A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810277483.8 2018-03-30
CN201810277483.8A CN108549580B (en) 2018-03-30 2018-03-30 Method for automatically deploying Kubernets slave nodes and terminal equipment

Publications (1)

Publication Number Publication Date
WO2019184164A1 (en)

Family

ID=63517533

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097564 WO2019184164A1 (en) 2018-03-30 2018-07-27 Method for automatically deploying kubernetes worker node, device, terminal apparatus, and readable storage medium

Country Status (2)

Country Link
CN (1) CN108549580B (en)
WO (1) WO2019184164A1 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109445904B (en) * 2018-09-30 2020-08-04 咪咕文化科技有限公司 Information processing method and device and computer storage medium
CN109462508B (en) * 2018-11-30 2021-06-01 北京百度网讯科技有限公司 Node deployment method, device and storage medium
CN111865630B (en) * 2019-04-26 2023-03-24 北京达佳互联信息技术有限公司 Topological information acquisition method, device, terminal and storage medium
CN110196843B (en) * 2019-05-17 2023-08-08 腾讯科技(深圳)有限公司 File distribution method based on container cluster and container cluster
WO2021003729A1 (en) * 2019-07-11 2021-01-14 深圳市大疆创新科技有限公司 Configuration method, physical device, server and computer readable storage medium
CN110531987A (en) * 2019-07-30 2019-12-03 平安科技(深圳)有限公司 Management method, device and computer readable storage medium based on Kubernetes cluster
CN110798375B (en) * 2019-09-29 2021-10-01 烽火通信科技股份有限公司 Monitoring method, system and terminal equipment for enhancing high availability of container cluster
CN110912827B (en) * 2019-11-22 2021-08-13 北京金山云网络技术有限公司 Route updating method and user cluster
CN112968919B (en) * 2019-12-12 2023-05-30 上海欣诺通信技术股份有限公司 Data processing method, device, equipment and storage medium
CN111259072B (en) * 2020-01-08 2023-11-14 广州虎牙科技有限公司 Data synchronization method, device, electronic equipment and computer readable storage medium
CN113918273B (en) * 2020-07-10 2023-07-18 华为技术有限公司 Method and device for creating container group
CN112162857A (en) * 2020-09-24 2021-01-01 珠海格力电器股份有限公司 Cluster server node management system
CN112241314B (en) * 2020-10-29 2022-08-09 浪潮通用软件有限公司 Multi-Kubernetes cluster management method and device and readable medium
CN114443059A (en) * 2020-10-30 2022-05-06 中国联合网络通信集团有限公司 Kubernets cluster deployment method, device and equipment
CN112199167A (en) * 2020-11-05 2021-01-08 成都精灵云科技有限公司 High-availability method for multi-machine rapid one-key deployment based on battlefield environment
CN114579250B (en) * 2020-12-02 2024-08-06 腾讯科技(深圳)有限公司 Method, device and storage medium for constructing virtual cluster
CN114650293B (en) * 2020-12-17 2024-02-23 中移(苏州)软件技术有限公司 Method, device, terminal and computer storage medium for flow diversion
CN112286560B (en) * 2020-12-30 2021-04-23 博智安全科技股份有限公司 Method and system for automatically deploying and upgrading distributed storage cluster
CN112637037B (en) * 2021-03-10 2021-06-18 北京瑞莱智慧科技有限公司 Cross-region container communication system, method, storage medium and computer equipment
CN113127150B (en) * 2021-03-18 2023-10-17 同盾控股有限公司 Rapid deployment method and device of cloud primary system, electronic equipment and storage medium
CN113138717B (en) * 2021-04-09 2022-11-11 锐捷网络股份有限公司 Node deployment method, device and storage medium
CN113064600B (en) * 2021-04-20 2022-12-02 支付宝(杭州)信息技术有限公司 Method and device for deploying application
CN113347049B (en) * 2021-08-04 2021-12-07 统信软件技术有限公司 Server cluster deployment method and device, computing equipment and storage medium
CN114124703B (en) * 2021-11-26 2024-01-23 浪潮卓数大数据产业发展有限公司 Multi-environment service configuration method, equipment and medium based on Kubernetes
CN116340416A (en) * 2021-12-22 2023-06-27 中兴通讯股份有限公司 Database deployment method, database processing method, related equipment and storage medium
CN114936898B (en) * 2022-05-16 2023-04-18 广州高专资讯科技有限公司 Management system, method, equipment and storage medium based on spot supply
CN115396437B (en) * 2022-08-24 2023-06-13 中电金信软件有限公司 Cluster building method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105393220A (en) * 2013-05-15 2016-03-09 思杰系统有限公司 Systems and methods for deploying a spotted virtual server in a cluster system
CN106850621A (en) * 2017-02-07 2017-06-13 南京云创大数据科技股份有限公司 A kind of method based on container cloud fast construction Hadoop clusters
US20170346683A1 (en) * 2016-05-24 2017-11-30 Futurewei Technologies, Inc. Automated Generation of Deployment Workflows for Cloud Platforms Based on Logical Stacks
CN107426034A (en) * 2017-08-18 2017-12-01 国网山东省电力公司信息通信公司 A kind of extensive container scheduling system and method based on cloud platform

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107645396B (en) * 2016-07-21 2020-11-13 北京金山云网络技术有限公司 Cluster capacity expansion method and device
CN106506233A (en) * 2016-12-01 2017-03-15 郑州云海信息技术有限公司 A kind of automatic deployment Hadoop clusters and the method for flexible working node
CN107766157A (en) * 2017-11-02 2018-03-06 山东浪潮云服务信息科技有限公司 Distributed container cluster framework implementation method based on domestic CPU and OS

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111193783A (en) * 2019-12-19 2020-05-22 新浪网技术(中国)有限公司 Service access processing method and device
CN113360160A (en) * 2020-03-05 2021-09-07 北京沃东天骏信息技术有限公司 Method and device for deploying application, electronic equipment and storage medium
CN111930466A (en) * 2020-05-28 2020-11-13 武汉达梦数据库有限公司 Kubernetes-based data synchronization environment deployment method and device
CN111651275A (en) * 2020-06-04 2020-09-11 山东汇贸电子口岸有限公司 MySQL cluster automatic deployment system and method
CN111782766A (en) * 2020-06-30 2020-10-16 福建健康之路信息技术有限公司 Method and system for retrieving all resources in Kubernetes cluster through keywords
CN114006815A (en) * 2020-07-13 2022-02-01 中移(苏州)软件技术有限公司 Automatic deployment method and device for cloud platform nodes, nodes and storage medium
CN114006815B (en) * 2020-07-13 2024-01-26 中移(苏州)软件技术有限公司 Automatic deployment method and device for cloud platform nodes, nodes and storage medium
CN113965582A (en) * 2020-07-20 2022-01-21 中移(苏州)软件技术有限公司 Mode conversion method and system, and storage medium
CN113965582B (en) * 2020-07-20 2024-04-09 中移(苏州)软件技术有限公司 Mode conversion method and system, and storage medium
US12026561B2 (en) 2020-08-27 2024-07-02 Cisco Technology, Inc. Dynamic authentication and authorization of a containerized process
CN112099911B (en) * 2020-08-28 2024-02-13 中国—东盟信息港股份有限公司 Method for constructing dynamic resource access controller based on Kubernetes
CN112099911A (en) * 2020-08-28 2020-12-18 中国—东盟信息港股份有限公司 Method for constructing dynamic resource access controller based on Kubernetes
CN112148429A (en) * 2020-09-22 2020-12-29 江苏银承网络科技股份有限公司 Information processing method and device for managing container arrangement engine cluster
CN112148429B (en) * 2020-09-22 2024-05-28 江苏银承网络科技股份有限公司 Information processing method and device for managing container orchestration engine cluster
CN114760292B (en) * 2020-12-25 2023-07-21 广东飞企互联科技股份有限公司 Service discovery and registration-oriented method and device
CN114760292A (en) * 2020-12-25 2022-07-15 广东飞企互联科技股份有限公司 Service discovery and registration oriented method and device
CN114697985A (en) * 2020-12-28 2022-07-01 中国联合网络通信集团有限公司 Wireless operation and maintenance system registration method and device, electronic equipment and storage medium
CN113110917B (en) * 2021-04-28 2024-03-15 北京链道科技有限公司 Data discovery and security access method based on Kubernetes
CN113110917A (en) * 2021-04-28 2021-07-13 北京链道科技有限公司 Data discovery and security access method based on Kubernetes
CN113190239A (en) * 2021-05-20 2021-07-30 洛阳轴承研究所有限公司 Method for rapid deployment of industrial application
CN113190239B (en) * 2021-05-20 2024-05-24 洛阳轴承研究所有限公司 Method for rapidly deploying industrial application
CN113377346A (en) * 2021-06-10 2021-09-10 北京滴普科技有限公司 Integrated environment building method and device, electronic equipment and storage medium
CN113778331A (en) * 2021-08-12 2021-12-10 联想凌拓科技有限公司 Data processing method, main node and storage medium
CN113778331B (en) * 2021-08-12 2024-06-07 联想凌拓科技有限公司 Data processing method, master node and storage medium
US20230060053A1 (en) * 2021-08-20 2023-02-23 Beijing Baidu Netcom Science Technology Co., Ltd. Method and apparatus of deploying a cluster, and storage medium
CN114124903A (en) * 2021-11-15 2022-03-01 新华三大数据技术有限公司 Virtual IP address management method and device
CN114138754A (en) * 2021-12-09 2022-03-04 安超云软件有限公司 Software deployment method and device based on Kubernetes platform
WO2023193671A1 (en) * 2022-04-06 2023-10-12 阿里巴巴(中国)有限公司 Data transmission method and system
CN114884880A (en) * 2022-04-06 2022-08-09 阿里巴巴(中国)有限公司 Data transmission method and system
CN114884880B (en) * 2022-04-06 2024-03-08 阿里巴巴(中国)有限公司 Data transmission method and system

Also Published As

Publication number Publication date
CN108549580B (en) 2023-04-14
CN108549580A (en) 2018-09-18

Similar Documents

Publication Publication Date Title
WO2019184164A1 (en) Method for automatically deploying kubernetes worker node, device, terminal apparatus, and readable storage medium
US10735329B2 (en) Container communication method and system for parallel applications
US10700947B2 (en) Life cycle management method and device for network service
WO2019184116A1 (en) Method and device for automatically building kubernetes main node, terminal device and computer-readable storage medium
US10353728B2 (en) Method, system and device for managing virtual machine software in cloud environment
CN104734931B (en) Link establishing method and device between a kind of virtual network function
US9276953B2 (en) Method and apparatus to detect and block unauthorized MAC address by virtual machine aware network switches
CN104718723A (en) A framework for networking and security services in virtual networks
CN105656646A (en) Deploying method and device for virtual network element
KR20060051932A (en) Updating software while it is running
CN106911648B (en) Environment isolation method and equipment
WO2017066931A1 (en) Method and device for managing certificate in network function virtualization architecture
CN110855488B (en) Virtual machine access method and device
US11444785B2 (en) Establishment of trusted communication with container-based services
WO2017185992A1 (en) Method and apparatus for transmitting request message
CN112035062B (en) Migration method of local storage of cloud computing, computer equipment and storage medium
CN110890987A (en) Method, device, equipment and system for automatically creating cluster
CN113923023A (en) Authority configuration and data processing method, device, electronic equipment and medium
CN113595832A (en) Network data acquisition system and method
CN104484221A (en) Method for taking over existing vCenter cluster by CloudStack
CN116680045A (en) Distributed multi-device data acquisition method and system
JP2023040221A (en) Provider network service extensions
CN106844058B (en) Management method and device for virtualized resources
KR20150137766A (en) System and method for creating stack of virtual machine
US20240069981A1 (en) Managing events for services of a cloud platform in a hybrid cloud environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18911681

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18911681

Country of ref document: EP

Kind code of ref document: A1