CN114938375A - Container group updating device and container group updating method

Info

Publication number: CN114938375A (application granted and published as CN114938375B)
Application number: CN202210528978.XA
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: container group, node, group, container, load balancing
Inventors: 杨彦存, 赵贝, 矫恒浩
Original assignee: Juhaokan Technology Co Ltd
Current assignee: Juhaokan Technology Co Ltd
Application filed by Juhaokan Technology Co Ltd
Legal status: Granted; Active

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
        • H04L 67/1004: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers; server selection for load balancing
        • H04L 41/082: Configuration management of networks or network elements; configuration setting where the condition triggering a change of settings is an update or upgrade of network functionality
        • H04L 67/1044: Peer-to-peer [P2P] networks; group management mechanisms
        • H04L 67/1074: Peer-to-peer [P2P] networks for supporting data block transmission mechanisms

Abstract

The disclosure relates to a container group updating device and a container group updating method, and relates to the technical field of the Internet. The container group updating device includes a controller configured to: create a first container group on a first node, where the first container group and a second container group already existing in a server cluster are container groups of different versions for a target service; after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and acquire load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, and the second container group is a container group running on a second node; acquire the traffic distribution state of the second container group from the load balancing information; and delete the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group.

Description

Container group updating device and container group updating method
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a container group update apparatus and a container group update method.
Background
The Kubernetes system, K8s for short, is an open-source container orchestration engine that supports automated deployment and large-scale, scalable, containerized application management; it is used to manage containerized applications on multiple hosts in a cloud platform. When a Deployment rolling upgrade is executed in the K8s system, a new container group (Pod) is created first and the old Pod is then deleted. In the process of deleting the old Pod there are two asynchronous operations: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod itself. Because the two operations are executed asynchronously, their order cannot be controlled, so the old Pod may be deleted before its route forwarding information has been removed from every node; when this happens, network traffic may still be forwarded to the already-deleted old Pod, and a connection timeout problem may occur.
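The uncontrolled ordering can be pictured with a minimal, self-contained Go sketch (not taken from the K8s code base; all names are illustrative): two goroutines stand in for the two asynchronous deletion operations, and nothing constrains which of them finishes first.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

// Minimal illustration of the race in the old rolling-upgrade flow:
// deleting the old Pod and deleting its route forwarding information
// are started concurrently, so either one may finish first.
func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // operation 1: remove the old Pod's route forwarding info from every node
		defer wg.Done()
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		fmt.Println("route forwarding information of the old Pod deleted")
	}()

	go func() { // operation 2: delete the old Pod itself
		defer wg.Done()
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
		fmt.Println("old Pod deleted")
	}()

	wg.Wait()
	// If "old Pod deleted" prints first, traffic may still be forwarded to a
	// Pod that no longer exists, which is the connection-timeout case.
}
```

Whichever message is printed last on a given run is the operation that happened to finish later; the problematic runs are exactly those where the old Pod is deleted before its route forwarding information.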
Disclosure of Invention
In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a container group update apparatus and a container group update method, which can ensure that the old Pod is deleted only after its route forwarding information has been deleted first.
In order to achieve the above purpose, the technical solutions provided by the embodiments of the present disclosure are as follows:
In a first aspect, a container group update apparatus is provided, which includes:
a controller configured to: create a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two same or different nodes in the server cluster;
acquire the traffic distribution state of the second container group from the load balancing information;
and delete the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group.
As an optional implementation manner of the embodiment of the present disclosure, the load balancing information includes traffic distribution information corresponding to the IP addresses of the first container group and the second container group, respectively;
The controller is specifically configured to:
allocate, when the first container group is created on the first node, a first IP address to the first container group as the route forwarding information of the first container group, and store the route forwarding information of the first container group on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
after the first container group is created, generate container group change information to trigger deletion of the second IP address of the second container group from each node in the server cluster, and acquire the traffic distribution state corresponding to the second IP address from the load balancing information.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to:
and under the condition of receiving the indication message of deleting the second container group, acquiring the load balancing information of the group to which the second container group belongs.
As an optional implementation manner of the embodiment of the present disclosure, the second node is a working node in the server cluster, and the apparatus for updating a container group further includes:
a communicator configured to: realizing information interaction between the control node and the working node;
the controller is specifically configured to: sending, by the communicator, the indication message from the control node to the second node; and controlling the second node to respond to the indication message and acquire the load balancing information of the group to which the second container group belongs.
As an optional implementation manner of the embodiment of the present disclosure, the second node is a control node in the server cluster, and the apparatus for updating a container group further includes:
a user input interface configured to: receiving the indication message input by a user;
the controller is specifically configured to, in response to the indication message, acquire load balancing information of a group to which the second container group belongs.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, obtain the load balancing information of the group to which the second container group belongs again, and obtain the traffic distribution state of the second container group based on the load balancing information.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, wait for a first duration, obtain the load balancing information of the group to which the second container group belongs again after the first duration, and obtain the traffic distribution state of the second container group based on the load balancing information.
As an optional implementation manner of the embodiment of the present disclosure, the controller is specifically configured to:
obtain the load balancing information of the group to which the second container group belongs multiple times in succession;
determine a plurality of traffic distribution states of the second container group according to the load balancing information obtained in the successive acquisitions;
and delete the second container group when all of the plurality of traffic distribution states indicate that traffic is no longer distributed to the second container group.
In a second aspect, a container group updating method is provided, which includes:
creating a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generating container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtaining load balancing information of the group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two same or different nodes in the server cluster;
acquiring the traffic distribution state of the second container group from the load balancing information;
and deleting the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group.
As an optional implementation manner of the embodiment of the present disclosure, the load balancing information includes traffic distribution information corresponding to the IP addresses of the first container group and the second container group, respectively;
The method further comprises the following steps:
when the first container group is created on the first node, allocating a first IP address to the first container group as the route forwarding information of the first container group, and saving the route forwarding information of the first container group on each node in the server cluster, so that the traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
The generating of the container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster includes:
generating container group change information to trigger deletion of the second IP address of the second container group from each node in the server cluster;
The obtaining of the traffic distribution state of the second container group from the load balancing information includes:
acquiring the traffic distribution state corresponding to the second IP address from the load balancing information.
In a third aspect, the present disclosure provides a computer-readable storage medium comprising: the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the container group updating method as shown in the second aspect.
In a fourth aspect, the present disclosure provides a computer program product comprising a computer program which, when run on a computer, causes the computer to implement the container group updating method as shown in the second aspect.
The embodiments of the present disclosure provide a container group update apparatus and a container group update method. The container group update apparatus includes a controller configured to: create a first container group on a first node of a server cluster, where the first container group and a second container group existing in the server cluster are container groups of different versions for a target service; after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and acquire load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, the second container group is a container group running on the second node, and the first node and the second node are any two same or different nodes in the server cluster; acquire the traffic distribution state of the second container group from the load balancing information; and delete the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group. With this scheme, in a container group update scenario, a first container group used to replace the second container group is created, and deletion of the route forwarding information of the second container group from each node in the server cluster is triggered. Because the load balancing information of the group in which the second container group is located can be acquired and the traffic distribution state of the second container group determined from it, whether traffic is still distributed to the second container group can be judged. When the traffic distribution state indicates that traffic is no longer distributed to the second container group, the route forwarding information of the second container group has already been deleted from each node; deleting the second container group at this time means that network traffic is no longer forwarded to the deleted old container group and no connection timeout problem occurs, which improves the performance of the K8s system during a Deployment rolling upgrade and realizes a rolling upgrade that is imperceptible to the user.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure or of the prior art, the drawings used in the description of the embodiments or of the prior art are briefly described below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a diagram illustrating an example of a container group operating in a node according to an embodiment of the present disclosure;
fig. 2A is a schematic diagram of an architecture of a K8s cluster according to an embodiment of the present disclosure;
fig. 2B is an architectural schematic diagram of a relationship between a control node and a working node in a K8s cluster according to an embodiment of the present disclosure;
fig. 2C is a schematic flowchart of creating a new Pod in the K8s system shown in fig. 2A during execution of the deployment rolling upgrade according to the embodiment of the present disclosure;
fig. 2D is a schematic flowchart of deleting an old Pod in the K8s system shown in fig. 2A during execution of deployment rolling upgrade according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a hardware configuration of a container group update apparatus 300 according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of a container group updating method provided by an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of deleting a second Pod according to an embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another method for deleting a second Pod according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating a method for deleting a second Pod by setting a waiting time according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram illustrating an execution timing sequence of two asynchronous operations according to an embodiment of the present disclosure;
FIG. 9 is a timing diagram illustrating another exemplary execution sequence of two asynchronous operations according to an embodiment of the present disclosure;
fig. 10 is a flowchart illustrating a method for deleting a second Pod through load balancing information according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
In order to better understand the solutions provided in the embodiments of the present disclosure, some related technologies related to the embodiments of the present disclosure are described below:
container group (Pod): is the smallest deployable unit in the K8s system. The container group represents an independently running instance of the K8s system, which may consist of a single container or several containers coupled together. The K8s system may include multiple nodes, and each node may run one or multiple container groups. Illustratively, as shown in fig. 1, an exemplary diagram for operating a container group in one node provided by the embodiment of the present disclosure is shown, two container groups, namely a container group a and a container group B, are operated in the node in fig. 1, where the container group a includes a container 1 and a container 2, and the container group B includes a container 3 and a container 4.
K8s system: an open-source container orchestration engine that supports automated deployment and large-scale, scalable, containerized application management. The K8s system is used to manage containerized applications on multiple hosts in a cloud platform and can be applied to a server cluster, which may comprise multiple servers; a server cluster to which the K8s system is applied is also called a K8s cluster, and each server can serve as a node.
For example, fig. 2A is an architecture diagram of a K8s cluster provided in the embodiment of the present disclosure; it can be seen that the K8s cluster includes a control node (Master Node), a plurality of working nodes (Worker Nodes), and an Overlay Network.
The Overlay Network generally carries applications on top of the network without large-scale modification of the underlying network, can be separated from other network services, and is mainly based on Internet Protocol (IP) network technology. The Overlay Network is a virtual network constructed on top of the existing physical network, and the upper-layer applications interact only with the virtual network. The Overlay Network mainly comprises three parts:
an edge device: refers to a device directly connected to a virtual machine;
a control plane: the system is mainly responsible for the establishment and maintenance of the virtual tunnel and the notification of host reachability information;
forwarding plane: and a physical network for bearing the Overlay message.
As shown in fig. 2A, the control node (Master node) includes:
database (etcd): the etcd is a distributed database storing data (KV) by Key-value pairs, generally called as a KV database, and is used for storing related data in a cluster.
Application Program Interface (API) service (API Server): APIServer is a unified portal to the cluster, operating in a software architecture style (RESTful) and handed to etcd storage (the only component that can access etcd). The API Server may provide authentication, authorization, access control, API registration and discovery, among other mechanisms. The API Server may be accessed through a command line tool (kubecect), a visualization panel (dashboard), or a Software Development Kit (SDK), etc.
Node scheduling (Scheduler): the Scheduler is used for selecting node application deployment.
And the controller manager is used for processing conventional background tasks in the cluster, one resource corresponds to one controller, and the state of the cluster is monitored at the same time, so that the consistency of the actual state and the final state is ensured.
As shown in fig. 2A, each work Node (Node) includes:
node management module (kubelet): and the kubel is equivalent to a Master Node which is sent to a Node representative, manages a local container and reports data to the API Server.
Container Runtime (Container Runtime): a Container Runtime is a Container Runtime environment in which multiple Pod can run, and a K8s cluster can support multiple Container Runtime environments and any software that implements a Container Runtime Interface (CRI).
Implementation Service (Service) abstraction component (kube-proxy): and the kube-proxy is used for realizing the k8s cluster communication and load balancing.
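Because the API Server is the single entry point to the cluster, programs usually reach it through a generated client rather than talking to etcd or the other components directly. The following client-go sketch (assuming the program runs inside a Pod of the cluster; the namespace is illustrative) builds such a client and lists Pods through the API Server.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client that talks to the API Server using the Pod's
	// in-cluster service account (assumes the code runs inside the cluster).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// List the Pods in the "default" namespace through the API Server.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.PodIP)
	}
}
```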
When a Deployment rolling upgrade is performed in the K8s system, a new container group (Pod) is created first, and the old Pod is then deleted; there are two asynchronous operations in the process of deleting the old Pod: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod itself. Since the two operations are executed asynchronously, their order cannot be controlled, so the old Pod may be deleted before its route forwarding information has been deleted from every node.
As can be seen from fig. 2A, the nodes of the K8s cluster have two roles, control nodes and working nodes. The relationship between the control node and the working nodes in the K8s cluster is shown in fig. 2B, where the control node is responsible for the management and control of the whole cluster, including the management and control of multiple working nodes. It should be noted that fig. 2B shows three working nodes (working node 1, working node 2, and working node 3) as an example; in practice, the K8s cluster may include more or fewer working nodes, and the embodiment of the present disclosure is not limited thereto.
In the embodiment of the present disclosure, the container group update device may be a server cluster; the container group update device may also be one node or multiple nodes in the server cluster, for example, the container group update device may be a device formed by the first node or the second node; the container group update device may also be a device for managing the server cluster independently of the server cluster. For example, the server cluster may be a K8s cluster as shown in fig. 2A, the first node and the second node may be the same node in the server cluster, or the first node and the second node may be two different nodes in the server cluster, for example, the first node and/or the second node may be any one of the control node and the plurality of working nodes shown in fig. 2A.
Each node (host) in the server cluster usually runs multiple Pods (comparable to multiple virtual machines). To distinguish the Pods, an IP address needs to be allocated to each container group on each node. When a client wants to access the service corresponding to a certain Pod, the working node the client initially reaches may not be the node running that Pod; the accessed working node then needs the route forwarding information corresponding to the IP address of the Pod, so that the traffic corresponding to the client's access request can be forwarded to the Pod according to that route forwarding information. Therefore, in the server cluster, the route forwarding information corresponding to the IP address allocated to each Pod is stored on every node, so that when any node receives an access request for a service corresponding to a Pod on another node, it can query the route forwarding table, find the Pod corresponding to the service, and forward the traffic to it. To implement this, a consistent route forwarding table (iptables) must be stored on each node in the server cluster; the table records the route forwarding information of the different Pods in the cluster, and the rules for forwarding traffic are obtained from it.
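The per-node route forwarding table can be modeled, purely for illustration, as a map from Pod IP address to the node that runs the Pod; the sketch below is not real iptables handling, but it shows why every node must hold the same entries and why a stale entry misdirects traffic.

```go
package main

import "fmt"

// routeTable is a simplified stand-in for the per-node iptables rules:
// it maps a Pod IP to the node that actually runs that Pod.
type routeTable map[string]string

// forward looks up the Pod IP and reports where the traffic would go.
func forward(t routeTable, podIP string) (string, bool) {
	node, ok := t[podIP]
	return node, ok
}

func main() {
	// Every node in the cluster is supposed to hold the same table
	// (IP addresses and node names are illustrative).
	tableOnNode1 := routeTable{"10.0.1.5": "node-2", "10.0.2.7": "node-3"}

	if node, ok := forward(tableOnNode1, "10.0.1.5"); ok {
		fmt.Println("forward traffic for 10.0.1.5 to", node)
	}

	// A stale entry pointing at an already-deleted Pod is exactly the
	// failure mode the disclosure is concerned with.
}
```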
In the process of performing Deployment (Deployment) rolling upgrade, a new Pod is created first, and an old Pod is deleted.
Fig. 2C is a schematic flow chart of creating a new Pod in the K8s system shown in fig. 2A during a deployment rolling upgrade; the process includes the following steps. 21. The application program service interface (API Server) receives a request to add a new Pod triggered by an administrator, who may be an administrator of the server cluster. 22. The API Server stores the resource content corresponding to the new Pod in the database (etcd). 23. etcd adds the new Pod to the scheduler's queue. 24. Node scheduling (Scheduler) determines the appropriate working node for the new Pod. 25. The node management module (kubelet) on the determined working node is notified that a new Pod needs to be scheduled onto that node, and the node management module on the working node creates the new Pod. 26. The Container Network Interface (CNI) assigns an IP address to the new Pod. 27. The Container Storage Interface (CSI) allocates storage space for the new Pod. 28. The new Pod is created through the Container Runtime Interface (CRI). 29. After the new Pod is created, the node management module stores the IP address allocated to the new Pod in etcd. 30. The Pod change information in etcd is monitored. 31. The iptables on each node are updated by the Service abstraction component (kube-proxy) on that node; updating the iptables on each node may consist of adding the route forwarding information corresponding to the IP address of the new Pod.
It can be seen that, in the process of creating a new Pod, the routing forwarding information corresponding to the IP address of the new Pod needs to be added in each node, so that the routing forwarding information of the Pod is stored in each node.
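On the client side, the creation flow above begins with a single create request against the API Server (step 21); the remaining steps then run inside the cluster. A hedged client-go sketch is shown below; clientset construction is as in the earlier API Server sketch, and all names and the image are illustrative.

```go
package podupdate

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNewPod submits a new Pod to the API Server (step 21 of the creation
// flow); scheduling, IP allocation and the iptables updates on every node
// then happen inside the cluster. All names and the image are illustrative.
func createNewPod(ctx context.Context, clientset kubernetes.Interface, namespace string) (*corev1.Pod, error) {
	newPod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "target-service-new",
			Labels: map[string]string{"app": "target-service"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "example/target-service:v2"}},
		},
	}
	return clientset.CoreV1().Pods(namespace).Create(ctx, newPod, metav1.CreateOptions{})
}
```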
Correspondingly, after the creation of the new Pod is completed, the old Pod still needs to be deleted. Fig. 2D is a schematic diagram of the process of deleting an old Pod in the K8s system shown in fig. 2A during a deployment rolling upgrade; the process includes the following steps. 211. The application service interface (API Server) receives a request to delete the old Pod triggered by an administrator, who may be an administrator of the server cluster. 221. The API Server changes the state of the old Pod in the database (etcd) to the Terminating state. 231. The API Server adds the old Pod to the scheduler's queue. 241. The node Scheduler determines the working node on which the old Pod is running. 251. The node management module (kubelet) on the determined working node is informed that the old Pod needs to be deleted; the node management module on that working node triggers deletion of the old Pod, triggering asynchronous operations in the CNI, CSI, and CRI. 261. The CNI deletes the route forwarding information corresponding to the IP address of the old Pod. 271. The CSI deletes the stored data of the old Pod. 281. The old Pod is deleted through the CRI. In step 261, the route forwarding information corresponding to the IP address of the old Pod on each node in the server cluster needs to be deleted; the specific deletion may be implemented by the Service abstraction component (kube-proxy) on each node deleting, from the iptables on that node, the route forwarding information corresponding to the IP address of the old Pod.
The route forwarding information of the old Pod is stored on every node; the storage process is described above in the process of creating a new Pod. During the deletion of the old Pod there are two asynchronous operations: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod from the node where it is located. Since every node holds the route forwarding information of the old Pod, kube-proxy needs to delete the route forwarding information corresponding to the IP address of the old Pod from the iptables of every node, which may take some time. During this period, if the asynchronous operation of deleting the old Pod from its node has already been executed while the route forwarding information corresponding to the IP address of the old Pod has not yet been deleted on some nodes, those nodes may continue to forward traffic to the old Pod according to that route forwarding information, and a connection timeout problem may occur.
To solve the above problem, the present disclosure provides a container group update apparatus and a container group update method. In a container group update scenario, a first container group (new Pod) used to replace a second container group (old Pod) running on a second node is created on a first node, and deletion of the route forwarding information of the second container group from each node in the server cluster is triggered. Because the load balancing information of the group in which the second container group is located can be acquired and the traffic distribution state of the second container group determined from it, whether traffic is still distributed to the second container group can be judged. When the traffic distribution state indicates that traffic is no longer distributed to the second container group, the route forwarding information of the second container group has already been deleted from each node; deleting the second container group at this time means that network traffic is no longer forwarded to the deleted old container group and no connection timeout problem occurs, which improves the performance of the K8s system during a Deployment rolling upgrade and realizes a rolling upgrade that is imperceptible to the user.
As shown in fig. 3, a hardware configuration block diagram of a container group update apparatus 300 provided for an embodiment of the present disclosure, where the container group update apparatus 300 shown in fig. 3 includes: communicator 310, controller 320, memory 330, user input interface 340, and power supply 350, among others.
Among other things, communicator 310 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator 310 may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver.
The controller 320 includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a Random Access Memory (RAM), a Read-Only Memory (ROM), first to n-th input/output interfaces, a communication bus (Bus), and the like.
The Memory 330 includes volatile Memory in a computer readable medium, Random Access Memory (RAM), and/or nonvolatile Memory such as Read-Only Memory (ROM) or flash Memory (flash RAM). The memory is an example of a computer-readable medium. Computer readable media includes both permanent and non-permanent, removable and non-removable storage media. Storage media may implement information storage by any method or technology, and the information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
A user input interface 340 for receiving user input commands. The user may refer to an administrator of the server cluster for the container group update device.
In some embodiments, the controller 320 is configured to: create a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two same or different nodes in the server cluster;
acquire the traffic distribution state of the second container group from the load balancing information;
and delete the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group.
In some embodiments, the load balancing information includes traffic allocation information corresponding to IP addresses of the first container group and the second container group, respectively;
the controller 320 is specifically configured to, when the first container group is created on the first node, allocate a first IP address to the first container group as the routing forwarding information of the first container group, and store the routing forwarding information of the first container group on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address; after the first container group is created, generating container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster, and acquiring a traffic distribution state corresponding to the second IP address from the load balancing information.
In some embodiments, the controller 320 is specifically configured to, when receiving the indication message for deleting the second container group, obtain load balancing information of a group to which the second container group belongs.
In some embodiments, the second node is a working node in the server cluster, and the container group update apparatus 300 further includes:
a communicator 310 to: realizing information interaction between the control node and the working node;
the controller 320 is specifically configured to: sending the indication message from the control node to the second node through the communicator 310; and controlling the second node to respond to the indication message and acquire the load balancing information of the group to which the second container group belongs.
In some embodiments, the second node is a control node in the server cluster, and the container group update apparatus further includes:
a user input interface 340 for: receiving the indication message input by a user;
the controller 320 is specifically configured to, in response to the indication message, obtain load balancing information of a group to which the second container group belongs.
In some embodiments, the controller 320 is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, obtain the load balancing information of the group to which the second container group belongs again, so that the traffic distribution state of the second container group is obtained based on the load balancing information.
In some embodiments, the controller 320 is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, wait for a first duration, obtain the load balancing information of the group to which the second container group belongs again after the first duration, and obtain the traffic distribution state of the second container group based on the load balancing information.
In some embodiments, the controller 320 is specifically configured to:
obtain the load balancing information of the group to which the second container group belongs multiple times in succession;
determine a plurality of traffic distribution states of the second container group according to the load balancing information obtained in the successive acquisitions;
and delete the second container group when all of the plurality of traffic distribution states indicate that traffic is no longer distributed to the second container group.
For a more detailed description of the present solution, the following description is provided by way of example in conjunction with the accompanying drawings. It should be understood that, in actual implementation, the steps involved in the following drawings may be more or fewer, and the order between the steps may be different, as long as the container group updating method provided in the embodiments of the present disclosure can be implemented.
The container group updating method provided by the embodiment of the present disclosure may be implemented by a container group updating device, or implemented by a part of functional modules or a part of functional entities on the container group updating device.
As shown in fig. 4, a schematic flow chart of a container group updating method provided in an embodiment of the present disclosure may include the following steps 401 to 406:
401. a first container group is created on a first node of a server cluster.
Wherein the server cluster includes, but is not limited to, the first node and the second node. An existing second container group runs on the second node. The first node and the second node may be physical machines in the server cluster, or may be virtual machines running on physical machines.
The first container group (hereinafter also referred to as a first Pod) and the second container group (hereinafter also referred to as a second Pod) are different version container groups for the target service.
In some embodiments, when the first container group is created on the first node, the first IP address is allocated to the first container group as the routing and forwarding information of the first container group, and the routing and forwarding information of the first container group is saved on each node in the server cluster, so that the traffic corresponding to the target service is forwarded to the first container group according to the first IP address.
It should be noted that, the above description of creating the first Pod on the first node of the server cluster may refer to the description of creating a new Pod in fig. 2C, and is not described herein again.
402. After the first container group creation is completed, container group change information is generated to trigger deletion of route forwarding information for the second container group from each node in the server cluster.
In some embodiments, after the first container group is created, container group change information is generated to trigger deletion of a second IP address of a second container group from each node in the server cluster, and a traffic distribution state corresponding to the second IP address is obtained from the load balancing information.
After the first Pod is created on the first node, the kubelet on the first node stores the route forwarding information of the first Pod into etcd, and etcd generates the container group change information; after the container group change information is monitored, the route forwarding information of the second Pod is deleted from each node in the server cluster. Specifically, the kube-proxy of each node may delete the route forwarding information corresponding to the IP address of the old Pod from the iptables of that node.
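The per-node reaction to the container group change information can be sketched, again purely for illustration, as a handler that removes the stale entry from a node-local table; real kube-proxy rewrites iptables rules, which the map below only stands in for.

```go
package main

import (
	"fmt"
	"sync"
)

// nodeRouteTable stands in for the iptables entries kept on one node.
type nodeRouteTable struct {
	mu      sync.Mutex
	entries map[string]string // Pod IP -> node running the Pod
}

// onPodChange is what each node's proxy conceptually does when it observes
// the container group change information: drop the old Pod's entry.
func (t *nodeRouteTable) onPodChange(deletedPodIP string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.entries, deletedPodIP)
}

func main() {
	// Three nodes, all holding the old Pod's route entry (illustrative IP).
	nodes := []*nodeRouteTable{
		{entries: map[string]string{"10.0.2.7": "node-2"}},
		{entries: map[string]string{"10.0.2.7": "node-2"}},
		{entries: map[string]string{"10.0.2.7": "node-2"}},
	}

	// The change event fans out to every node; only after all of them have
	// processed it is the old Pod unreachable through stale routes.
	for _, n := range nodes {
		n.onPodChange("10.0.2.7")
	}
	fmt.Println("old Pod route forwarding information removed on all nodes")
}
```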
403. And acquiring the load balancing information of the group to which the second container group belongs.
The group comprises a first container set and a second container set. The load balancing information includes traffic distribution information corresponding to the IP addresses of the first container group and the second container group, respectively.
In some embodiments, the group may further include other container groups except the first container group and the second container group, and load balancing may be achieved by scheduling traffic distribution conditions of different container groups in the group.
The second container group is a container group running on the second node, and the first node and the second node are any two same or different nodes in the server cluster.
The load balancing information may be obtained as follows: the kube-proxy on the second node first obtains the route forwarding information of each Pod from the iptables on the second node, then determines the route forwarding information of the Pods corresponding to the same service, and finally performs load balancing based on the traffic distribution over the route forwarding information of the Pods corresponding to the same service, thereby obtaining the load balancing information.
In some embodiments, after 402, the load balancing information of the group to which the second container group belongs may be obtained in case of receiving an indication message to delete the second container group.
In some embodiments, obtaining the traffic distribution state of the second container group may be implemented as: acquiring the traffic distribution state corresponding to the second IP address from the load balancing information, where the second IP address is the IP address of the second container group.
Illustratively, table 1 shows the container groups in the group, their IP addresses, and the corresponding traffic distribution states in the load balancing information. As shown in table 1, each of the container groups corresponds to an IP address, and each IP address corresponds to a traffic distribution state (referred to as a target state).
TABLE 1
Pod     IP address      Target state (traffic distribution state)
Pod1    IP address 1    Traffic-ended state
Pod2    IP address 2    Non-traffic-ended state
Pod3    IP address 3    Non-traffic-ended state
Pod4    IP address 4    Non-traffic-ended state
As shown in table 1, the group includes 4 Pods. The traffic distribution state corresponding to Pod1 is the traffic-ended state; at this time, no network traffic flows into Pod1 any more, which indicates that the route forwarding information of Pod1 has been deleted from the iptables of every node in the server cluster. Correspondingly, the traffic distribution states corresponding to Pod2, Pod3, and Pod4 are non-traffic-ended states; network traffic still flows into Pod2, Pod3, and Pod4, which indicates that their route forwarding information has not been deleted from the iptables of every node in the server cluster, and load balancing among Pod2, Pod3, and Pod4 can continue.
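Read programmatically, table 1 is a mapping from Pod IP address to a traffic distribution state, and the deletion decision in the following steps reduces to a lookup. The Go sketch below uses illustrative types and placeholder IP values.

```go
package main

import "fmt"

type trafficState int

const (
	trafficEnded    trafficState = iota // no traffic is distributed to the Pod any more
	trafficNotEnded                     // traffic is still being distributed to the Pod
)

// loadBalancingInfo is an illustrative stand-in for the load balancing
// information of the group: one traffic distribution state per Pod IP.
type loadBalancingInfo map[string]trafficState

// mayDelete reports whether the Pod with the given IP can be deleted safely.
func mayDelete(info loadBalancingInfo, podIP string) bool {
	state, ok := info[podIP]
	return ok && state == trafficEnded
}

func main() {
	// The contents of table 1 (IP addresses are placeholders).
	info := loadBalancingInfo{
		"ip-1": trafficEnded,
		"ip-2": trafficNotEnded,
		"ip-3": trafficNotEnded,
		"ip-4": trafficNotEnded,
	}

	fmt.Println("Pod1 deletable:", mayDelete(info, "ip-1")) // true
	fmt.Println("Pod2 deletable:", mayDelete(info, "ip-2")) // false
}
```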
The container group updating method provided in the embodiment of the present disclosure may be applied to a working node or a control node in a K8s cluster, that is, the second Pod may be a Pod operating on the working node, and the second Pod may also be a Pod operating on the control node.
Exemplarily, in combination with the architecture of the K8s cluster shown in fig. 2A, fig. 5 provides a schematic flow diagram of deleting the second Pod. In fig. 5, when the second Pod is deleted during a service rolling update in the K8s cluster, the API service (API Server) first receives an indication message (which may be a command) triggered by the administrator to delete the second Pod (i.e., the second container group). After receiving the indication message, the API Server synchronously changes the state of the second Pod in etcd to the Terminating state, the node Scheduler adds the second Pod to its queue, and the API Server sends the indication message for deleting the second Pod to the node management module (kubelet) of the node where the second Pod is located. After the kubelet receives the indication message, it executes the asynchronous operations, which may include the Container Runtime Interface (CRI) deleting the second Pod, the Container Network Interface (CNI) deleting the route forwarding information of the second Pod, and the Container Storage Interface (CSI) deleting the storage information of the second Pod.
When the second node is a control node in the server cluster, the second Pod is a Pod running on the control node; the control node can receive the indication message input by a user, and the control node responds to the indication message to acquire the load balancing information of the group to which the second container group belongs. That is, the whole workflow shown in fig. 5 can be completed within the control node. It should be noted that, besides the modules shown in fig. 2A, the control node may also include the related modules of a working node in order to implement the whole workflow shown in fig. 5.
When the second node is a working node in the server cluster, the second Pod is a Pod running on the working node; when the second container group is deleted, the working node may receive the indication message sent by the control node, and the working node responds to the indication message to acquire the load balancing information of the group to which the second container group belongs. That is, the whole workflow shown in fig. 5 needs to be completed by the control node and the working node together.
For example, fig. 6 provides another schematic flow diagram of deleting the second Pod. In fig. 6, when the second Pod is deleted during a service rolling update in the K8s cluster, the API service (API Server) in the control node receives an indication message (which may be an instruction) triggered by the administrator to delete the second Pod (i.e., the second container group). After receiving the indication message, the API Server synchronously changes the state of the second Pod in the etcd of the control node to the Terminating state, the node Scheduler in the control node adds the second Pod to its queue, and the API Server sends the indication message for deleting the second Pod to the node management module (kubelet) of the working node where the second Pod is located. After the kubelet receives the indication message, it executes the asynchronous operations, which may include the Container Runtime Interface (CRI) deleting the second Pod, the Container Network Interface (CNI) deleting the route forwarding information of the second Pod, and the Container Storage Interface (CSI) deleting the stored information of the second Pod.
404. The traffic distribution state of the second container group is determined according to the load balancing information.
For example, the traffic distribution state of the second container group may be determined from load balancing information such as that shown in table 1.
405. It is determined whether the traffic distribution state of the second container group indicates that traffic is no longer distributed to the second container group.
If the traffic distribution state of the second container group indicates that traffic is no longer distributed to the second container group, for example the traffic distribution state is the traffic-ended state, the following step 406 is performed and the second container group is deleted; if the traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, the process returns to step 402, and the load balancing information of the group to which the second container group belongs is acquired again.
In some embodiments, when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, the load balancing information of the group to which the second container group belongs is obtained again, so that the traffic distribution state of the second container group is obtained based on the load balancing information.
In some embodiments, when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, the method waits for a first duration, obtains the load balancing information of the group to which the second container group belongs again after the first duration, and obtains the traffic distribution state of the second container group based on the load balancing information.
For example, the interval for acquiring the load balancing information may be preset as the first duration; that is, the load balancing information of the group to which the second container group belongs may be acquired once every first duration.
In the above embodiment, when the obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, some route forwarding information corresponding to the second container group may still exist in the server cluster. In this case, after waiting for the first duration, the load balancing information of the group to which the second container group belongs can be obtained again, the traffic distribution state of the second container group is obtained based on that load balancing information, and the judgment is made again, until the traffic distribution state of the second container group indicates that traffic is no longer distributed to the second container group; it is then determined that all nodes in the server cluster have deleted the route forwarding information corresponding to the second container group, and the second container group can be deleted.
406. The second group of containers is deleted.
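A hedged end-to-end sketch of steps 403 to 406 is given below. It approximates the load balancing information of the group by the Endpoints object of the Service that fronts the group, which is one plausible source of per-IP traffic distribution information in a K8s cluster but is not stated by the disclosure; the namespace, Service name, Pod name, and the polling interval (playing the role of the first duration) are all illustrative.

```go
package deleter

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteOldPodWhenDrained polls until the old Pod's IP no longer appears in
// the Service's Endpoints (taken here as a stand-in for the group's load
// balancing information), then deletes the old Pod. The interval plays the
// role of the "first duration" in the disclosure; all names are illustrative.
func deleteOldPodWhenDrained(ctx context.Context, cs kubernetes.Interface,
	namespace, serviceName, podName, oldPodIP string, interval time.Duration) error {

	for {
		ep, err := cs.CoreV1().Endpoints(namespace).Get(ctx, serviceName, metav1.GetOptions{})
		if err != nil {
			return err
		}

		stillServing := false
		for _, subset := range ep.Subsets {
			for _, addr := range subset.Addresses {
				if addr.IP == oldPodIP {
					stillServing = true // traffic can still be distributed to the old Pod
				}
			}
		}

		if !stillServing {
			// Traffic is no longer distributed to the old Pod: safe to delete it.
			return cs.CoreV1().Pods(namespace).Delete(ctx, podName, metav1.DeleteOptions{})
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval): // wait the first duration, then re-check
		}
	}
}
```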
As can be seen from the flows of deleting the second Pod shown in fig. 5 and fig. 6, after the administrator's request to delete the second Pod reaches the API Server, the kubelet receives the notification of deleting the second Pod and starts two asynchronous actions: one is responsible for deleting the route forwarding information of the second Pod, and the other is responsible for deleting the second Pod.
(1) Deleting the route forwarding information of the second Pod:
the Endpoint (Endpoint) Controller (Controller) listens for events of the API Server and then deletes the second Pod from the corresponding Endpoint. Where Endpoint is used to connect services and Pod, Endpoint is a list of IP addresses and ports (ports) of a service.
Further, after the Endpoint Controller finishes processing, the request is sent to the API Server; kube-proxy and the Core Domain Name System (CoreDNS) monitor the event, kube-proxy updates the iptables on each node, and CoreDNS updates the Domain Name System (DNS) records.
(2) Deleting the second Pod:
The kubelet listens for the API Server event and sends a termination signal (SIGTERM) to the processes (applications) running in the second Pod.
The two workflows are asynchronous and independent, and both start as soon as the deletion instruction is received. If the second Pod is deleted before its route forwarding information has been removed from all nodes in the server cluster, traffic for the second Pod is forwarded to an already deleted Pod, causing connection timeouts: users are abruptly disconnected from services they were connected to, new users cannot connect to the service, and usability and user experience suffer.
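To make the race concrete, the toy Python sketch below models the two workflows as independent asynchronous tasks; the random delays are invented purely for illustration and do not correspond to measured values in this disclosure.

    import asyncio
    import random

    async def delete_route_forwarding_info() -> None:
        # Endpoint update, kube-proxy iptables refresh and CoreDNS update take an unpredictable time.
        await asyncio.sleep(random.uniform(0.1, 3.0))
        print("route forwarding information removed from all nodes")

    async def delete_pod() -> None:
        # The kubelet sends SIGTERM and removes the Pod after its own, independent delay.
        await asyncio.sleep(random.uniform(0.1, 3.0))
        print("second Pod deleted")

    async def main() -> None:
        # Both tasks start as soon as the deletion instruction is received; nothing orders one before
        # the other, so the Pod may be gone while stale routes still point at it.
        await asyncio.gather(delete_route_forwarding_info(), delete_pod())

    asyncio.run(main())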
In the related art, K8s provides two lifecycle hooks for a Pod: one fired after a container is started (postStart) and one fired before a container is terminated (preStop). To ensure that the route forwarding information of the second Pod is deleted before the second Pod itself, the related art provides an implementation that deletes the second Pod after a configured waiting time.
For example, fig. 7 shows a flowchart of a method for deleting the second Pod by setting a waiting time, provided in an embodiment of the present disclosure; the flowchart may include the following steps:
701. An indication message to delete the second Pod is received.
After the node receives the indication message to delete the second Pod, the following 702 and 703 are performed.
702. The pre-termination container hook (PreStop hook) is executed, setting the waiting time.
703. The second Pod is deleted after the waiting time.
The waiting time is set to reserve time for deleting the route forwarding information of the second Pod on each node in the server cluster. For example, the waiting time may be set to 5 s to 10 s.
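For illustration, the related-art configuration can be pictured as the fragment below, written as a plain Python dictionary that mirrors the standard Pod spec fields lifecycle.preStop.exec.command; the 10-second value is only an example taken from the range above and is not prescribed by this disclosure.

    # Minimal sketch of the waiting-time approach: block termination for a fixed, empirically chosen
    # duration so that route deletion has a chance to finish before the Pod is removed.
    PRE_STOP_WAIT_SECONDS = 10  # example value within the 5 s to 10 s range mentioned above

    container_fragment = {
        "lifecycle": {
            "preStop": {
                "exec": {
                    # The container sleeps in its PreStop hook; the Pod is deleted only after the sleep ends.
                    "command": ["/bin/sh", "-c", f"sleep {PRE_STOP_WAIT_SECONDS}"],
                }
            }
        }
    }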
Since the two operations, deleting the route forwarding information of the second Pod on each node in the server cluster and deleting the second Pod, are processed asynchronously, the above solution must ensure that the route forwarding information of the second Pod is deleted on all nodes in the server cluster before the second Pod is deleted.
For example, fig. 8 is a schematic execution timing diagram of the two asynchronous operations provided in an embodiment of the present disclosure. Assume that the indication message to delete the second Pod is received at time T1, the pre-termination container hook (PreStop hook) is executed with the waiting time set to 10 s, and the operation of deleting the second Pod is executed at time T3. If the operation of deleting the route forwarding information of the second Pod on each node in the server cluster has already been executed by time T2 shown in fig. 8, it can be ensured that the route forwarding information of the second Pod on all nodes in the server cluster is deleted before the second Pod is deleted.
Illustratively, fig. 9 is a schematic execution timing diagram of another pair of asynchronous operations provided in an embodiment of the present disclosure. Assume again that the indication message to delete the second Pod is received at time T1, the PreStop hook is executed with the waiting time set to 10 s, and the operation of deleting the second Pod is executed at time T3. If the operation of deleting the route forwarding information of the second Pod is not executed until time T4 shown in fig. 9, it cannot be guaranteed that the route forwarding information of the second Pod is deleted from all nodes in the server cluster before the second Pod is deleted.
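The difference between figs. 8 and 9 reduces to a single inequality: the fixed wait is safe only when route deletion finishes no later than T3 = T1 + waiting time. The Python fragment below illustrates this; the timestamps are invented to mirror the two figures.

    PRE_STOP_WAIT_SECONDS = 10  # fixed waiting time configured in the PreStop hook

    def fixed_wait_is_safe(route_deletion_done_at: float, t1: float = 0.0) -> bool:
        """True if route forwarding information is gone before the Pod is deleted at T3 = T1 + wait."""
        t3 = t1 + PRE_STOP_WAIT_SECONDS
        return route_deletion_done_at <= t3

    print(fixed_wait_is_safe(2.0))   # fig. 8 case: routes gone 2 s after T1 -> True, but the Pod still idles until T3
    print(fixed_wait_is_safe(12.0))  # fig. 9 case: routes gone 12 s after T1 -> False, stale routes outlive the Pod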
In the related art, the above solution has the following drawbacks:
1. the waiting time follows no rule and can only be set empirically;
2. every adjustment of the waiting time requires updating the entire online service, so the impact scope is large;
3. as the number of servers in the K8s cluster increases, the waiting time must be increased, so it has to be adjusted continually;
4. even when the second Pod could be deleted well before the waiting time expires, it still has to wait the full duration;
for example, as shown in fig. 8, the route forwarding information of the second Pod has already been deleted 2 s after time T1, but the second Pod is still not deleted until time T3, 10 s later.
5. The rolling update time increases.
In the embodiments of the present disclosure, to achieve a better effect, the traffic distribution state of the second Pod is confirmed through 401 to 405 above, by obtaining the load balancing information of the group to which the second Pod belongs.
Illustratively, the PreStop hook may still be executed before the Pod stops, but instead of setting a waiting time, one sidecar container group (sidecar Pod) is enabled to guarantee the ordering of the two asynchronous operations. As shown in fig. 10, a flowchart of a method for deleting the second Pod by using load balancing information according to an embodiment of the present disclosure may include the following steps 1001 to 1005:
1001. An indication message to delete the second Pod is received.
1002. The PreStop hook is executed, enabling the target script of the sidecar Pod.
The target script is configured to acquire the current IP traffic distribution state (target state) from the load balancing information of the group (target group) to which the second Pod belongs every few seconds (an illustrative sketch of such a script is given after step 1005).
In the K8s cluster, each target group is used to route requests to one or more registered targets. When a listener rule is created, a target group and a condition are specified; when the rule condition is satisfied, traffic is forwarded to the corresponding target group. Different target groups may be created for different types of requests.
1003. By executing the target script, the current IP traffic distribution state is acquired every few seconds from the load balancing information of the group to which the second Pod belongs.
1004. Determine whether the traffic distribution state indicates that traffic is no longer distributed to the second container group.
If the traffic distribution state is "draining", traffic is no longer distributed to the second container group; the target script stops executing and the following 1005 is performed.
If the traffic distribution state is not "draining", detection continues: the flow returns to 1003, and the traffic distribution state of the current IP address of the second Pod is acquired again from the load balancing information of the group to which the second Pod belongs.
A traffic distribution state of "draining" indicates that the route forwarding information of the second Pod has been deleted and no traffic flows in.
1005. The second Pod is deleted.
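By way of example only, the target script could look like the sketch below. The disclosure does not name a concrete load balancer API, so this sketch assumes an AWS-style target group queried through boto3; the target group ARN, Pod IP, port, and 3-second polling interval are all placeholder assumptions.

    import sys
    import time

    import boto3  # assumed client library; any load balancer exposing a per-target state would do

    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/example/0123456789abcdef"  # placeholder
    POD_IP = "10.0.0.12"   # IP address of the second Pod (placeholder)
    POD_PORT = 8080        # service port of the second Pod (placeholder)
    POLL_SECONDS = 3       # "every few seconds"

    def main() -> None:
        elb = boto3.client("elbv2")
        while True:
            # 1003: read the traffic distribution state of the Pod's IP from the group's load balancing information.
            resp = elb.describe_target_health(
                TargetGroupArn=TARGET_GROUP_ARN,
                Targets=[{"Id": POD_IP, "Port": POD_PORT}],
            )
            state = resp["TargetHealthDescriptions"][0]["TargetHealth"]["State"]
            # 1004: "draining" means no new traffic is being distributed to the second Pod.
            if state == "draining":
                sys.exit(0)  # the PreStop hook completes, and 1005 (deleting the second Pod) can proceed
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()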
The beneficial effects of the embodiments in the present disclosure may include:
1. no waiting time needs to be configured; the second Pod is deleted only after the actual load-balancing traffic forwarding situation shows that no traffic is being forwarded to it;
2. only a one-time change to the online configuration is needed;
3. the approach is unaffected by growth of the K8s cluster;
4. the rolling update time is short.
The container group updating method provided by the embodiments of the present disclosure can acquire load balancing information of the group to which the second container group belongs, where the group includes a plurality of container groups and the second container group is a container group running on a node; determine the traffic distribution state of the second container group according to the load balancing information; and delete the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group. With this scheme, the traffic distribution state of the second container group is determined from the load balancing information, and only when that state indicates that traffic is no longer distributed to the second container group, meaning that all nodes in the server cluster have deleted the route forwarding information of the second container group, is the old Pod deleted. This avoids forwarding network traffic to an already deleted old Pod and eliminates the connection timeout problem, thereby improving the performance of the Deployment rolling upgrade process in a K8s system and achieving a rolling upgrade that is imperceptible to users.
In some embodiments, in the process of executing 1001 to 1005, the load balancing information of the group to which the second container group belongs may also be obtained multiple consecutive times; a plurality of traffic distribution states of the second container group are determined according to the load balancing information obtained in those consecutive acquisitions; and the second container group is deleted only when all of the plurality of traffic distribution states indicate that traffic is no longer distributed to the second container group.
Considering that, in an actual implementation, the traffic distribution state determined from a single acquisition of load balancing information may be misjudged, the load balancing information of the group to which the second container group belongs is obtained multiple consecutive times, a plurality of traffic distribution states of the second container group are determined from those acquisitions, and the second container group is deleted only when all of them are traffic end states. This improves the accuracy of the judgment.
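A sketch of this consecutive-confirmation variant, using the same assumed helper callables as the earlier polling sketch and an assumed requirement of three consecutive drained observations (the disclosure does not fix either number):

    import time
    from typing import Callable

    CONSECUTIVE_DRAINED_REQUIRED = 3  # assumed number of consecutive confirmations
    POLL_SECONDS = 3                  # assumed polling interval

    def delete_after_consecutive_confirmations(
        fetch_lb_info: Callable[[], dict],
        is_traffic_allocated: Callable[[dict], bool],
        delete_pod: Callable[[], None],
    ) -> None:
        """Delete the Pod only after several consecutive checks all report no traffic, to avoid a one-off misjudgment."""
        drained_in_a_row = 0
        while drained_in_a_row < CONSECUTIVE_DRAINED_REQUIRED:
            if is_traffic_allocated(fetch_lb_info()):
                drained_in_a_row = 0  # any sign of traffic resets the streak
            else:
                drained_in_a_row += 1
            time.sleep(POLL_SECONDS)
        delete_pod()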
An embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements each process of the container group updating method described above and achieves the same technical effects; to avoid repetition, details are not repeated here.
The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The present disclosure provides a computer program product including a computer program which, when run on a computer, causes the computer to implement the container group updating method described above.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the foregoing discussion in some embodiments is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A container group update apparatus, characterized in that the container group update apparatus comprises:
a controller configured to: creating a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are different version container groups aiming at target services;
after the first container group is created, generating container group change information to trigger deletion of routing forwarding information of a second container group from each node in the server cluster, and obtaining load balancing information of a group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two same or different nodes in the server cluster;
acquiring the flow distribution state of the second container group from the load balancing information;
and deleting the second container group when the traffic distribution state indicates that traffic is no longer distributed to the second container group.
2. The apparatus according to claim 1, wherein the load balancing information includes traffic allocation information corresponding to IP addresses of the first container group and the second container group, respectively;
the controller is specifically configured to:
when the first container group is created on the first node, allocating a first IP address to the first container group as the routing forwarding information of the first container group, and storing the routing forwarding information of the first container group on each node in the server cluster, so that the traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
after the first container group is created, generating container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster, and acquiring a traffic distribution state corresponding to the second IP address from the load balancing information.
3. The container group update apparatus according to claim 1, wherein the controller is specifically configured to:
and under the condition of receiving the indication message of deleting the second container group, acquiring the load balancing information of the group to which the second container group belongs.
4. The container group update apparatus according to claim 3, wherein the second node is a worker node in the server cluster, the apparatus further comprising:
a communicator configured to: realizing information interaction between the control node and the working node;
the controller is specifically configured to: sending, by the communicator, the indication message from the control node to the second node; and controlling the second node to respond to the indication message and acquire the load balancing information of the group to which the second container group belongs.
5. The container group update apparatus according to claim 3, wherein the second node is a control node in the server cluster, the apparatus further comprising:
a user input interface configured to: receiving the indication message input by a user;
the controller is specifically configured to, in response to the indication message, acquire load balancing information of a group to which the second container group belongs.
6. The container group update apparatus according to claim 1, wherein the controller is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, obtaining the load balancing information of the group to which the second container group belongs again, so that the traffic distribution state of the second container group is obtained based on the load balancing information.
7. The container group update apparatus according to claim 6, wherein the controller is specifically configured to:
when the most recently obtained traffic distribution state of the second container group indicates that traffic is still distributed to the second container group, waiting for a first duration, obtaining the load balancing information of the group to which the second container group belongs again after the first duration, and obtaining the traffic distribution state of the second container group based on the load balancing information.
8. The container group update apparatus according to claim 1, wherein the controller is specifically configured to:
acquiring the load balancing information of the group to which the second container group belongs multiple consecutive times;
determining a plurality of traffic distribution states of the second container group according to the load balancing information obtained in the consecutive acquisitions;
and deleting the second container group when the plurality of traffic distribution states all indicate that traffic is no longer distributed to the second container group.
9. A container group updating method, comprising:
creating a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are different version container groups aiming at target services;
after the first container group is created, generating container group change information to trigger deletion of routing forwarding information of a second container group from each node in the server cluster, and obtaining load balancing information of a group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two same or different nodes in the server cluster;
acquiring the flow distribution state of the second container group from the load balancing information;
deleting the second container group when the flow distribution status indicates that no flow is distributed to the second container group.
10. The method according to claim 9, wherein the load balancing information includes traffic allocation information corresponding to the IP addresses of the first container group and the second container group, respectively;
the method further comprises the following steps:
when the first container group is created on the first node, allocating a first IP address to the first container group as the routing forwarding information of the first container group, and storing the routing forwarding information of the first container group on each node in the server cluster, so that the traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
the generating of the container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster comprises:
generating container group change information to trigger deletion of the second IP address of the second container group from each node in the server cluster;
the obtaining the traffic distribution state of the second container group from the load balancing information includes:
and acquiring the flow distribution state corresponding to the second IP address from the load balancing information.
CN202210528978.XA 2022-05-16 2022-05-16 Container group updating equipment and container group updating method Active CN114938375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210528978.XA CN114938375B (en) 2022-05-16 2022-05-16 Container group updating equipment and container group updating method

Publications (2)

Publication Number Publication Date
CN114938375A true CN114938375A (en) 2022-08-23
CN114938375B CN114938375B (en) 2023-06-02

Family

ID=82865769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210528978.XA Active CN114938375B (en) 2022-05-16 2022-05-16 Container group updating equipment and container group updating method

Country Status (1)

Country Link
CN (1) CN114938375B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110213309A (en) * 2018-03-13 2019-09-06 腾讯科技(深圳)有限公司 A kind of method, equipment and the storage medium of binding relationship management
CN109067828A (en) * 2018-06-22 2018-12-21 杭州才云科技有限公司 Based on the more cluster construction methods of Kubernetes and OpenStack container cloud platform, medium, equipment
CN109150608A (en) * 2018-08-22 2019-01-04 苏州思必驰信息科技有限公司 Interface service upgrade method and system for voice dialogue platform
US20210011812A1 (en) * 2019-07-10 2021-01-14 Commvault Systems, Inc. Preparing containerized applications for backup using a backup services container and a backup services container-orchestration pod
US20210072966A1 (en) * 2019-09-05 2021-03-11 International Business Machines Corporation Method and system for service rolling-updating in a container orchestrator system
CN111163189A (en) * 2020-01-07 2020-05-15 上海道客网络科技有限公司 IP monitoring and recycling system and method based on network name space management and control
CN111258609A (en) * 2020-01-19 2020-06-09 北京百度网讯科技有限公司 Upgrading method and device of Kubernetes cluster, electronic equipment and medium
CN113364727A (en) * 2020-03-05 2021-09-07 北京金山云网络技术有限公司 Container cluster system, container console and server
CN113254165A (en) * 2021-07-09 2021-08-13 易纳购科技(北京)有限公司 Load flow distribution method and device for virtual machine and container, and computer equipment
CN113656168A (en) * 2021-07-16 2021-11-16 新浪网技术(中国)有限公司 Method, system, medium and equipment for automatic disaster recovery and scheduling of traffic
CN113923257A (en) * 2021-09-22 2022-01-11 北京金山云网络技术有限公司 Container group instance termination and creation method, device, electronic equipment and storage medium
CN113835836A (en) * 2021-09-23 2021-12-24 证通股份有限公司 System, method, computer device and medium for dynamically publishing container service
CN113949707A (en) * 2021-09-30 2022-01-18 上海浦东发展银行股份有限公司 OpenResty and K8S-based container cloud service discovery and load balancing method
CN114385349A (en) * 2021-12-06 2022-04-22 阿里巴巴(中国)有限公司 Container group deployment method and device
CN114461303A (en) * 2022-02-10 2022-05-10 京东科技信息技术有限公司 Method and device for accessing cluster internal service

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NGUYEN NGUYEN; TAEHONG KIM: "Toward Highly Scalable Load Balancing in Kubernetes Clusters", IEEE *
LIU BIAO: "Research and Implementation of Key Technologies of a Container-Based NFV Platform", China Masters' Theses Full-text Database *

Also Published As

Publication number Publication date
CN114938375B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US11677818B2 (en) Multi-cluster ingress
US10701139B2 (en) Life cycle management method and apparatus
US9999030B2 (en) Resource provisioning method
CN111542064B (en) Container arrangement management system and arrangement method for wireless access network
JP6113849B2 (en) Method and apparatus for automatically deploying geographically distributed applications in the cloud
US10778750B2 (en) Server computer management system for supporting highly available virtual desktops of multiple different tenants
CN105095317B (en) Distributed data base service management system
US10129096B2 (en) Commissioning/decommissioning networks in orchestrated or software-defined computing environments
TW201444320A (en) Setup method and system for client and server environment
CN110716787A (en) Container address setting method, apparatus, and computer-readable storage medium
KR20110083084A (en) Apparatus and method for operating server by using virtualization technology
CN113382077B (en) Micro-service scheduling method, micro-service scheduling device, computer equipment and storage medium
CN104506654A (en) Cloud computing system and backup method of dynamic host configuration protocol server
CN107809495B (en) Address management method and device
CN109992373B (en) Resource scheduling method, information management method and device and task deployment system
WO2021120633A1 (en) Load balancing method and related device
CN112637265B (en) Equipment management method, device and storage medium
JP6905990B2 (en) Delegation / delegation to a network in an orchestrated computing environment or a software-defined computing environment
CN109067573B (en) Traffic scheduling method and device
CN112532758B (en) Method, device and medium for establishing network edge computing system
JP2016177324A (en) Information processing apparatus, information processing system, information processing method, and program
CN112019362B (en) Data transmission method, device, server, terminal, system and storage medium
CN114938375B (en) Container group updating equipment and container group updating method
CN114827177B (en) Deployment method and device of distributed file system and electronic equipment
CN114615268B (en) Service network, monitoring node, container node and equipment based on Kubernetes cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant