CN114938375B - Container group updating device and container group updating method - Google Patents

Container group updating device and container group updating method

Info

Publication number: CN114938375B
Application number: CN202210528978.XA
Authority: CN (China)
Prior art keywords: container group, node, group, container, load balancing
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114938375A
Inventors: 杨彦存, 赵贝, 矫恒浩
Assignee: Juhaokan Technology Co Ltd
Application filed by Juhaokan Technology Co Ltd
Publication of application: CN114938375A
Publication of granted patent: CN114938375B

Classifications

    • H04L67/1004 — Network arrangements or protocols for supporting network services or applications; protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers; server selection for load balancing
    • H04L41/082 — Arrangements for maintenance, administration or management of data switching networks; configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
    • H04L67/1044 — Peer-to-peer [P2P] networks; group management mechanisms
    • H04L67/1074 — Peer-to-peer [P2P] networks for supporting data block transmission mechanisms


Abstract

The disclosure relates to a container group updating device and a container group updating method, and belongs to the field of Internet technologies. The container group updating device includes a controller configured to: create a first container group on a first node, where the first container group and a second container group already existing in a server cluster are container groups of different versions for a target service; after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, and the second container group is a container group running on a second node; obtain the traffic allocation state of the second container group from the load balancing information; and delete the second container group when the traffic allocation state indicates that traffic is no longer allocated to the second container group.

Description

Container group updating device and container group updating method
Technical Field
The present disclosure relates to the field of Internet technologies, and in particular, to a container group updating device and a container group updating method.
Background
The Kubernetes system, abbreviated as the K8s system, is an open-source container orchestration engine that supports automated deployment and large-scale scalable containerized application management, and is used to manage containerized applications on multiple hosts in a cloud platform. When a Deployment rolling upgrade is performed in the K8s system, a new container group (Pod) is created first and the old Pod is then deleted, and two asynchronous operations exist in the process of deleting the old Pod: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod itself. Because the two operations are performed asynchronously, their order cannot be controlled, so the old Pod may be deleted before its route forwarding information has been deleted from every node. When this occurs, network traffic is still forwarded to the already deleted old Pod, causing a connection timeout problem.
Disclosure of Invention
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides a container group updating device and a container group updating method, which can ensure that an old Pod is deleted only after the route forwarding information of the old Pod has been deleted.
In order to achieve the above object, the technical solution provided by the embodiments of the present disclosure is as follows:
In a first aspect, there is provided a container group updating device, comprising:
a controller configured to: create a first container group on a first node of a server cluster, where the first container group and a second container group already existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two nodes, identical or different, in the server cluster;
obtain the traffic allocation state of the second container group from the load balancing information;
and delete the second container group when the traffic allocation state indicates that traffic is no longer allocated to the second container group.
As an optional implementation of the embodiments of the present disclosure, the load balancing information includes traffic allocation information corresponding to the IP addresses of the first container group and of the second container group respectively;
the controller is specifically configured to:
when the first container group is created on the first node, allocate a first IP address to the first container group as the route forwarding information of the first container group, and store the route forwarding information of the first container group on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
after the first container group is created, generate container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster, and obtain the traffic allocation state corresponding to the second IP address from the load balancing information.
As an optional implementation of the embodiments of the present disclosure, the controller is specifically configured to:
obtain the load balancing information of the group to which the second container group belongs when an indication message for deleting the second container group is received.
As an optional implementation of the embodiments of the present disclosure, the second node is a working node in the server cluster, and the container group updating device further comprises:
a communicator configured to: implement information interaction between the control node and the working node;
the controller is specifically configured to: transmit the indication message from the control node to the second node through the communicator; and control the second node to obtain, in response to the indication message, the load balancing information of the group to which the second container group belongs.
As an optional implementation of the embodiments of the present disclosure, the second node is a control node in the server cluster, and the container group updating device further comprises:
a user input interface configured to: receive the indication message input by a user;
the controller is specifically configured to obtain, in response to the indication message, the load balancing information of the group to which the second container group belongs.
As an optional implementation of the embodiments of the present disclosure, the controller is specifically configured to:
when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, obtain the load balancing information of the group to which the second container group belongs again, and obtain the traffic allocation state of the second container group based on the load balancing information.
As an optional implementation of the embodiments of the present disclosure, the controller is specifically configured to:
when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, wait for a first duration, obtain the load balancing information of the group to which the second container group belongs again after the first duration, and obtain the traffic allocation state of the second container group based on the load balancing information.
As an optional implementation of the embodiments of the present disclosure, the controller is specifically configured to:
obtain the load balancing information of the group to which the second container group belongs multiple times in succession;
determine a plurality of traffic allocation states of the second container group according to the load balancing information obtained multiple times in succession;
and delete the second container group when the plurality of traffic allocation states of the second container group all indicate that traffic is no longer allocated to the second container group.
In a second aspect, there is provided a container group updating method, comprising:
creating a first container group on a first node of a server cluster, where the first container group and a second container group already existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generating container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtaining load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two nodes, identical or different, in the server cluster;
obtaining the traffic allocation state of the second container group from the load balancing information;
and deleting the second container group when the traffic allocation state indicates that traffic is no longer allocated to the second container group.
As an optional implementation of the embodiments of the present disclosure, the load balancing information includes traffic allocation information corresponding to the IP addresses of the first container group and of the second container group respectively;
the method further comprises:
when the first container group is created on the first node, allocating a first IP address to the first container group as the route forwarding information of the first container group, and storing the route forwarding information of the first container group on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address;
the generating container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster includes:
generating container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster;
the obtaining the traffic allocation state of the second container group from the load balancing information includes:
obtaining the traffic allocation state corresponding to the second IP address from the load balancing information.
In a third aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the container group updating method according to the second aspect.
In a fourth aspect, the present disclosure provides a computer program product comprising a computer program which, when run on a computer, causes the computer to implement the container group updating method according to the second aspect.
The embodiments of the present disclosure provide a container group updating device and a container group updating method. The container group updating device includes a controller configured to: create a first container group on a first node of the server cluster, where the first container group and a second container group already existing in the server cluster are container groups of different versions for a target service; after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, the second container group is a container group running on the second node, and the first node and the second node are any two nodes, identical or different, in the server cluster; obtain the traffic allocation state of the second container group from the load balancing information; and delete the second container group when the traffic allocation state indicates that traffic is no longer allocated to the second container group. According to this solution, in a container group updating scenario, a first container group is created to replace the second container group. When deletion of the route forwarding information of the second container group from each node in the server cluster is triggered, the load balancing information of the group in which the second container group is located can be obtained and the traffic allocation state of the second container group determined from it, so the situation in which traffic is still allocated to the second container group can be judged. When the traffic allocation state indicates that traffic is no longer allocated to the second container group, the route forwarding information of the second container group has already been deleted from each node; deleting the second container group at that point means that network traffic is no longer forwarded to the deleted old container group and the connection timeout problem no longer arises, thereby improving performance of the Deployment rolling upgrade process in the K8s system and achieving a rolling upgrade that is imperceptible to users.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is an exemplary diagram of container groups running in a node provided by an embodiment of the present disclosure;
FIG. 2A is a schematic diagram of a K8s cluster architecture according to an embodiment of the disclosure;
FIG. 2B is a schematic diagram of a relationship between a control node and a working node in a K8s cluster according to an embodiment of the disclosure;
FIG. 2C is a schematic flow chart of creating a new Pod in the K8s system shown in FIG. 2A during a Deployment rolling upgrade according to an embodiment of the present disclosure;
FIG. 2D is a schematic flow chart of deleting an old Pod in the K8s system shown in FIG. 2A during a Deployment rolling upgrade according to an embodiment of the present disclosure;
Fig. 3 is a block diagram of the hardware configuration of a container group updating device 300 provided in an embodiment of the present disclosure;
fig. 4 is a flow chart of a method for updating a container group according to an embodiment of the disclosure;
fig. 5 is a schematic flow chart of deleting a second Pod according to an embodiment of the disclosure;
fig. 6 is a schematic flow chart of another process for deleting a second Pod according to an embodiment of the disclosure;
FIG. 7 is a flow chart of a method for deleting a second Pod by setting a waiting time provided in an embodiment of the disclosure;
FIG. 8 is a timing diagram of two asynchronous operations according to an embodiment of the present disclosure;
FIG. 9 is a timing diagram illustrating the execution of two asynchronous operations according to another embodiment of the present disclosure;
fig. 10 is a flowchart of a method for deleting a second Pod through load balancing information according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
For a better understanding of the solutions provided in the embodiments of the present disclosure, the following description is made with respect to some related technologies related to the embodiments of the present disclosure:
container group (Pod): is the smallest deployable unit in the K8s system. The container group represents an independent running instance in the K8s system, which may consist of a single container or several containers coupled together. The K8s system can comprise a plurality of nodes, and each node can run one container group or a plurality of container groups. Illustratively, as shown in fig. 1, an exemplary diagram of a container group running in one node provided in an embodiment of the present disclosure, two container groups, namely, a container group a and a container group B running in the node in fig. 1, where the container group a includes a container 1 and a container 2, and the container group B includes a container 3 and a container 4.
K8s system: an open-source container orchestration engine that supports automated deployment and large-scale scalable containerized application management. The K8s system is used to manage containerized applications on multiple hosts in a cloud platform and can be applied to a server cluster; the server cluster can include multiple servers, and a server cluster to which the K8s system is applied is also called a K8s cluster, where each server can serve as a node.
As shown in FIG. 2A, which is an architecture diagram of a K8s cluster according to an embodiment of the disclosure, the K8s cluster includes a control node (Master Node), a plurality of working nodes (Worker Node), and an Overlay Network.
In the network field, an Overlay Network refers to a virtualization technique superimposed on an existing network architecture. Its general framework carries application load over a network without large-scale modification of the underlying network, can be decoupled from other network services, and is mainly based on the Internet Protocol (IP) as the underlying network technology. An Overlay Network is a virtual network built on top of an existing physical network, and upper-layer applications only relate to the virtual network. An Overlay Network mainly consists of three parts:
edge devices: devices directly connected to the virtual machines;
control plane: mainly responsible for establishing and maintaining virtual tunnels and advertising host reachability information;
forwarding plane: the physical network carrying Overlay messages.
As shown in fig. 2A, the control node (Master node) includes:
database (etcd): etcd is a distributed Key-value-to-store (KV) database, commonly referred to as KV database, used to store related data in clusters.
Application program interface (Application Program Interface, API) service (API Server): APIServer is a unified portal to the cluster, operating in a software architecture style (Representational State Transfer, RESTful) while handed to etcd storage (which is the only component that has access to etcd). The API Server may provide authentication, authorization, access control, API registration, discovery, etc. mechanisms. API Server may be accessed through a command line tool (kubectl), a visualization panel (dashboard), or a software development kit (Software Development Kit, SDK), etc.
Node scheduling (Scheduler): schedulers are used to select node application deployments.
And the controller manager is used for processing the conventional background tasks in the cluster, one resource corresponds to one controller, and simultaneously monitors the state of the cluster to ensure that the actual state and the final state are consistent.
As shown in fig. 2A, each working Node (Node) includes:
node management module (kubelet): kubelet is equivalent to Master Node sending to Node representative, managing local container, reporting data to API Server.
Container run (Container Runtime): container Runtime is a container runtime environment in which Container Runtime multiple Pod, K8s clusters can run, supporting multiple container runtime environments and any software implementing a container runtime environment interface (Container Runtime Interface, CRI).
An implementation Service (Service) abstraction component (kube-proxy): kube-proxy is used to implement k8s cluster communication and load balancing.
When a Deployment rolling upgrade is performed in the K8s system, a new container group (Pod) is created first and the old Pod is then deleted, and two asynchronous operations exist in the process of deleting the old Pod: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod itself. Because the two operations are performed asynchronously, their order cannot be controlled, so the old Pod may be deleted before its route forwarding information has been deleted; when this occurs, network traffic is still forwarded to the already deleted old Pod, causing a connection timeout problem.
As can be seen from FIG. 2A above, the nodes of the K8s cluster have two roles, namely control nodes and working nodes. The relationship between the control node and the working nodes in the K8s cluster is shown in FIG. 2B, where the control node is responsible for management and control of the whole cluster, including management and control of a plurality of working nodes. It should be noted that FIG. 2B illustrates the case where the control node manages 3 working nodes, namely working node 1, working node 2, and working node 3; in practice, the K8s cluster may include more or fewer working nodes, and the embodiments of the present disclosure are not limited in this respect.
In the embodiment of the present disclosure, the container group update device may be a server cluster; the container group updating device may be one node or a plurality of nodes in the server cluster, for example, the container group updating device may be a device formed by the first node or the second node; the container group update device may also be a device for managing the server cluster independent of the server cluster. For example, the server cluster may be a K8s cluster as shown in fig. 2A, the first node and the second node may be the same node in the server cluster, or the first node and the second node may be different two nodes in the server cluster, for example, the first node and/or the second node may be any one of the control node and the plurality of working nodes shown in fig. 2A.
Each node (host) in the server cluster typically runs multiple Pods (comparable to multiple virtual machines). To distinguish the Pods, an IP address needs to be allocated to each container group on each node so that every Pod in the server cluster can be distinguished. When a client wants to access the service corresponding to a Pod, the working node that the client first reaches may not be the node running that Pod; the accessed working node therefore needs to know the route forwarding information corresponding to the IP address of the Pod, so that the traffic corresponding to the client's access request can be forwarded to the Pod according to the route forwarding information. Accordingly, the route forwarding information corresponding to the IP address allocated to the Pod is stored on each node in the server cluster, so that when any node receives an access request for the service corresponding to a Pod on another node, it can find the Pod corresponding to the service by querying the route forwarding table and forward the traffic to that Pod. To implement such a scheme, a consistent route forwarding table (iptables) needs to be stored on each node in the server cluster; the route forwarding table records the route forwarding information of the different Pods in the cluster, and the traffic forwarding rules can be obtained from it.
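As an illustration of the per-Pod IP information that every node's route forwarding table has to reflect, the sketch below (an assumption, not an implementation from this disclosure) collects the IP address and hosting node of each Pod through the API Server using the kubernetes Python client.

```python
# Illustrative sketch: collect, through the API Server, the IP address and the
# hosting node of every Pod -- the same information that each node's route
# forwarding table (iptables) must reflect so that traffic reaching any node
# can be forwarded to a Pod running on another node.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

forwarding_view = {}  # pod name -> (node it runs on, pod IP)
for pod in v1.list_pod_for_all_namespaces().items:
    forwarding_view[pod.metadata.name] = (pod.spec.node_name, pod.status.pod_ip)

for name, (node, ip) in forwarding_view.items():
    print(f"{name}: runs on {node}, reachable at {ip}")
```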
When a Deployment rolling upgrade is performed, a new Pod is created and an old Pod is deleted.
As shown in FIG. 2C, the process of creating a new Pod in the K8s system shown in FIG. 2A during a Deployment rolling upgrade includes the following steps:
21. The API service (API Server) receives a request to add a new Pod triggered by an administrator, who may be an administrator of the server cluster.
22. The API Server stores the resource content corresponding to the new Pod into the database (etcd).
23. etcd adds the new Pod to the scheduler's queue.
24. Node scheduling (Scheduler) determines a suitable working node for the new Pod.
25. The node management module (kubelet) on the determined working node is notified that a new Pod needs to be scheduled onto that working node, and the node management module on the working node creates the new Pod.
26. The Container Network Interface (CNI) assigns an IP address to the new Pod.
27. The Container Storage Interface (CSI) allocates storage space for the new Pod.
28. The Container Runtime Interface (CRI) creates the new Pod.
29. After creation of the new Pod is completed, the node management module stores the IP address allocated to the new Pod in etcd.
30. The Pod change information in etcd is monitored.
31. The Service abstraction component (kube-proxy) in each node updates the iptables on that node. Updating the iptables on each node may mean adding the route forwarding information corresponding to the IP address of the new Pod.
It can be seen that in the process of creating a new Pod, the route forwarding information corresponding to the IP address of the new Pod needs to be added in each node, so that the route forwarding information of the Pod is stored in each node.
Accordingly, after creation of the new Pod is completed, the old Pod needs to be deleted. As shown in FIG. 2D, the process of deleting the old Pod in the K8s system shown in FIG. 2A during a Deployment rolling upgrade includes the following steps:
211. The API service (API Server) receives a request to delete the old Pod triggered by an administrator, who may be an administrator of the server cluster.
221. The API Server changes the state of the old Pod in the database (etcd) to the termination state (Terminating).
231. The API Server adds the old Pod to the scheduler's queue.
241. Node scheduling (Scheduler) determines the working node on which the old Pod is running.
251. The node management module (kubelet) on the determined working node is notified that the old Pod needs to be deleted; the node management module on the working node then triggers deletion of the old Pod, triggering asynchronous operations in the CNI, CSI and CRI.
261. The CNI deletes the route forwarding information corresponding to the IP address of the old Pod.
271. The CSI deletes the stored data of the old Pod.
281. The CRI deletes the old Pod.
When step 261 is executed, the route forwarding information corresponding to the IP address of the old Pod needs to be deleted on each node in the server cluster; the specific deletion operation may be implemented by the Service abstraction component (kube-proxy) in each node, which deletes the route forwarding information corresponding to the IP address of the old Pod from the iptables stored on that node.
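For reference, the administrator request of step 211 can be issued with the kubernetes Python client as sketched below; the Pod name, namespace and grace period are assumptions, after which the API Server drives the asynchronous CNI/CSI/CRI operations described above.

```python
# Illustrative sketch of the administrator request in step 211: asking the API
# Server to delete the old Pod. The Pod name, namespace and grace period are
# assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.delete_namespaced_pod(
    name="old-pod",
    namespace="default",
    grace_period_seconds=30,  # time allowed for graceful termination
)
```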
The route forwarding information of the old Pod is stored on each node; for the storing process, reference may be made to the description of the process of creating a new Pod. In deleting the old Pod, there are two asynchronous operations: one deletes the route forwarding information of the old Pod, and the other deletes the old Pod from the node on which it is located. Since every node holds the route forwarding information of the old Pod, deleting that information requires kube-proxy to delete the entry corresponding to the IP address of the old Pod from the iptables of each node, and such a process may take a period of time. If the asynchronous operation of deleting the old Pod from its node has already been performed within this period, then on nodes where the route forwarding information corresponding to the IP address of the old Pod has not yet been deleted, traffic may continue to be forwarded to the old Pod according to that route forwarding information, which causes a connection timeout problem.
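The following small simulation (a conceptual sketch; node count and timings are invented) models the two asynchronous operations with Python's asyncio and prints how many nodes still hold the old Pod's route entry at the moment the Pod disappears, which is exactly the window in which the connection timeout problem arises.

```python
# Conceptual simulation of the two asynchronous operations. If the old Pod is
# removed before every node has dropped its route entry, the remaining entries
# point at a Pod that no longer exists -- the connection timeout situation
# described above.
import asyncio
import random

OLD_POD_IP = "10.0.0.9"
iptables = {f"node-{i}": {OLD_POD_IP} for i in range(5)}  # per-node route entries

async def delete_route_forwarding_info():
    # kube-proxy on each node removes the old Pod's entry at its own pace
    for rules in iptables.values():
        await asyncio.sleep(random.uniform(0.1, 0.8))
        rules.discard(OLD_POD_IP)

async def delete_old_pod():
    await asyncio.sleep(0.3)  # the CRI may finish before every node is updated
    stale = [n for n, rules in iptables.items() if OLD_POD_IP in rules]
    print(f"old Pod deleted while {len(stale)} node(s) still forward traffic to it: {stale}")

async def main():
    await asyncio.gather(delete_route_forwarding_info(), delete_old_pod())

asyncio.run(main())
```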
In order to solve the above problems, the embodiments of the present disclosure provide a container group updating device and a container group updating method. In a container group updating scenario, a first container group (new Pod) for replacing a second container group (old Pod) running on a second node is created on a first node. When deletion of the route forwarding information of the second container group from each node in the server cluster is triggered, the load balancing information of the group in which the second container group is located can be obtained and the traffic allocation state of the second container group determined from it, so the situation in which traffic is allocated to the second container group can be judged. When the traffic allocation state indicates that traffic is no longer allocated to the second container group, the route forwarding information of the second container group has already been deleted from each node; deleting the second container group at that point means that network traffic will no longer be forwarded to the deleted old second container group and the connection timeout problem will not exist, thereby improving performance of the Deployment rolling upgrade process in the K8s system and achieving a rolling upgrade that is imperceptible to users.
As shown in fig. 3, which is a block diagram of the hardware configuration of a container group updating device 300 according to an embodiment of the present disclosure, the container group updating device 300 includes: a communicator 310, a controller 320, a memory 330, a user input interface 340, a power supply 350, and the like.
The communicator 310 is a component for communicating with external devices or servers according to various types of communication protocols. For example, the communicator 310 may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another chip implementing a network communication protocol or a near field communication protocol, and an infrared receiver.
The controller 320 includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), first to n-th interfaces for input/output, a communication bus (Bus), and the like.
The memory 330 includes non-volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, and the like, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable storage media. A storage medium may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
A user input interface 340 for receiving user input commands. For a container group update device, a user may refer to a manager of a server cluster.
In some embodiments, the controller 320 is configured to: create a first container group on a first node of a server cluster, where the first container group and a second container group already existing in the server cluster are container groups of different versions for a target service;
after the first container group is created, generate container group change information to trigger deletion of the route forwarding information of the second container group from each node in the server cluster, and obtain load balancing information of the group to which the second container group belongs, where the group includes the first container group and the second container group, the second container group is a container group running on a second node, and the first node and the second node are any two nodes, identical or different, in the server cluster;
obtain the traffic allocation state of the second container group from the load balancing information;
and delete the second container group when the traffic allocation state indicates that traffic is no longer allocated to the second container group.
In some embodiments, the load balancing information includes traffic allocation information corresponding to the IP addresses of the first container group and of the second container group respectively;
the controller 320 is specifically configured to: when the first container group is created on the first node, allocate a first IP address to the first container group as the route forwarding information of the first container group, and store the route forwarding information of the first container group on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address; after the first container group is created, generate container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster, and obtain the traffic allocation state corresponding to the second IP address from the load balancing information.
In some embodiments, the controller 320 is specifically configured to obtain the load balancing information of the group to which the second container group belongs when receiving the indication message for deleting the second container group.
In some embodiments, the second node is a working node in the server cluster, and the container group updating device 300 further includes:
a communicator 310 configured to implement information interaction between the control node and the working node;
the controller 320 is specifically configured to: transmit the indication message from the control node to the second node through the communicator 310; and control the second node to obtain, in response to the indication message, the load balancing information of the group to which the second container group belongs.
In some embodiments, the second node is a control node in the server cluster, and the container group updating device further includes:
a user input interface 340 configured to receive the indication message input by a user;
the controller 320 is specifically configured to obtain, in response to the indication message, the load balancing information of the group to which the second container group belongs.
In some embodiments, the controller 320 is specifically configured to:
when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, obtain the load balancing information of the group to which the second container group belongs again, so that the traffic allocation state of the second container group is obtained based on the load balancing information.
In some embodiments, the controller 320 is specifically configured to:
when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, wait for a first duration, obtain the load balancing information of the group to which the second container group belongs again after the first duration, and obtain the traffic allocation state of the second container group based on the load balancing information.
In some embodiments, the controller 320 is specifically configured to:
obtain the load balancing information of the group to which the second container group belongs multiple times in succession;
determine a plurality of traffic allocation states of the second container group according to the load balancing information obtained multiple times in succession;
and delete the second container group when the plurality of traffic allocation states of the second container group all indicate that traffic is no longer allocated to the second container group.
In order to describe the present solution in more detail, the following description is given by way of example with reference to the accompanying drawings. It will be understood that, when actually implemented, the flows shown in the following figures may include more or fewer steps, and the order between the steps may also differ, as long as the container group updating method provided in the embodiments of the present disclosure can be implemented.
The method for updating the container group provided by the embodiment of the disclosure may be implemented by a container group updating device, or a part of functional modules or a part of functional entities on the container group updating device.
As shown in fig. 4, a flowchart of a method for updating a container group according to an embodiment of the disclosure may include the following steps 401 to 406:
401. A first container group is created on a first node of a server cluster.
The server cluster includes, but is not limited to, the first node and the second node. An existing second container group runs on the second node. The first node and the second node may be physical machines in the server cluster, or may be virtual machines running on physical machines.
The first container group (hereinafter also referred to as a first Pod) and the second container group (hereinafter also referred to as a second Pod) are different versions of container groups for a target service.
In some embodiments, when the first container group is created on the first node, a first IP address is allocated to the first container group as route forwarding information of the first container group, and route forwarding information of the first container group is stored on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address.
It should be noted that, the description of creating the first Pod on the first node of the server cluster may refer to the description of creating the new Pod in fig. 2C, which is not described herein.
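A minimal sketch of step 401 is given below, under the assumption that the kubernetes Python client is used and that the Pod name, labels, image and namespace are hypothetical: the first container group (new version) is created, and the code then waits until the cluster has allocated it an IP address, i.e., until its route forwarding information exists.

```python
# Sketch of step 401 under assumptions: create the first container group (new
# version) and wait until it has been allocated an IP address.
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

first_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="target-service-v2",
        labels={"app": "target-service", "version": "v2"},
    ),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="app", image="registry.example.com/target-service:v2"),
    ]),
)
v1.create_namespaced_pod(namespace="default", body=first_pod)

first_ip = None
while first_ip is None:  # poll until the first IP address has been allocated
    time.sleep(1)
    first_ip = v1.read_namespaced_pod(name="target-service-v2",
                                      namespace="default").status.pod_ip
print("first container group reachable at", first_ip)
```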
402. After the first container group is created, container group change information is generated to trigger the deletion of the routing forwarding information for the second container group from each node in the server cluster.
In some embodiments, after the first container group is created, container group change information is generated to trigger deleting the second IP address of the second container group from each node in the server cluster, and a traffic allocation state corresponding to the second IP address is obtained from the load balancing information.
After the first Pod is created on the first node, the kubelet on the first node stores the route forwarding information of the first Pod into etcd, and etcd generates container group change information. After the container group change information for creating the first Pod is monitored, deletion of the route forwarding information of the second Pod from each node in the server cluster is triggered. Specifically, the kube-proxy of each node may delete the route forwarding information corresponding to the IP address of the second Pod (the old Pod) from the iptables of that node.
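The monitoring of container group change information can be pictured with the following sketch (an assumption, not code from this disclosure), which watches Pod events through the API Server; this is the kind of event stream from which a node-local agent such as kube-proxy learns that route forwarding entries must be added or removed.

```python
# Sketch: observe container group change information by watching Pod events
# through the API Server. Namespace and label selector are assumptions.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      label_selector="app=target-service", timeout_seconds=60):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.pod_ip)
    if event["type"] == "DELETED":
        # a node-local agent would now remove this Pod's entry from its iptables
        pass
```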
403. Load balancing information of the group to which the second container group belongs is obtained.
The group includes the first container group and the second container group. The load balancing information includes traffic allocation information corresponding to the IP addresses of the first container group and of the second container group respectively.
In some embodiments, the group may further include other container groups besides the first container group and the second container group, and load balancing may be achieved by scheduling traffic allocation situations of different container groups in the group.
The second container group is a container group running on a second node, and the first node and the second node are any two identical or different nodes in the server cluster.
The load balancing information may be obtained as follows: the kube-proxy on the second node first obtains the route forwarding information of each Pod from the iptables on the second node, then determines the route forwarding information of the Pods corresponding to the same service according to the route forwarding information of each Pod, and finally performs load balancing of traffic allocation based on the route forwarding information of the Pods corresponding to the same service, so as to obtain the load balancing information.
In some embodiments, after 402, the load balancing information of the group to which the second container group belongs may be obtained when an indication message for deleting the second container group is received.
The load balancing information includes traffic allocation information corresponding to the IP address of each container group in the group. In some embodiments, obtaining the traffic allocation state of the second container group may be: obtaining the traffic allocation state corresponding to the second IP address from the load balancing information, where the second IP address is the IP address of the second container group.
Illustratively, Table 1 is a schematic table of the container groups in the group, their IP addresses, and the corresponding traffic allocation states in the load balancing information. As shown in Table 1, each of the container groups corresponds to an IP address, and each IP address corresponds to a traffic allocation state (referred to as a target state).
TABLE 1
Pod     IP address      Target state
Pod1    IP address 1    Traffic-ended state
Pod2    IP address 2    Non-traffic-ended state
Pod3    IP address 3    Non-traffic-ended state
Pod4    IP address 4    Non-traffic-ended state
As shown in Table 1, the group includes 4 Pods. The traffic allocation state corresponding to Pod1 is the traffic-ended state, meaning that no network traffic flows into Pod1 any more, which indicates that the route forwarding information of Pod1 has been deleted from the iptables of each node in the server cluster. Correspondingly, the traffic allocation states corresponding to Pod2, Pod3 and Pod4 are non-traffic-ended states, meaning that network traffic still flows into Pod2, Pod3 and Pod4, which indicates that the route forwarding information of Pod2, Pod3 and Pod4 in the iptables of the nodes in the server cluster has not been deleted, and load balancing can still be performed among Pod2, Pod3 and Pod4.
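A minimal in-memory representation of the load balancing information of Table 1, together with a helper that reads the target state for a given container group IP, is sketched below; the data structure and names are assumptions used only to make the lookup of step 403 concrete.

```python
# Sketch mirroring Table 1: a minimal in-memory form of the load balancing
# information of the group, plus a helper that returns the target state for a
# given container group IP.
from dataclasses import dataclass

TRAFFIC_ENDED = "traffic-ended"          # no traffic is allocated to this Pod any more
NON_TRAFFIC_ENDED = "non-traffic-ended"  # traffic is still allocated to this Pod

@dataclass
class PodTrafficEntry:
    pod_name: str
    ip_address: str
    target_state: str

load_balancing_info = [
    PodTrafficEntry("Pod1", "ip-address-1", TRAFFIC_ENDED),
    PodTrafficEntry("Pod2", "ip-address-2", NON_TRAFFIC_ENDED),
    PodTrafficEntry("Pod3", "ip-address-3", NON_TRAFFIC_ENDED),
    PodTrafficEntry("Pod4", "ip-address-4", NON_TRAFFIC_ENDED),
]

def traffic_allocation_state(info, pod_ip):
    """Return the traffic allocation state recorded for the given Pod IP."""
    for entry in info:
        if entry.ip_address == pod_ip:
            return entry.target_state
    raise KeyError(pod_ip)

assert traffic_allocation_state(load_balancing_info, "ip-address-1") == TRAFFIC_ENDED
```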
The method for updating the container group provided in the embodiment of the disclosure may be applied to a working node or a control node in a K8s cluster, that is, the second Pod may be a Pod running on the working node, and the second Pod may also be a Pod running on the control node.
Illustratively, in conjunction with the architecture of the K8s cluster shown in FIG. 2A, FIG. 5 is a flowchart of deleting the second Pod provided in an embodiment of the disclosure. In FIG. 5, when the second Pod is deleted during a service rolling update in the K8s cluster, the API service (API Server) may first receive an indication message (which may be an instruction) triggered by an administrator to delete the second Pod (i.e., the second container group). After receiving the indication message for deleting the second Pod, the API Server synchronously changes the state of the second Pod in etcd to the termination state (Terminating), the second Pod is added to the scheduler's queue by node scheduling (Scheduler), and the API Server may send the indication message for deleting the second Pod to the node management module (kubelet) of the node where the second Pod is located. After the kubelet receives the indication message for deleting the second Pod, it executes asynchronous operations, which may include an operation of deleting the second Pod by the Container Runtime Interface (CRI), an operation of deleting the route forwarding information of the second Pod by the Container Network Interface (CNI), and an operation of deleting the storage information of the second Pod by the Container Storage Interface (CSI).
For example, when the second node is the control node in the server cluster, that is, the second Pod is a Pod running on the control node, the control node can receive the indication message input by the user and, in response to the indication message, obtain the load balancing information of the group to which the second container group belongs. That is, the entire workflow shown in FIG. 5 may be completed in the control node. It should be noted that, in addition to the modules shown in FIG. 2A, the control node may further include the related modules of a working node, so as to implement the entire workflow shown in FIG. 5.
When the second node is a working node in the server cluster, that is, the second Pod is a Pod running on a working node, then when the second container group is deleted, the working node can receive the indication message sent by the control node and, in response to the indication message, obtain the load balancing information of the group to which the second container group belongs. That is, the entire workflow shown in FIG. 5 needs to be completed by the control node and the working node together.
As shown in FIG. 6, which is another flowchart of deleting the second Pod provided in an embodiment of the present disclosure, when the second Pod is deleted during a service rolling update in the K8s cluster, the API service (API Server) in the control node may first receive an indication message (which may be an instruction) triggered by an administrator to delete the second Pod (i.e., the second container group). After the indication message for deleting the second Pod is received, the state of the second Pod in the etcd of the control node is synchronously changed to the termination state (Terminating), the second Pod is added to the scheduler's queue by node scheduling in the control node, and the API Server may send the indication message for deleting the second Pod to the node management module (kubelet) of the working node where the second Pod is located. After the kubelet receives the indication message for deleting the second Pod, asynchronous operations may be performed, which may include an operation of deleting the second Pod by the Container Runtime Interface (CRI), an operation of deleting the route forwarding information of the second Pod by the Container Network Interface (CNI), and an operation of deleting the storage information of the second Pod by the Container Storage Interface (CSI).
404. The traffic allocation state of the second container group is determined according to the load balancing information.
Illustratively, the traffic allocation state of the second container group may be determined from the load balancing information as shown in Table 1.
405. It is judged whether the traffic allocation state of the second container group indicates that traffic is no longer allocated to the second container group.
When it is determined that the traffic allocation state of the second container group indicates that traffic is no longer allocated to the second container group, for example when the traffic allocation state is the traffic-ended state, the following step 406 of deleting the second container group is performed; when it is determined that the traffic allocation state of the second container group indicates that traffic is still allocated to the second container group, the process returns to step 403 to obtain the load balancing information of the group to which the second container group belongs again.
In some embodiments, when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, the load balancing information of the group to which the second container group belongs is obtained again, so that the traffic allocation state of the second container group is obtained based on the load balancing information.
In some embodiments, when the traffic allocation state of the second container group obtained last time indicates that traffic is still allocated to the second container group, a first duration is waited; after the first duration, the load balancing information of the group to which the second container group belongs is obtained again, and the traffic allocation state of the second container group is obtained based on the load balancing information.
For example, the interval for obtaining the load balancing information may be preset as the first duration, that is, the load balancing information of the group to which the second container group belongs may be obtained once every first duration.
In the above embodiments, when the obtained traffic allocation state of the second container group indicates that traffic is still allocated to the second container group, some nodes in the server cluster may not yet have deleted the route forwarding information corresponding to the second container group. After waiting for the first duration, the load balancing information of the group to which the second container group belongs may be obtained again, the traffic allocation state of the second container group is obtained based on the load balancing information, and the judgment is performed again, until the traffic allocation state of the second container group indicates that traffic is no longer allocated to the second container group; it is then confirmed that all nodes in the server cluster have deleted the route forwarding information corresponding to the second container group, and at this time the second container group may be deleted.
406. The second container group is deleted.
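Putting steps 403 to 406 together, the following sketch (under stated assumptions, not an exact implementation of this disclosure) polls the load balancing information, checks the traffic allocation state of the second container group, waits the first duration between checks, and deletes the second container group only once traffic is no longer allocated to it. The helper get_load_balancing_info() is hypothetical and stands in for querying the kube-proxy/Endpoints state; the Pod name, namespace, IP and interval are assumptions.

```python
# Sketch of steps 403-406 under assumptions: poll the load balancing
# information, read the traffic allocation state of the second container
# group, and delete it only once no traffic is allocated to it any more.
import time
from kubernetes import client, config

FIRST_DURATION = 5  # seconds between two acquisitions of load balancing information

def get_load_balancing_info(group: str) -> dict:
    """Hypothetical helper: returns {pod_ip: True if traffic is still allocated}."""
    raise NotImplementedError

def delete_second_container_group(pod_name, namespace, pod_ip, group):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    while True:
        info = get_load_balancing_info(group)            # step 403
        if not info.get(pod_ip, False):                  # steps 404-405
            break                                        # traffic no longer allocated
        time.sleep(FIRST_DURATION)                       # wait, then check again
    v1.delete_namespaced_pod(name=pod_name, namespace=namespace)  # step 406

# Example call (values are assumptions):
# delete_second_container_group("target-service-v1", "default", "10.0.0.9", "target-service")
```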
From the flow of deleting the second Pod shown in FIG. 5 and FIG. 6, it can be seen that when the administrator's request to delete the second Pod reaches the API Server, the kubelet receives the notification to delete the second Pod and initiates two asynchronous actions: one responsible for deleting the route forwarding information of the second Pod, and one responsible for deleting the second Pod.
(1) Deleting the route forwarding information of the second Pod:
The Endpoint Controller listens for the event from the API Server and then removes the second Pod from the corresponding Endpoint. An Endpoint is used to connect a service and its Pods; it is a list of IP addresses and ports of one service.
Further, after the Endpoint Controller finishes processing, a request is sent to the API Server. Kube-proxy and the core domain name system (Core Domain Name System, CoreDNS) listen for this event: Kube-proxy updates the iptables rules on each node, and CoreDNS updates the domain name system (Domain Name System, DNS) records.
(2) The procedure of deleting the second Pod:
The Kubelet listens for the event from the API Server and begins sending a termination signal (SIGTERM) to the process (application) running in the second Pod.
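For illustration only, a generic Python sketch of how an application running in the Pod might handle that SIGTERM and shut down gracefully is shown below; the request-serving loop is a placeholder, not part of the claimed method.

```python
import signal
import sys
import time

shutting_down = False


def on_sigterm(signum, frame):
    """Kubelet sends SIGTERM before the container is killed; stop taking new work."""
    global shutting_down
    shutting_down = True


signal.signal(signal.SIGTERM, on_sigterm)

while not shutting_down:
    # placeholder: serve requests here
    time.sleep(0.1)

# drain in-flight work, close connections, then exit
sys.exit(0)
```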
The two workflows are asynchronous and have no dependency on each other; both start as soon as the deletion instruction is received. If the route forwarding information of the second Pod has not yet been deleted from all nodes in the server cluster when the second Pod is deleted, traffic intended for the second Pod is forwarded to the already-deleted Pod and the connection times out: users of the connected service are suddenly disconnected, new users cannot connect to the service, and the user experience is poor.
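For illustration only, one way to observe whether the route forwarding information of a Pod has been removed cluster-wide is to check whether its IP address is still listed in the Endpoints object of the service; the following sketch uses the official kubernetes Python client, and the service name, namespace, and Pod IP shown are hypothetical examples.

```python
from kubernetes import client, config


def pod_ip_still_in_endpoints(service_name, namespace, pod_ip):
    """Return True while the Pod IP is still listed in the service Endpoints,
    i.e. while Kube-proxy and CoreDNS may still route traffic to it."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a Pod
    endpoints = client.CoreV1Api().read_namespaced_endpoints(service_name, namespace)
    for subset in endpoints.subsets or []:
        for address in subset.addresses or []:
            if address.ip == pod_ip:
                return True
    return False


# Example call (all names are hypothetical):
# pod_ip_still_in_endpoints("target-service", "default", "10.244.1.23")
```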
In the related art, K8s provides two hooks in the life cycle of a Pod: one after the container starts (postStart) and one before the container terminates (preStop). To ensure that the route forwarding information of the second Pod is deleted before the second Pod itself is deleted, the related art provides an implementation that deletes the second Pod after a waiting time.
As shown in fig. 7, an exemplary flowchart of a method for deleting a second Pod by setting a waiting time according to an embodiment of the disclosure may include the following steps:
701. And receiving an indication message for deleting the second Pod.
After the node receives the indication message for deleting the second Pod, the following 702 and 703 are performed.
702. A pre-stop hook (PreStop hook) is executed to set a waiting time.
703. And deleting the second Pod after the waiting time.
The purpose of setting the waiting time is to reserve time for the route forwarding information of the second Pod to be deleted on each node in the server cluster. The waiting time may be set to 5 s to 10 s, for example.
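As an illustration of this related-art approach only, a container spec carrying such a fixed preStop wait might look as follows (written here as a Python dict equivalent to the YAML manifest; the container name, image, and the 10 s value are assumptions):

```python
# Fragment of a Pod/Deployment container spec with a fixed preStop wait.
container_spec = {
    "name": "target-service",               # hypothetical container name
    "image": "example/target-service:v2",   # hypothetical image
    "lifecycle": {
        "preStop": {
            # Reserve time (here 10 s) for every node in the cluster to delete
            # the Pod's route forwarding information before the Pod is deleted.
            "exec": {"command": ["/bin/sh", "-c", "sleep 10"]}
        }
    },
}
```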
Since the two operations of deleting the route forwarding information of the second Pod on each node in the server cluster and deleting the second Pod are processed asynchronously, when the above solution is used, it is necessary to ensure that the route forwarding information of the second Pod on all nodes in the server cluster is deleted before the second Pod is deleted.
As shown in fig. 8, in an execution timing diagram of the two asynchronous operations provided in an embodiment of the present disclosure, assume the instruction message for deleting the second Pod is received at time T1 and the waiting time before container termination is set to 10 s, so the operation of deleting the second Pod is executed at time T3. If the operation of deleting the route forwarding information of the second Pod on each node in the server cluster has already completed by time T2 shown in fig. 8, it is ensured that the route forwarding information of the second Pod on all nodes in the server cluster is deleted before the second Pod is deleted.
As another example, fig. 9 shows an execution timing diagram of the two asynchronous operations in a different case according to an embodiment of the disclosure. Assume the instruction message for deleting the second Pod is received at time T1, the preStop hook before container termination is executed and the waiting time is set to 10 s, so the operation of deleting the second Pod is executed at time T3. If the operation of deleting the route forwarding information of the second Pod only completes at time T4 shown in fig. 9, it cannot be guaranteed that the route forwarding information of the second Pod on all nodes in the server cluster is deleted before the second Pod is deleted.
In the related art, the above solution has the following drawbacks:
1. the waiting time has no fixed rule and can only be set based on experience;
2. every time the waiting time is adjusted, the whole online service needs to be updated, so the scope of impact is large;
3. as the number of servers in the K8s cluster increases, the waiting time needs to be lengthened, so the waiting time has to be adjusted continually;
4. even when waiting is unnecessary, the second Pod is still deleted only after the full 10-second waiting time elapses;
for example, as shown in fig. 8, the route forwarding information of the second Pod has already been deleted 2 s after time T1, but the second Pod is not deleted until time T3, 10 s later;
5. the rolling update time increases.
In the embodiment of the present disclosure, in order to achieve a better effect, the flow distribution state of the second Pod is confirmed through steps 401 to 405 by acquiring the load balancing information of the group to which the second Pod belongs.
For example, preStop hook may be performed as well before Pod stops, but instead of setting the wait time, enabling 1 sidecar container group (sidecar Pod) ensures the sequency of two asynchronous operations. As shown in fig. 10, a flowchart of a method for deleting a second Pod through load balancing information according to an embodiment of the present disclosure may include the following steps 1001 to 1005:
1001. And receiving an indication message for deleting the second Pod.
1002. Executing the PreStop hook and enabling the target script of the sidecar Pod.
Wherein the target script is configured to acquire, every few seconds, the traffic distribution state (target state) of the current IP from the load balancing information of the group (target group) to which the second Pod belongs (a sketch of one possible such script is given after step 1005).
Wherein each target group in the K8s cluster is used to route the request to one or more registered targets. When creating each listener rule, a target group and condition are specified. When the rule condition is satisfied, the traffic will be forwarded to the corresponding target group. Different target groups may be created for different types of requests.
1003. And executing the target script, and acquiring the flow distribution state of the current IP from the load balancing information of the group to which the second Pod belongs every few seconds.
1004. Judging whether the traffic distribution state indicates that no more traffic is distributed to the second container group.
If the traffic distribution state is the traffic end state, no more traffic is allocated to the second container group; execution of the target script is exited, and the following step 1005 is performed.
If the traffic distribution state is not the traffic end state, detection continues: the process returns to step 1003 and again acquires the traffic distribution state of the IP address of the second Pod from the load balancing information of the group to which the second Pod belongs.
When the traffic distribution state is the traffic end state, the route forwarding information of the second Pod has been deleted and no more traffic flows in.
1005. And deleting the second Pod.
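One possible shape of the target script of steps 1002 and 1003 is sketched below in Python. It assumes the group is an AWS-style load balancer target group queried through boto3's describe_target_health; the disclosure does not mandate this particular load balancer, so the API, the target group ARN environment variable, and the "unused" state used as the traffic end state are assumptions for illustration only.

```python
import os
import sys
import time

import boto3  # assumption: an AWS-style load balancer with target groups

TARGET_GROUP_ARN = os.environ["TARGET_GROUP_ARN"]   # hypothetical configuration
POD_IP = os.environ["POD_IP"]
POD_PORT = int(os.environ.get("POD_PORT", "80"))
CHECK_INTERVAL_S = 5  # "every few seconds"

elbv2 = boto3.client("elbv2")

while True:
    resp = elbv2.describe_target_health(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": POD_IP, "Port": POD_PORT}],
    )
    state = resp["TargetHealthDescriptions"][0]["TargetHealth"]["State"]
    # Once the target no longer receives traffic (the "unused" state here is an
    # assumed counterpart of the traffic end state checked in step 1004),
    # the script exits and the second Pod can be deleted (step 1005).
    if state == "unused":
        sys.exit(0)
    time.sleep(CHECK_INTERVAL_S)
```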
The beneficial effects of the foregoing embodiments of the disclosure may include:
1. no waiting time needs to be configured; the second Pod can be deleted as soon as the actual traffic forwarding of the load balancer shows that no more traffic is forwarded to the second Pod;
2. only one online configuration needs to be changed;
3. the scheme is not affected by the growth of the K8s cluster;
4. the rolling update time is short.
With the container group updating method described above, the load balancing information of the group to which the second container group belongs can be acquired, where the group includes a plurality of container groups and the second container group is one container group running on a node; the flow distribution state of the second container group is determined according to the load balancing information; and the second container group is deleted when its flow distribution state indicates that no more flow is allocated to it. In this scheme, the flow distribution state of the second container group can be determined from the load balancing information. When the flow distribution state indicates that no more flow is allocated to the second container group, it is known that all nodes in the server cluster have deleted the route forwarding information of the second container group, and only then is the old Pod deleted. This avoids forwarding network traffic to an already-deleted old Pod, eliminates the connection timeout problem, improves the performance of rolling upgrades in the K8s system, and achieves a rolling upgrade that is imperceptible to users.
In some embodiments, in the process of executing steps 1001 to 1005, the load balancing information of the group to which the second container group belongs may also be acquired several times in succession; a plurality of flow distribution states of the second container group are determined from the successively acquired load balancing information; and the second container group is deleted when all of these flow distribution states indicate that no more flow is allocated to the second container group.
In an actual implementation, a flow distribution state determined from load balancing information acquired only once may be misjudged. Therefore, the load balancing information of the group to which the second container group belongs is acquired several times in succession, a plurality of flow distribution states of the second container group are determined from the successively acquired load balancing information, and the second container group is deleted only when all of these flow distribution states are the flow end state. This improves the accuracy of the judgment.
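A short Python sketch of this repeated-confirmation idea, again with hypothetical helpers and an assumed number of required confirmations, could be:

```python
import time

REQUIRED_CONSECUTIVE = 3   # hypothetical number of consecutive "no traffic" readings
CHECK_INTERVAL_S = 5       # hypothetical polling interval


def confirm_drained_then_delete(traffic_still_allocated, delete_pod):
    """traffic_still_allocated() returns True while flow is still allocated to the
    second container group; delete it only after several consecutive readings
    report that no more flow is allocated."""
    consecutive_idle = 0
    while consecutive_idle < REQUIRED_CONSECUTIVE:
        if traffic_still_allocated():
            consecutive_idle = 0      # traffic seen again: reset the counter
        else:
            consecutive_idle += 1
        time.sleep(CHECK_INTERVAL_S)
    delete_pod()
```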
The embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the container group updating method described above and can achieve the same technical effects, which are not repeated here to avoid redundancy.
The computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
The present disclosure provides a computer program product comprising a computer program which, when run on a computer, causes the computer to implement the container group updating method described above.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the above discussion in some examples is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A container group updating apparatus, characterized by comprising: a controller configured to: creating a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are container groups with different versions aiming at target services, and the server cluster comprises a control node and a plurality of working nodes;
After the first container group is created, generating container group change information to trigger deleting route forwarding information of the second container group from each node in the server cluster, and acquiring load balancing information of a group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is one container group running on a second node, and the first node and the second node are any two identical or different nodes in the server cluster;
acquiring the flow distribution state of the second container group from the load balancing information;
and deleting the second container group when the flow distribution state indicates that the flow is no longer distributed to the second container group.
2. The container group updating apparatus according to claim 1, wherein the load balancing information includes traffic allocation information to which IP addresses of the first container group and the second container group respectively correspond;
the controller is specifically configured to:
when the first container group is created on the first node, a first IP address is allocated to the first container group as routing forwarding information of the first container group, and the routing forwarding information of the first container group is stored on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address; after the first container group is created, container group change information is generated to trigger deleting a second IP address of the second container group from each node in the server cluster, and a flow distribution state corresponding to the second IP address is obtained from the load balancing information.
3. The container group updating apparatus according to claim 1, wherein the controller is specifically configured to:
and under the condition that the indication message for deleting the second container group is received, acquiring the load balancing information of the group to which the second container group belongs.
4. A container group update apparatus according to claim 3, wherein the second node is the working node in the server cluster, the container group update apparatus further comprising:
a communicator configured to: information interaction between the control node and the working node is realized;
the controller is specifically configured to: transmitting the indication message from the control node to the second node by the communicator; and controlling the second node to respond to the indication message and acquire the load balancing information of the group to which the second container group belongs.
5. A container group update apparatus according to claim 3, wherein the second node is the control node in the server cluster, the container group update apparatus further comprising:
a user input interface configured to: receiving the indication message input by a user;
The controller is specifically configured to respond to the indication message and obtain load balancing information of the group to which the second container group belongs.
6. The container group updating apparatus according to claim 1, wherein the controller is specifically configured to:
and when the flow distribution state of the second container group obtained last time represents that the flow is still distributed to the second container group, obtaining the load balancing information of the group to which the second container group belongs again, so that the flow distribution state of the second container group is obtained based on the load balancing information.
7. The container group updating apparatus according to claim 6, wherein the controller is specifically configured to:
and waiting for a first time period when the flow is still distributed to the second container group according to the flow distribution state representation of the second container group acquired last time, acquiring load balancing information of the group to which the second container group belongs again after the first time period, and acquiring the flow distribution state of the second container group based on the load balancing information.
8. The container group updating apparatus according to claim 1, wherein the controller is specifically configured to:
Continuously obtaining the load balancing information of the group to which the second container group belongs for multiple times;
determining flow distribution states of a plurality of second container groups according to load balancing information acquired continuously and repeatedly;
and deleting the second container group when the flow distribution states of the plurality of the second container groups all indicate that the flow is no longer distributed to the second container group.
9. A method of updating a set of containers, comprising:
creating a first container group on a first node of a server cluster, wherein the first container group and a second container group existing in the server cluster are container groups with different versions aiming at target services, and the server cluster comprises a control node and a plurality of working nodes;
after the first container group is created, generating container group change information to trigger deleting route forwarding information of the second container group from each node in the server cluster, and acquiring load balancing information of a group to which the second container group belongs, wherein the group comprises the first container group and the second container group, the second container group is one container group running on a second node, and the first node and the second node are any two identical or different nodes in the server cluster;
Acquiring the flow distribution state of the second container group from the load balancing information;
and deleting the second container group when the flow distribution state indicates that the flow is no longer distributed to the second container group.
10. The method of claim 9, wherein the load balancing information includes traffic allocation information corresponding to IP addresses of the first container group and the second container group, respectively;
the method further comprises the steps of:
when the first container group is created on the first node, a first IP address is allocated to the first container group as routing forwarding information of the first container group, and the routing forwarding information of the first container group is stored on each node in the server cluster, so that traffic corresponding to the target service is forwarded to the first container group according to the first IP address; the generating container group change information to trigger deleting the routing forwarding information of the second container group from each node in the server cluster includes:
generating container group change information to trigger deletion of a second IP address of the second container group from each node in the server cluster;
The obtaining the flow distribution state of the second container group from the load balancing information includes:
and acquiring a flow distribution state corresponding to the second IP address from the load balancing information.
CN202210528978.XA 2022-05-16 2022-05-16 Container group updating equipment and container group updating method Active CN114938375B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210528978.XA CN114938375B (en) 2022-05-16 2022-05-16 Container group updating equipment and container group updating method

Publications (2)

Publication Number Publication Date
CN114938375A CN114938375A (en) 2022-08-23
CN114938375B true CN114938375B (en) 2023-06-02

Family

ID=82865769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210528978.XA Active CN114938375B (en) 2022-05-16 2022-05-16 Container group updating equipment and container group updating method

Country Status (1)

Country Link
CN (1) CN114938375B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116233070A (en) * 2023-03-20 2023-06-06 北京奇艺世纪科技有限公司 Distribution system and distribution method for static IP addresses of clusters

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067828A (en) * 2018-06-22 2018-12-21 杭州才云科技有限公司 Based on the more cluster construction methods of Kubernetes and OpenStack container cloud platform, medium, equipment
CN109150608A (en) * 2018-08-22 2019-01-04 苏州思必驰信息科技有限公司 Interface service upgrade method and system for voice dialogue platform
CN110213309A (en) * 2018-03-13 2019-09-06 腾讯科技(深圳)有限公司 A kind of method, equipment and the storage medium of binding relationship management
CN111163189A (en) * 2020-01-07 2020-05-15 上海道客网络科技有限公司 IP monitoring and recycling system and method based on network name space management and control
CN111258609A (en) * 2020-01-19 2020-06-09 北京百度网讯科技有限公司 Upgrading method and device of Kubernetes cluster, electronic equipment and medium
CN113254165A (en) * 2021-07-09 2021-08-13 易纳购科技(北京)有限公司 Load flow distribution method and device for virtual machine and container, and computer equipment
CN113364727A (en) * 2020-03-05 2021-09-07 北京金山云网络技术有限公司 Container cluster system, container console and server
CN113656168A (en) * 2021-07-16 2021-11-16 新浪网技术(中国)有限公司 Method, system, medium and equipment for automatic disaster recovery and scheduling of traffic
CN113835836A (en) * 2021-09-23 2021-12-24 证通股份有限公司 System, method, computer device and medium for dynamically publishing container service
CN113923257A (en) * 2021-09-22 2022-01-11 北京金山云网络技术有限公司 Container group instance termination and creation method, device, electronic equipment and storage medium
CN113949707A (en) * 2021-09-30 2022-01-18 上海浦东发展银行股份有限公司 OpenResty and K8S-based container cloud service discovery and load balancing method
CN114385349A (en) * 2021-12-06 2022-04-22 阿里巴巴(中国)有限公司 Container group deployment method and device
CN114461303A (en) * 2022-02-10 2022-05-10 京东科技信息技术有限公司 Method and device for accessing cluster internal service

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210011816A1 (en) * 2019-07-10 2021-01-14 Commvault Systems, Inc. Preparing containerized applications for backup using a backup services container in a container-orchestration pod
US20210072966A1 (en) * 2019-09-05 2021-03-11 International Business Machines Corporation Method and system for service rolling-updating in a container orchestrator system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nguyen Nguyen; Taehong Kim. Toward Highly Scalable Load Balancing in Kubernetes Clusters. IEEE. 2020, full text. *
Research and Implementation of Key Technologies of a Container-Based NFV Platform; Liu Biao; China Master's Theses Full-text Database; full text *

Also Published As

Publication number Publication date
CN114938375A (en) 2022-08-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant