CN116132542A - Container network management method, container network plug-in and related equipment - Google Patents

Container network management method, container network plug-in and related equipment

Info

Publication number
CN116132542A
Authority
CN
China
Prior art keywords
container
network
address
virtual
occupied
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310078084.XA
Other languages
Chinese (zh)
Inventor
唐伟志
杨阳
吕斌
刘玄飞
黄学艺
唐家伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guosen Securities Co ltd
Original Assignee
Guosen Securities Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guosen Securities Co ltd filed Critical Guosen Securities Co ltd
Priority to CN202310078084.XA
Publication of CN116132542A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46: Interconnection of networks
    • H04L12/4604: LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462: LAN interconnection over a bridge based backbone
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46: Interconnection of networks
    • H04L12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0803: Configuration setting

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention relates to the technical field of networks, and discloses a container network management method, a container network plug-in and related equipment, wherein the method comprises the following steps: acquiring a container creation request, the container creation request comprising an IP address to be occupied; when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, creating a target container corresponding to the IP address to be occupied in a container cluster, the IP resource pool comprising object attributes of a plurality of custom type objects, each object attribute comprising occupancy state information of an optional IP address; monitoring network resource change information of the container cluster; and respectively configuring a network stack protocol of the host node and flow table rules of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container. In this manner, the embodiment of the invention improves the performance and availability of the container network.

Description

Container network management method, container network plug-in and related equipment
Technical Field
The embodiment of the invention relates to the technical field of networks, in particular to a container network management method, a container network plug-in and related equipment.
Background
The container is a lightweight, portable, self-contained software packaging technology that allows applications to run in the same manner almost anywhere. With the continued development of container technology, application containerization has become a major trend. Container network technology, as one of the container foundation technologies, provides functions such as IP address management, network connection, network isolation, and traffic visualization for containers.
The container network interface (Container Network Interface, CNI) is a standard, universal interface for container networks, and can be understood as a standardized protocol for the container network. CNI is used to connect container management systems, such as the application container engine (Docker) system, container orchestration engine (K8s) system, unified container management (Mesos) system, etc., with network plug-ins. There are a number of existing container network implementations, such as Flannel, Calico, Kube-OVN, Weave, and ipvlan.
The inventors found in implementing the prior art that: existing CNI schemes suffer from poor adaptation between container network management and service management requirements, the inability to fix container IPs, and low availability and performance of the container network, so that the user experience of application containerization is poor.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a container network management method, a container network plug-in, and related devices, which are used to solve the problems of poor availability and performance of a container network in the prior art.
According to an aspect of an embodiment of the present invention, there is provided a container network management method, the method being based on a container network plug-in comprising a virtual bridge; the container network plug-in is applied to a host node of the container cluster; at least one container is deployed on the host node; the method comprises the following steps:
acquiring a container creation request; the container creation request comprises an IP address to be occupied;
when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
monitoring network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
Respectively configuring a network stack protocol of the host node and a flow table rule of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridges and the IP addresses of the containers.
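The claimed steps above can be sketched as follows; this is a minimal illustrative sketch in Python (the patent does not prescribe an implementation language, and all names here are hypothetical):

```python
# Minimal sketch of the claimed method: check the IP resource pool, then
# create the target container and mark its IP as occupied. All names are
# hypothetical illustrations, not the patent's API.

class IPResourcePool:
    """Maps each optional IP address to its occupancy state information."""

    def __init__(self, addresses):
        self.entries = {ip: {"state": "allocated-unoccupied", "container": None}
                        for ip in addresses}

    def is_unoccupied(self, ip):
        entry = self.entries.get(ip)
        return entry is not None and entry["state"] == "allocated-unoccupied"

    def mark_occupied(self, ip, container_id):
        self.entries[ip] = {"state": "allocated-occupied", "container": container_id}

def handle_container_creation(pool, request):
    """Create the target container only if the requested IP is unoccupied."""
    ip = request["ip_to_occupy"]
    if not pool.is_unoccupied(ip):
        raise ValueError(f"IP {ip} is occupied or not in the pool")
    container_id = "container-" + ip.replace(".", "-")  # placeholder for real creation
    pool.mark_occupied(ip, container_id)
    return container_id

pool = IPResourcePool(["10.0.0.1", "10.0.0.2"])
cid = handle_container_creation(pool, {"ip_to_occupy": "10.0.0.1"})
```

In a real plug-in, the pool state would be held in the custom type objects described below rather than in process memory; the sketch only shows the check-then-create control flow.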
In an alternative manner, at least one of the virtual bridges is deployed on one of the host nodes; at least one virtual network card is mounted on one virtual network bridge; the at least one container corresponds to at least one VLAN; the network stack protocol comprises a routing table rule and an iptables rule; the routing table rule is used for representing a first routing relation among VLANs where the IP addresses of the containers are located; the iptables rule is used for representing the conversion relation between the IP address of each container and the network card address of the virtual network card; the flow table rule is used for representing a second routing relation between the network card address and the VLAN.
In an alternative manner, the first routing relationship includes sending a data packet from the IP address of the sender to the virtual bridge to forward the data packet to the IP address of the receiver through the virtual bridge when the IP addresses of the sender and the receiver are within the same VLAN; the second routing relationship includes sending the data packet from the IP address of the sender to the virtual network card when the IP addresses of the sender and the receiver are located in different VLANs, so as to send the data packet to the IP address of the receiver through the virtual network card.
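The two routing relationships above can be illustrated with a minimal, hypothetical sketch (the device names are assumptions, not the patent's implementation):

```python
# Sketch of the first and second routing relationships: same-VLAN traffic is
# handed to the virtual bridge, cross-VLAN traffic to the virtual network
# card. VLAN membership is modeled as a plain dict; names are illustrative.

def next_hop(vlan_of, sender_ip, receiver_ip):
    """Return the device that first receives the packet from the sender."""
    if vlan_of[sender_ip] == vlan_of[receiver_ip]:
        return "virtual-bridge"       # forwarded within the VLAN by the bridge
    return "virtual-network-card"     # routed across VLANs via the virtual NIC

vlan_of = {"10.0.1.2": 100, "10.0.1.3": 100, "10.0.2.2": 200}
```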
In an alternative manner, the virtual bridge includes a production network virtual bridge and a management network virtual bridge; the production network virtual network bridge is provided with a plurality of production network virtual network cards; the management network virtual network bridge is provided with a plurality of management network virtual network cards; the first VLAN where the container connected with the production network virtual bridge is located and the second VLAN where the container connected with the management network virtual bridge is located are isolated from each other.
In an alternative manner, the occupancy state information includes allocation and occupancy state and occupancy container information; the method further comprises the steps of:
acquiring an IP pre-allocation request; the IP pre-allocation request comprises IP information to be allocated and a user identifier;
creating the IP resource pool corresponding to the user identifier according to the IP information to be allocated; wherein, in the IP resource pool, the allocation and occupation state is marked as allocated unoccupied, and the occupation container information is marked as empty;
after the target container corresponding to the IP address to be occupied is created in the container cluster, the method comprises the following steps:
and marking the allocation and occupation state corresponding to the IP address to be occupied as allocated occupied, and marking the occupied container information corresponding to the IP address to be occupied as the container identifier of the target container.
In an alternative, the method further comprises:
updating the IP resource pool in real time according to the network resource change information; and when the target container is determined to be deleted, marking the allocation and occupation state of the optional IP address corresponding to the target container as unused in the IP resource pool, and marking the occupation container information of the optional IP address corresponding to the target container as empty.
In an alternative, the method further comprises:
and when the target container is determined to be rebuilt after deletion, marking the allocation and occupation state of the optional IP address corresponding to the target container as allocated occupied, and marking the occupied container information as a container identifier of the target container.
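The occupancy-state transitions described in the preceding alternatives (pre-allocation, occupation on creation, release on deletion, re-occupation on rebuild) can be sketched as follows; this is an illustrative Python sketch with hypothetical field names:

```python
# Sketch of the occupancy-state lifecycle: pre-allocation marks an address
# allocated-unoccupied, container creation marks it allocated-occupied,
# deletion releases it, and a rebuild re-occupies it.

def preallocate(pool, ip):
    pool[ip] = {"state": "allocated-unoccupied", "container": None}

def occupy(pool, ip, container_id):
    pool[ip] = {"state": "allocated-occupied", "container": container_id}

def release(pool, ip):
    pool[ip] = {"state": "unused", "container": None}

pool = {}
preallocate(pool, "10.0.0.5")            # IP pre-allocation request
occupy(pool, "10.0.0.5", "web-1")        # target container created
release(pool, "10.0.0.5")                # target container deleted
occupy(pool, "10.0.0.5", "web-1")        # target container rebuilt after deletion
```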
According to another aspect of an embodiment of the present invention, there is provided a container network plug-in comprising a virtual bridge; the container network plug-in is applied to a host node of the container cluster; at least one container is deployed on the host node; the container network plug-in includes:
the network configuration module is used for acquiring a container creation request; the container creation request comprises an IP address to be occupied;
The network configuration module is further configured to create a target container corresponding to the IP address to be occupied in the container cluster when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
the network configuration module is further used for monitoring the network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
the network configuration module is further configured to configure the network stack protocol of the host node and the flow table rules of the virtual bridge according to the network resource change information; wherein the network stack protocol is used for characterizing a routing relationship between the IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridge and the IP addresses of the at least one container;
the virtual network bridge is configured to forward traffic for the container according to the flow table rule.
According to another aspect of an embodiment of the present invention, there is provided a container network management apparatus including:
The device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations of an embodiment of a container network management method as described in any one of the preceding claims.
According to yet another aspect of an embodiment of the present invention, there is provided a computer-readable storage medium having stored therein at least one executable instruction for causing a container network management device to perform the operations of the container network management method embodiment of any one of the preceding claims.
The embodiment of the invention acquires a container creation request; the container creation request comprises an IP address to be occupied; when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects, and each object attribute includes the occupancy state information of one optional IP address. Thus, maintaining the object attributes of the custom type objects that represent IP resources makes it possible to manage stateful, fixed container IPs. After the container is created, the network resource change information of the container cluster is monitored in real time; the network resource change information includes the IP address change information of each container. Finally, the network stack protocol of the host node and the flow table rules of the virtual network bridge are respectively configured according to the network resource change information so that the virtual network bridge forwards traffic for the container; the network stack protocol characterizes the routing relationship between the IP addresses of the at least one container, and the flow table rules characterize the routing relationship between the virtual bridge and the IP addresses of the containers, thereby enabling cross-host, cross-container access.
The embodiment of the invention can realize the binding of the container and the fixed IP on one hand, is convenient for the operation and maintenance of the container, and on the other hand, can update the network stack protocol of the host node and the flow table rule of the virtual network bridge in real time according to the change of the network resource of the container cluster, thereby realizing the flow forwarding of the corresponding route for the container by the virtual network bridge according to the flow table rule and realizing the network configuration of the container cluster across hosts and containers.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and may be implemented according to the content of the specification, so that the technical means of the embodiments of the present invention can be more clearly understood, and the following specific embodiments of the present invention are given for clarity and understanding.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
fig. 1 is a schematic flow chart of a container network management method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating container IP address management in the container network management method according to the embodiment of the present invention;
fig. 3 is a schematic flow diagram of a container network management method according to an embodiment of the present invention when a container is accessed within the same VLAN;
fig. 4 is a schematic flow diagram of a container network management method according to an embodiment of the present invention when a container is accessed across VLANs;
fig. 5 is a schematic flow chart of a container network management method according to an embodiment of the present invention when a management component accesses a container;
fig. 6 is a schematic diagram of a container network architecture on which a container network management method according to an embodiment of the present invention is based;
FIG. 7 is a schematic diagram of a container network plug-in provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a container network plug-in provided in accordance with another embodiment of the present invention;
fig. 9 shows a schematic structural diagram of a container network device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
Description of related nouns:
a container: a lightweight, portable, self-contained software packaging technique encapsulates relevant information needed for an application so that the application can run in the same manner almost anywhere.
Kubernetes: the K8s is an open source application for managing containerization on a plurality of hosts in the cloud platform, and the purpose of the K8s is to enable the application deploying containerization to simply and efficiently provide a mechanism for application deployment, planning, updating and maintenance. The open source item K8s based on OpenStack is one of the most widely used container cluster management systems at present. The types of resource objects common in K8s can be categorized as workload-based resources (workload): pod, replicaSet, deployment, statefuSet, daemonSet, job, cronJob; service discovery and load balancing type resource: service, ingress; configuration and storage type resources: volume, CSI (container storage interface, which can extend a wide variety of third party storage volumes); special types of storage volumes: confiMap (type of resource used when configuring the center), secret (save sensitive data), downwardAPI (export information in external environment to container); cluster-type resources: namespace, node, role, clusterRole, roleBinding, clusterRoleBinding and metadata-type resources: HPA, podTemplate, limitRange, etc. The deviyment is the most common stateless application controller, and supports operations such as expanding and contracting capacity, rolling updating and the like of the application. Servcie provides a fixed access interface for Pod objects that change elastically and have a life cycle, for service discovery and service access. Pod is the minimum unit of run container and schedule. The same Pod may run multiple containers simultaneously, which share NET, UTS, IPC. In addition to USER, PID, MOUNT.
CNI (Container Network Interface): the API of the container network. Existing CNI implementations generally fall into three modes:
overlay mode: the container is independent of the IP segment of the hosts, which is communicated across the host network by creating tunnels between the hosts, all of the packets of the entire container segment being encapsulated into packets between hosts in the underlying physical network. The benefit of this approach is that it is not dependent on the underlying network.
Routing mode: hosts and containers belong to different network segments. The main difference from Overlay mode is that cross-host communication is achieved through routing rather than tunnel encapsulation between hosts. However, routing partly relies on the underlying network; for example, it requires the underlying network to be Layer-2 reachable.
Underlay mode: the container and the host are located in the same Layer-2 network and have equal standing. Connectivity between containers is opened up mainly by relying on the underlying network, so this mode depends strongly on the underlying capabilities.
Kubelet: daemons (english names: agents) on each machine of the Kubernetes cluster are used to communicate with the container network interface (ContainerNetwork Interface, CNI) plug-ins and create containers.
Ovs: open vSwitch, a switch that exists in a virtual network in software, performs a function similar to that of a physical switch in a traditional network deployment, and can perform local area network partitioning, tunnel construction, and analog routing. The OVS is based on the idea of SDN, the whole core architecture is divided into a control plane and a data plane, the data plane is responsible for data exchange work, and the control plane realizes exchange strategies and guides the data plane to work.
Flow table (flow): the core mechanism by which OVS forwards data, defining rules for forwarding data messages between ports. Each flow table rule can be divided into two parts: the match part determines which packets are to be processed, and the action part determines how a matched packet should be handled. Specifically, an OVS virtual switch contains multiple flow tables, and each flow table contains multiple flow table rules (i.e., network traffic filtering rules) that specify processing actions for packets, such as forwarding or interception. When a packet enters or leaves, the OVS virtual switch matches the corresponding flow table rules in priority order, so that the packet is processed accordingly and the purpose of packet filtering is achieved.
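The match/action structure and priority-ordered matching described above can be illustrated with a small sketch (a simplification of OVS behavior, not its actual implementation):

```python
# Sketch of priority-ordered flow-table matching: each rule has a match part
# (which packets to process) and an action part (how to process them); the
# highest-priority matching rule wins.

def apply_flow_table(rules, packet):
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "drop"  # no rule matched the packet

rules = [
    {"priority": 100, "match": {"dst_ip": "10.0.0.2"}, "action": "output:port2"},
    {"priority": 1, "match": {}, "action": "normal"},  # catch-all, lowest priority
]
```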
CRD (Custom Resource Definition): a built-in resource type of Kubernetes for defining custom resources, i.e., for describing user-defined resources. Everything in Kubernetes can be regarded as a resource, with the resource object types as described above. Support for secondary development with CRD custom resources was added in Kubernetes 1.7 to extend the Kubernetes API: new resource types can be added to the Kubernetes API through a CRD without modifying the Kubernetes source code or creating a custom API server, which greatly improves the extensibility of Kubernetes. When a new CustomResourceDefinition (CRD) is created, the Kubernetes API server creates a new RESTful resource path for each specified version, from which custom-type resources can be created.
Network stack: includes the network cards (network interfaces), loopback devices, routing tables, and iptables rules. These elements constitute the basic environment in which a process initiates and responds to network requests.
Bridge: a network device that functions as a virtual switch in the Linux system. It operates at the data link layer, and its main function is to forward data packets to the different ports of the bridge according to their MAC addresses.
ARP (Address Resolution Protocol): the protocol that finds the corresponding Layer-2 MAC address from a Layer-3 IP address.
MAC (Media Access Control) address: used to identify the location of a network device. A MAC address is a 48-bit number written as 12 hexadecimal digits. Counting from the left, bits 0 to 23 are a code that the manufacturer applies for from a registration authority (the IEEE) to identify the manufacturer, and bits 24 to 47 are assigned by the manufacturer itself as a unique number for each network card it produces.
Before describing the embodiments of the present invention, the prior art and its problems are further described:
currently, when most companies build container clouds, an open source scheme Calico BGP or Flannel, cilium is adopted for a container network scheme. However, these solutions have the following drawbacks: (1) Routing forwarding schemes such as Calico BGP require two-layer devices such as switches to learn Mac addresses of Pod by opening BGP (Border Gateway Protocol ) functions, which is risky to retrofit and inefficient to land. (2) Tunnel opening schemes such as Flannel VxLAN, cilium IPIP, typically suffer from 20% -30% performance loss, which is not suitable for security companies in situations where high performance is required. (3) Two layers of VLAN isolation are not supported, and the security of a container network cannot be ensured. (4) The method does not support multi-network card management of a physical machine, does not have high availability guarantee of a hardware level, and is not suitable for building bare metal container clouds. (5) The pooling and pre-allocation of the IP without supporting the service dimension leads to unpredictable container IP and low service online efficiency. (6) The container IP cannot be fixed, and the existing IP-based operation and maintenance and authentication system cannot be used.
There is therefore a need for a container network management scheme that can fix container IPs while providing good availability and high performance.
FIG. 1 illustrates a flow chart of a container network management method provided by an embodiment of the present invention, the method being based on a container network plug-in comprising a virtual bridge; the container network plug-in is applied to a host node of the container cluster; at least one container is deployed on the host node; as shown in fig. 1, the method includes:
step 10: acquiring a container creation request; the container creation request includes an IP address to be occupied.
The container creation request may be sent by a user through a management platform corresponding to the container cluster, and is used to create a container on a certain host node (i.e., a host machine). It is easy to understand that, to enable the container to communicate with the outside, an IP address needs to be allocated to it. In the prior art, the IP address of a container is generally allocated at random when the container is created; there is no way for a user to request in advance that a container occupy a specific IP address, i.e., the container IP cannot be fixed in the prior art. IP addresses are important in the operation and maintenance of service systems: when an application is containerized, actions such as bringing a service online or offline, migrating a service, or updating it may need to be performed according to IP addresses when operating and managing the containers corresponding to the service system. Providing the ability to occupy a specific IP address when a container is created therefore improves the fit between container management and service management and improves the user experience of application containerization.
Step 20: when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects; one of the object attributes includes occupancy state information for one of the selectable IP addresses.
The type of a custom type object is user-defined. In a container management platform such as K8s, objects of default types are used to represent resources such as containers; in the embodiment of the invention, the type represented by the custom type object is an IP resource, i.e., the object attribute of one custom type object stores the relevant information of one optional IP address.
Specifically, the object attribute corresponding to the custom type stores the field information related to the object. For example, when the custom type object is a CRD object in K8s, the object attribute includes the attribute field information corresponding to the CRD object. The object attribute may be a list storing an optional IP address and the occupancy state information of that optional IP address. A CRD may be namespaced or cluster-wide, as specified in the scope field of the CRD; deleting a namespace deletes all custom type objects in that namespace. The custom resource definition itself has no namespace and is available to all namespaces. Specifically, the group field in crd.spec describes the group name of the corresponding custom type resource, and its value is a character string; the names field describes the type, name, and the like of the custom type resource, and its value is an object; the scope field defines at which level the corresponding custom resource lives, and its value can only be Cluster or Namespaced; the versions field specifies the version information of the corresponding custom resource and the attribute fields of the corresponding resource type, and is a list object. The object attribute in the embodiment of the invention comprises the attribute fields of the IP type resource corresponding to the CRD object.
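For illustration, a CRD manifest using the group, names, scope, and versions fields described above might look like the following; the resource name IPPool and its schema fields are assumptions for this sketch, not taken from the patent:

```yaml
# Hypothetical CRD manifest illustrating crd.spec.group, names, scope, and
# versions; the "ippools" resource and its fields are illustrative only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ippools.network.example.com
spec:
  group: network.example.com        # group field: group name (a string)
  names:                            # names field: type and name (an object)
    kind: IPPool
    plural: ippools
    singular: ippool
  scope: Namespaced                 # scope field: Cluster or Namespaced
  versions:                         # versions field: version plus attribute fields
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                addresses:          # optional IP addresses in the pool
                  type: array
                  items:
                    type: string
```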
The preset IP resource pool stores a plurality of custom type objects corresponding respectively to a plurality of optional IP addresses. An optional IP address may be a preset, free, available IP address, or may be pre-allocated to a user according to the user's service requirements. Pre-allocation means that the user requests that a particular number of optional IP addresses, or particular addresses, be reserved before any container is created, so that later, when a container is created, the IP address for the new container can be selected from the addresses reserved for that user. When a traditional service system is migrated into containers, reserving container IP addresses makes it much easier for the traditional IP-based operation-and-maintenance and authentication systems to follow along, thereby improving the efficiency and user experience of application containerization.
Specifically, in order to specify and reserve the container IP address before the container is created (i.e., to make the container IP predictable), in the embodiment of the present invention the occupancy state information includes an allocation-and-occupancy state and occupying-container information. The allocation-and-occupancy state indicates whether the IP address has been allocated to a user in advance and whether it is occupied by a container; the occupying-container information indicates, when the address is occupied, the information of the occupying container, such as the container identifier and container type. Before step 20, the method includes:
Step 201: acquiring an IP pre-allocation request; the IP pre-allocation request comprises IP information to be allocated and user identification.
The IP pre-allocation request may be sent by a user of the container cluster, and the IP information to be allocated may include the specific addresses to be allocated, such as 10.0.0.1 and 10.0.0.2. Alternatively, when the user does not specify particular addresses, the IP information to be allocated may include the number of addresses required, for example 5 selectable IP addresses that the user wishes to pre-allocate. The user identifier uniquely identifies the user, so that the requested IP addresses are reserved for that specific user, yielding an IP resource pool corresponding to the user.
It should be noted that, to avoid redundant data storage and improve the maintenance efficiency of custom type objects, the user identifier and the corresponding IP resource pool information may be stored in a preset database (such as etcd); the object attribute of a custom type object then only needs to store the occupancy state information of the IP address corresponding to that object, without also storing the user for whom the address is reserved. The IP resource pool information may be represented as a list storing all selectable IP addresses pre-allocated to the user.
Step 202: creating the IP resource pool corresponding to the user identifier according to the IP information to be allocated; wherein in the IP resource pool, the allocation and occupancy status is marked as allocated unoccupied, and the occupancy container information is marked as empty.
That is, the IP addresses to be allocated are determined according to the IP information to be allocated, and a custom type object is created for each such address; the object attribute of the custom type object stores the address itself, its allocation-and-occupancy state, and its occupying-container information.
It is easy to understand that when a user has not yet created any container, a reserved IP address is allocated but not occupied by any container, so the allocation-and-occupancy state of the address to be allocated is marked as allocated-unoccupied. This characterizes the address as pre-allocated, so it will not be assigned to containers belonging to other users.
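Steps 201 and 202 can be sketched as follows; this is an assumed in-memory data model (the patent stores these attributes in custom resource objects and a database such as etcd), with hypothetical user and address values:

```python
# Minimal sketch of Step 202: creating a per-user IP resource pool in which
# every reserved address starts as "allocated unoccupied" with empty
# occupying-container information. Data model and names are assumptions.
def create_ip_pool(user_id, addresses):
    """Create one custom-type-object record per IP address to be allocated."""
    return {
        user_id: [
            {
                "address": addr,
                "state": "allocated-unoccupied",  # reserved, not yet used
                "container": None,                # no occupying container yet
            }
            for addr in addresses
        ]
    }

# the IP pre-allocation request of Step 201: a user identifier plus addresses
pool = create_ip_pool("user-1", ["10.0.0.1", "10.0.0.2"])
assert all(o["state"] == "allocated-unoccupied" for o in pool["user-1"])
```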
Correspondingly, after a reserved IP address is used to create a container, that is, after the user occupies the reserved address, events such as deletion and recovery may occur during the container's life cycle. In order to decouple the container management process from the IP address management process while keeping the container IP fixed, and to achieve stateful IP address management over the full life cycle, the object attribute is further updated and maintained in real time according to the container's occupancy of the address. That is, after step 20, the method further includes:
Step 203: and marking the allocation and occupation state corresponding to the IP address to be occupied as allocated occupied, and marking the occupied container information corresponding to the IP address to be occupied as the container identifier of the target container.
That is, when a user creates a container using an IP address in his IP resource pool, the state of that address becomes allocated-occupied. The object attribute of the custom type object corresponding to the occupied address is marked as occupied and records the container bound to the address, so that when the container is later rebuilt, the address can be looked up from the container information, fixing the container IP.
Meanwhile, the allocation-and-occupancy state and the occupying-container information are stored in the object attribute, which is updated and maintained in real time according to the occupancy of the IP address. This improves the accuracy of container IP management and decouples the life cycle of container management from that of IP address management. It differs from the prior art, in which a container and its IP address are stored as a key-value pair: that approach can only guarantee the uniqueness of a container's IP, and when the key-value pair is deleted, the container and IP address information are deleted together, so the previously used address cannot be recovered when the container is restored and rebuilt after deletion. In other words, the prior art cannot fix and bind the container's IP address over its whole life cycle.
Step 30: monitoring network resource change information of the container cluster; the network resource change information includes IP address change information of each container.
The network resource change information characterizes changes of various types of network resources in the container cluster; the resource types may include containers, IP addresses, services, and so on. A change may be caused by user operations in the container cluster, such as creating or deleting a container or creating a K8s Service, or by device-level changes, such as a host node going down and taking the containers on it offline. It is easy to understand that, after a container with a fixed, stateful IP address has been created, the container cluster must also be network-configured to enable the container to communicate. The method monitors the dynamic resource changes of the container cluster in real time, so that the IP resource pool can be updated and maintained according to the monitoring result, keeping IP reservation and allocation accurate. Thus, after step 30, the method also includes:
step 301: updating the IP resource pool in real time according to the network resource change information; and when the target container is determined to be deleted, marking the allocation and occupation state of the optional IP address corresponding to the target container as unused in the IP resource pool, and marking the occupation container information of the optional IP address corresponding to the target container as empty.
That is, the IP resource pool is updated in real time according to the container and IP address changes in the network resource change information: when deletion of a container is detected, the allocation-and-occupancy state of the deleted container's IP address is marked as unused, and the corresponding occupying-container information is marked as empty.
Correspondingly, a container may be restored and rebuilt after deletion. Existing methods cannot ensure that a restored container still uses the IP address it occupied before deletion, even though that address may be associated with the service system. Therefore, to fix the container's IP address over its full life cycle, after step 301 the method may further include:
step 302: and when the target container is determined to be rebuilt after deletion, marking the allocation and occupation state of the optional IP address corresponding to the target container as allocated occupied, and marking the occupied container information as a container identifier of the target container.
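The state transitions of steps 203, 301, and 302 can be sketched together as a small state machine. This is an assumed model; in particular, the `last_container` field is a hypothetical device for reconciling step 301 (occupying-container information marked empty) with the rebuild lookup described later for fig. 2, in which the original container is still recorded in a field of the custom resource object:

```python
# Assumed in-memory model of the occupancy-state transitions; the patent keeps
# these attributes in Kubernetes custom resource objects.
class IPPool:
    def __init__(self, addresses):
        # every pre-allocated address starts allocated but unoccupied
        self.objects = {a: {"state": "allocated-unoccupied",
                            "container": None, "last_container": None}
                        for a in addresses}

    def occupy(self, addr, cid):
        """Step 203: a container is created with a reserved address."""
        obj = self.objects[addr]
        obj["state"], obj["container"] = "allocated-occupied", cid

    def release(self, addr):
        """Step 301: the occupying container was deleted; remember it so the
        address can be restored on rebuild (hypothetical last_container)."""
        obj = self.objects[addr]
        obj["last_container"], obj["container"] = obj["container"], None
        obj["state"] = "allocated-unoccupied"

    def rebuild(self, cid):
        """Step 302: the container is rebuilt; re-occupy its former address."""
        for addr, obj in self.objects.items():
            if obj["last_container"] == cid:
                self.occupy(addr, cid)
                return addr
        return None

pool = IPPool(["10.0.0.1"])
pool.occupy("10.0.0.1", "ctr-a")
pool.release("10.0.0.1")
assert pool.rebuild("ctr-a") == "10.0.0.1"   # same IP after rebuild
```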
In still another embodiment of the present invention, the flow of IP address management corresponding to the creation, deletion, and rebuilding of a container service by a user of the container cloud platform may refer to fig. 2.
As shown in fig. 2, before creating a container service the user first applies on the container cloud platform for the required IP address range; the platform creates an IP resource pool for it and reserves the required IP resources for the service, marking them as allocated-and-unused so that the address range cannot be occupied by other services.
When the user formally creates the container service, the container cloud platform passes the created IP resource pool to the container network plug-in of the embodiment of the invention; the plug-in creates containers using those IPs and marks each used IP as in-use. If an elastic scale-out scenario arises later, the remaining allocated-and-unused IP addresses in the user's IP resource pool are looked up and used.
When the user deletes a service container, the container network plug-in releases the IP used by the container back to the original IP pool, marks it as allocated-and-unused, and records the original container in a field of the Kubernetes custom resource object.
When the container service is later rebuilt, the corresponding IP resource pool is consulted to check whether the field in the custom resource object for the IP records the container it belonged to; if so, the IP is restored according to that information.
The container network plug-in of the embodiment of the invention implements per-service IP pools based on Kubernetes custom resource objects (CRDs), abstracting an IP range of custom length into a custom object called an IP address pool. One or more IP address ranges can be allocated to a service in advance, so the IP is known before the container is created, which makes it convenient for operation and maintenance personnel to create or modify firewall policies in time. Meanwhile, by reasonably abstracting IP resources, an IP can be retained after the service is destroyed and remain fixed after the service is rebuilt. Because the IP address can be kept unchanged, the difficulty of migrating a service from virtual machines to containers is greatly reduced, and the company's established IP-based monitoring, logging, and authentication systems can continue to be used with containers. In the embodiment of the invention, maintaining the object attributes of the custom type objects corresponding to IP addresses fixes the binding between IP address and container while decoupling the IP address management process from the container management process, further improving the usability of the container network.
Further, the container network may be dynamically configured according to the monitoring result, and after step 30, the method further includes:
step 40: respectively configuring a network stack protocol of the host node and a flow table rule of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridges and the IP addresses of the containers.
The network stack protocol is stored on the host node and defines the node's forwarding rules for traffic delivered to it. Traffic on the host node may be generated by a container on the node and need to be sent out, or may be sent to the node from an external source, such as another container or a management component. When forwarding traffic, the host node must determine the routing information for the traffic's destination, so as to determine the next hop; it may also need to process the packet to be forwarded, for example by address translation or by dropping it.
Meanwhile, in the embodiment of the invention, to improve container network performance and shorten the external access path to a container in a preset scenario, a virtual bridge is introduced to forward traffic for the containers; by configuring the flow table rules of the virtual bridge, VLAN isolation between containers, and cross-VLAN access under preset conditions, are achieved. Further, to improve the security of the container network and its fit with service system management, several virtual bridges may be set up on one host node, each corresponding to a container class with a given function; the functions may include production, management, storage, and so on. One virtual bridge corresponds to at least one VLAN, and one VLAN includes at least one container.
Traffic within a VLAN is forwarded entirely by the virtual bridge between the host and the containers of that VLAN, without passing through devices such as a layer-2 switch, which improves container network performance. For cross-VLAN access, the flow table rules on the virtual bridge must be configured so that the bridge forwards the container's traffic to the layer-2 switching device, which then sends it to the container in the corresponding VLAN. To enable data transfer between the virtual bridge and the layer-2 (data link layer) switch, the bridge is further provided with a virtual network card, which can be regarded as a port of the bridge connecting it to the layer-2 switch device. Further, to improve the reliability of the container network's physical layer and achieve high availability, several physical network cards may be configured on one virtual bridge, realizing bonding between the network cards and the bridge.
Having achieved prediction, reservation, fixing, and decoupled management of container IPs, the embodiment of the invention further configures the container network to complete its construction. Specifically, the embodiment implements a container network capable of cross-VLAN communication in underlay mode. At least one virtual bridge is deployed on one host node; at least one virtual network card is mounted on each virtual bridge; the at least one container corresponds to at least one VLAN. The network stack protocol comprises routing table rules and iptables rules: the routing table rules characterize a first routing relationship between the VLANs in which the containers' IP addresses are located, and the iptables rules characterize the translation relationship between each container's IP address and the network card address of the virtual network card. The flow table rules characterize a second routing relationship between the network card address and the VLANs.
It is first considered that, in application containerization, different containers may correspond to different service systems; for example, container 1 may serve a single sign-on system while container 2 serves database management. There is therefore a need for mutual VLAN isolation between containers in the container network, and at the same time a need for cross-VLAN inter-access between containers, arising from container cluster management, service system interaction, and so on. Furthermore, in addition to inter-access between containers, devices outside the container cluster may need to access containers, such as management components of the container management platform like kubelet.
Therefore, when configuring the flow table rules and the network stack protocol, the first routing relationship includes the path relationships between the VLANs corresponding to the containers: a same-VLAN path is relayed inside the host through the virtual bridge, whereas a cross-VLAN path must reach the virtual network card via the virtual bridge, connect to the layer-2 device through the virtual network card, and then be sent by the layer-2 device to the gateway of the corresponding VLAN. Since the network card address of the virtual network card differs from the container's IP address, an address translation relationship between the two (i.e., the iptables rules) must be configured on the host node so that container traffic can leave smoothly. Correspondingly, so that the virtual network card on the virtual bridge can forward and process traffic for the containers on the host, the second routing relationship includes the path relationships between the virtual network card's address and each VLAN. It is easy to understand that the path between the virtual network card and a VLAN passes through the layer-2 switching device, and possibly a layer-3 route; this information is recorded in the second routing relationship.
Thus, specifically, the first routing relationship includes: when the IP addresses of the sender and the receiver are within the same VLAN, sending the data packet from the sender's IP address to the virtual bridge, which forwards it to the receiver's IP address. The second routing relationship includes: when the sender's and receiver's IP addresses are in different VLANs, sending the data packet from the sender's IP address to the virtual network card, which sends it on toward the receiver's IP address.
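The decision between the two relationships can be sketched minimally as follows (assumed names; no real networking is performed):

```python
# Toy sketch of the routing split described above: same-VLAN traffic stays on
# the virtual bridge; cross-VLAN traffic is steered to the virtual network
# card and out toward the VLAN gateway. Return values are illustrative labels.
def next_hop(sender_vlan, receiver_vlan):
    if sender_vlan == receiver_vlan:
        return "virtual-bridge"   # first routing relationship: L2 forwarding
    return "virtual-nic"          # second: out via the NIC to the L2 switch

assert next_hop(10, 10) == "virtual-bridge"
assert next_hop(10, 20) == "virtual-nic"
```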
The following describes the procedure of forwarding traffic by the container network plug-in according to the configured network stack protocol and the rule of the flow table, respectively, with reference to fig. 3-5.
Fig. 3 shows a traffic schematic for two containers in the same VLAN on the same node. As shown in fig. 3, for intra-VLAN communication, the packet delivery process includes:
(1) When the sender container accesses another container in the same VLAN, the routing table stored on its host is matched; the first routing relationship in the routing table specifies that a packet for same-VLAN access is sent out through the network card inside the container and reaches the receiver container directly at layer 2.
(2) The sender container sends an ARP request, which is forwarded by the virtual bridge to the receiver container; the receiver container responds to the ARP request (during this process, the virtual bridge completes MAC address learning).
(3) After receiving the ARP response, the sender container sends the packet, which is forwarded to the receiver container through the OVS virtual bridge.
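The MAC address learning mentioned in steps (2) and (3) can be modeled with a toy learning bridge; this is an assumed simplification of what an OVS bridge actually does, with one flat MAC-to-port table:

```python
# Toy model of MAC address learning on the virtual bridge: the bridge learns
# the sender's port from every frame it sees, floods unknown destinations,
# and unicasts once the destination MAC has been learned.
class LearningBridge:
    def __init__(self):
        self.mac_table = {}   # MAC address -> bridge port

    def receive(self, src_mac, in_port, dst_mac):
        self.mac_table[src_mac] = in_port             # learn sender's port
        return self.mac_table.get(dst_mac, "flood")   # forward or flood

br = LearningBridge()
# The ARP broadcast from container A (port 1) is flooded; B's reply (port 2)
# teaches the bridge both locations, so the data packet is then unicast.
assert br.receive("mac-A", 1, "ff:ff:ff:ff:ff:ff") == "flood"
assert br.receive("mac-B", 2, "mac-A") == 1
assert br.receive("mac-A", 1, "mac-B") == 2
```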
Fig. 4 shows a communication traffic schematic for two containers in different VLANs. As shown in fig. 4, the cross-VLAN container network is not enabled by default; when needed, cross-VLAN communication can be supported through a layer-3 routing policy. The packet delivery process includes:
(1) The sender container's packet does not match the first routing relationship in the routing table; it matches the default rule and must be forwarded through the default gateway (the VLAN-1 gateway).
(2) The sender container sends an ARP request for the default gateway, which is located on the layer-3 router and responds to the request. It is readily understood that both the virtual bridge and the layer-2 switch complete MAC address learning during this process.
(3) After the packet arrives at the VLAN-1 gateway, it is forwarded to the VLAN-2 gateway through the forwarding policy configured on the layer-3 router (by default the layer-3 router has no forwarding policy configured), and then reaches the receiver container through the first routing relationship.
Further, considering that the container management component often needs to access containers, for example to obtain their state, the container network plug-in of the embodiment of the present invention also supports the management component accessing a container in any VLAN. Fig. 5 shows a flow schematic of the management component accessing a container in the embodiment of the invention. As shown in fig. 5, the packet delivery process is as follows:
(1) A packet sent by the management component kubelet first passes through the host node's network stack; matching the routing table shows that access to VLAN-2 must go through the virtual network card.
(2) According to the iptables POSTROUTING rule, one SNAT is required: the packet's source IP is changed to the IP address of the virtual network card device.
(3) The packet arrives at the virtual network card, which is a port of the virtual bridge; from that port the packet effectively enters the virtual bridge.
(4) Based on MAC address learning, the OVS sends the packet out of the correct port to the destination container.
The return packet is processed as follows:
(1) The return packet first enters the OVS switch through the container's default network card and, according to the OVS flow table rule configuration, finds the virtual network card port to be sent through, completing the SNAT leg.
(2) After the packet enters the host's network stack, its destination address is changed to the node IP by DNAT according to the conntrack table; the matching kubelet process is found through the port number, and kubelet finally receives the packet.
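The SNAT on the way out and the conntrack-based DNAT on the way back can be sketched as follows; all addresses and the conntrack keying are illustrative assumptions, not the kernel's actual data structures:

```python
# Toy model of the address rewrites on the kubelet -> container path: iptables
# POSTROUTING SNAT rewrites the source on the way out, and the conntrack entry
# restores the node IP as the destination on the return packet.
NODE_IP, VNIC_IP, CTR_IP = "192.168.1.10", "10.10.2.1", "10.10.2.5"

conntrack = {}   # (translated src, dst, port) -> original src

def snat_out(pkt):
    """POSTROUTING: rewrite source to the virtual NIC address, record state."""
    conntrack[(VNIC_IP, pkt["dst"], pkt["port"])] = pkt["src"]
    return {**pkt, "src": VNIC_IP}

def unnat_back(pkt):
    """Return path: conntrack restores the node IP as the destination
    (the DNAT described in step (2) of the return process)."""
    orig_src = conntrack[(pkt["dst"], pkt["src"], pkt["port"])]
    return {**pkt, "dst": orig_src}

out = snat_out({"src": NODE_IP, "dst": CTR_IP, "port": 40000})
back = unnat_back({"src": CTR_IP, "dst": out["src"], "port": 40000})
assert out["src"] == VNIC_IP    # source rewritten on the way out
assert back["dst"] == NODE_IP   # destination restored on the way back
```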
In summary, in the embodiment of the invention, cross-host communication traffic is forwarded through the virtual bridge's flow table and injected directly into the physical network without tunnel encapsulation; the layer-2 switch is responsible for learning and forwarding the traffic. Under performance stress testing, the performance loss is only about 5% compared with a physical machine. In the embodiment of the invention, an agent is deployed on each node of the container cloud cluster and is responsible for issuing the corresponding flow table rules when a container is created. The container network plug-in implements VLAN-based isolation using the OVS virtual bridge: containers in the same VLAN can access each other directly at layer 2, while different VLANs are isolated from each other at layer 2; when needed, inter-access can be achieved by configuring layer-3 routing policies, which helps secure the container network when services are deployed together.
The data plane architecture of the container network plug-in of an embodiment of the present invention may be as shown in fig. 6. The virtual bridges comprise a production network virtual bridge and a management network virtual bridge; the production network virtual bridge carries several production network virtual network cards, and the management network virtual bridge carries several management network virtual network cards. The first VLANs, containing the containers connected to the production network virtual bridge, and the second VLANs, containing the containers connected to the management network virtual bridge, are isolated from each other.
As shown in fig. 6, each physical node includes two virtual bridges: a production virtual bridge and a management virtual bridge. To achieve high availability of the container network, each node is provided with 2 production network cards and 2 management network cards; the network cards are bonded in pairs into bond0 and bond1, mounted on the respective OVS virtual bridges, and connected to the layer-2 switch. Containers of different VLANs can be attached to either the production network virtual bridge or the management network virtual bridge as needed. It should be noted that the number of virtual bridges in the embodiment of the invention may be increased as required; for example, a virtual bridge corresponding to a data storage network may also be established.
The container network plug-in of the embodiment of the invention creates two virtual bridges on each physical machine node: a production bridge and a management bridge. The management components of the container cluster attach to the management bridge, providing public services for the service containers of different VLANs to access; the service containers attach to the production bridge in their different VLANs. The container network plug-in is designed for bare-metal container cloud scenarios and supports managing multiple network cards on a physical node. By default, two production network cards and two management network cards are used and bonded in pairs, and the virtual network cards produced by bonding are mounted on the production OVS bridge and the management OVS bridge respectively, supporting container network high availability from the hardware layer up.
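Under stated assumptions, the bonding-plus-bridge layout of fig. 6 could be provisioned with commands like those emitted below; the interface and bridge names, and the choice of 802.3ad bonding mode, are illustrative and not taken from the patent:

```python
# Sketch that emits (but does not execute) assumed provisioning commands for
# the fig. 6 data plane: two NICs bonded per network, each bond attached to
# its own OVS bridge. Names (br-prod, bond0, eth0, ...) are hypothetical.
def bridge_setup(bridge, bond, nics):
    cmds = [f"ip link add {bond} type bond mode 802.3ad"]
    cmds += [f"ip link set {nic} master {bond}" for nic in nics]
    cmds += [f"ovs-vsctl add-br {bridge}",
             f"ovs-vsctl add-port {bridge} {bond}"]
    return cmds

plan = (bridge_setup("br-prod", "bond0", ["eth0", "eth1"])   # production net
        + bridge_setup("br-mgmt", "bond1", ["eth2", "eth3"]))  # management net
assert "ovs-vsctl add-port br-prod bond0" in plan
assert "ip link set eth2 master bond1" in plan
```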
In summary, the beneficial effects of the embodiment of the invention at least include the following. Excellent container network performance: container traffic is managed and distributed using OVS flow table rules without tunnel encapsulation, and under performance stress testing the loss is only about 5% compared with a physical machine. VLAN layer-2 isolation based on the OVS bridge: by default, containers within a VLAN can communicate while VLANs are isolated from each other, with configurable cross-VLAN access capability, improving the network security of containers. Support for multiple network cards per physical node, with network card bonding for high availability of the container's underlying network. Service-dimension IP pool management: an IP address pool can be pre-allocated for a given service component, with elastic expansion of the pool, and operation and maintenance personnel can create or modify firewall policies against the IP pool in advance, improving operational efficiency. Fixed container IP addresses, which ease migrating stateful services into containers while keeping the company's established IP-based monitoring, logging, and authentication systems usable.
Fig. 7 shows a schematic structural diagram of a container network plug-in provided by an embodiment of the present invention. The container network plug-in is applied to a host node of the container cluster; the host node has at least one container deployed thereon, as shown in fig. 7, the container network plug-in 500 includes: network configuration module 501 and virtual bridge 502.
A network configuration module 501 for acquiring a container creation request; the container creation request comprises an IP address to be occupied;
the network configuration module 501 is further configured to create a target container corresponding to the IP address to be occupied in the container cluster when determining that the IP address to be occupied is not occupied according to a preset IP resource pool; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
the network configuration module 501 is further configured to monitor network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
the network configuration module 501 is further configured to configure a network stack protocol of the host node and flow table rules of the virtual bridge according to the network resource change information; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used for characterizing routing relationships between the virtual bridge and the IP addresses of the at least one container;
the virtual bridge 502 is configured to forward traffic for the container according to the flow table rule.
Fig. 8 shows a schematic diagram of the network control plane of a container network plug-in in yet another embodiment of the invention. As depicted in fig. 8, the network configuration module 501 may in turn include a network proxy and a container network plug-in, wherein:
network proxy: and the resident process on the node is used for monitoring the change of the network resource object in the container cluster and modifying the routing table, the iptables rule and the virtual bridge flow table rule on the physical node.
Container network plug-in: the binary files stored under the fixed directory on the node are called by kubelet when creating and deleting containers, and are used for configuring the network of a certain container.
Virtual bridges: for carrying two layers of forwarding traffic between containers.
The functionality of the container network plug-in in an embodiment of the invention is illustrated in connection with fig. 8:
step one: the user invokes the Kubernetes Api-server interface (i.e., the interface server in fig. 8) to create a Kubernetes Service.
Step two: the Agent (i.e. the network Agent in fig. 8) listens for this action, and modifies the iptables rule, routing table and virtual bridge OVS flow table rule of the node in real time.
Step three: the user creates a container and the kubelet component invokes a container network plug-in to build a container network stack.
Step four: the container accesses Kubernetes Service smoothly according to the network stack structure built by the container network plug-in, and the guidelines of the iptables rule, the routing rule and the OVS flow table rule.
The implementation process of the container network plug-in the embodiment of the present invention is substantially the same as that in the foregoing method embodiment, and will not be repeated.
The container network plug-in provided by the embodiment of the invention acquires a container creation request, where the request includes an IP address to be occupied. When the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster. The IP resource pool comprises object attributes of a plurality of custom type objects, and one object attribute includes occupancy state information for one selectable IP address; by maintaining the object attributes of the custom type objects that represent IP resources, stateful management of fixed container IPs is achieved. After the container is created, the network resource change information of the container cluster, including the IP address change information of each container, is monitored in real time. Finally, the network stack protocol of the host node and the flow table rules of the virtual bridge are respectively configured according to the network resource change information, so that the virtual bridge forwards traffic for the container; the network stack protocol characterizes the routing relationship between the IP addresses of the at least one container, and the flow table rules characterize the routing relationship between the virtual bridge and the IP addresses of the containers, thereby enabling cross-host, cross-container access.
On the one hand, the embodiment of the invention can bind a container to a fixed IP, which facilitates container operation and maintenance; on the other hand, it can update the network stack protocol of the host node and the flow table rules of the virtual bridge in real time as the network resources of the container cluster change, so that the virtual bridge forwards traffic along the corresponding routes according to the flow table rules, thereby achieving cross-host, cross-container network configuration for the container cluster.
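The fixed-IP management summarized above can be illustrated with a minimal in-memory sketch of the IP resource pool. Class and field names are illustrative; in practice each entry would live in the object attributes of a Kubernetes custom resource:

```python
class IPResourcePool:
    """Minimal in-memory sketch of the custom-object IP resource pool
    (illustrative names, not the patented implementation)."""

    def __init__(self, addresses):
        # Pre-allocation: mark every address as allocated but
        # unoccupied, with no occupying container recorded.
        self.entries = {
            ip: {"state": "allocated-unoccupied", "container": None}
            for ip in addresses
        }

    def occupy(self, ip, container_id):
        """Bind an address to a target container at creation time."""
        entry = self.entries.get(ip)
        if entry is None or entry["state"] == "allocated-occupied":
            return False  # unknown or already occupied: refuse creation
        entry["state"] = "allocated-occupied"
        entry["container"] = container_id
        return True

    def release(self, ip):
        """On container deletion, mark the address unused and clear
        the occupying container information."""
        entry = self.entries[ip]
        entry["state"] = "unused"
        entry["container"] = None
```

Because the state and the occupying container identifier persist outside any single container's lifecycle, a rebuilt container can reclaim its previous address simply by occupying it again.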
Fig. 9 is a schematic structural diagram of a container network management device according to an embodiment of the present invention; the specific embodiment of the present invention does not limit the specific implementation of the container network management device.
As shown in fig. 9, the container network management device may include: a processor 602, a communication interface 604, a memory 606, and a communication bus 608.
Wherein: the processor 602, the communication interface 604, and the memory 606 communicate with one another via the communication bus 608. The communication interface 604 is used to communicate with network elements of other devices, such as clients or other servers. The processor 602 is configured to execute the program 610, and may specifically perform the relevant steps in the container network management method embodiments described above.
In particular, program 610 may include program code comprising computer-executable instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the container network management device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is used to store the program 610. The memory 606 may comprise high-speed RAM memory and may further comprise non-volatile memory, such as at least one disk memory.
The program 610 may be specifically invoked by the processor 602 to cause the container network management device to:
acquiring a container creation request; the container creation request comprises an IP address to be occupied;
when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
monitoring network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
respectively configuring a network stack protocol of the host node and a flow table rule of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridges and the IP addresses of the containers.
The implementation process of the container network device provided in the embodiment of the present invention is substantially the same as that of the foregoing method embodiment, and will not be repeated.
The container network device provided by the embodiment of the invention acquires a container creation request, where the request includes an IP address to be occupied. When the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster. The IP resource pool comprises object attributes of a plurality of custom type objects, and one object attribute includes occupancy state information for one selectable IP address; by maintaining the object attributes of the custom type objects that represent IP resources, stateful management of fixed container IPs is achieved. After the container is created, the network resource change information of the container cluster, including the IP address change information of each container, is monitored in real time. Finally, the network stack protocol of the host node and the flow table rules of the virtual bridge are respectively configured according to the network resource change information, so that the virtual bridge forwards traffic for the container; the network stack protocol characterizes the routing relationship between the IP addresses of the at least one container, and the flow table rules characterize the routing relationship between the virtual bridge and the IP addresses of the containers, thereby enabling cross-host, cross-container access.
On the one hand, the embodiment of the invention can bind a container to a fixed IP, which facilitates container operation and maintenance; on the other hand, it can update the network stack protocol of the host node and the flow table rules of the virtual bridge in real time as the network resources of the container cluster change, so that the virtual bridge forwards traffic along the corresponding routes according to the flow table rules, thereby achieving cross-host, cross-container network configuration for the container cluster.
An embodiment of the present invention provides a computer readable storage medium storing at least one executable instruction that, when executed on a container network management device, causes the container network management device to perform the container network management method in any of the method embodiments described above.
The executable instructions may be specifically operable to cause the container network management device to:
acquiring a container creation request; the container creation request comprises an IP address to be occupied;
when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
monitoring network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
respectively configuring a network stack protocol of the host node and a flow table rule of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridges and the IP addresses of the containers.
The execution process of the executable instructions stored in the computer storage medium provided by the embodiment of the present invention is substantially the same as that of the foregoing method embodiment, and will not be repeated.
The executable instructions stored in the computer storage medium provided by the embodiment of the invention cause the device to acquire a container creation request, where the request includes an IP address to be occupied. When the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster. The IP resource pool comprises object attributes of a plurality of custom type objects, and one object attribute includes occupancy state information for one selectable IP address; by maintaining the object attributes of the custom type objects that represent IP resources, stateful management of fixed container IPs is achieved. After the container is created, the network resource change information of the container cluster, including the IP address change information of each container, is monitored in real time. Finally, the network stack protocol of the host node and the flow table rules of the virtual bridge are respectively configured according to the network resource change information, so that the virtual bridge forwards traffic for the container; the network stack protocol characterizes the routing relationship between the IP addresses of the at least one container, and the flow table rules characterize the routing relationship between the virtual bridge and the IP addresses of the containers, thereby enabling cross-host, cross-container access.
On the one hand, the embodiment of the invention can bind a container to a fixed IP, which facilitates container operation and maintenance; on the other hand, it can update the network stack protocol of the host node and the flow table rules of the virtual bridge in real time as the network resources of the container cluster change, so that the virtual bridge forwards traffic along the corresponding routes according to the flow table rules, thereby achieving cross-host, cross-container network configuration for the container cluster.
The embodiment of the invention provides a container network management device for executing the container network management method.
Embodiments of the present invention provide a computer program that can be invoked by a processor to cause a container network management device to perform the container network management method of any of the method embodiments described above.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when run on a computer, cause the computer to perform the container network management method of any of the method embodiments described above.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (10)

1. A container network management method, characterized in that the method is based on a container network plug-in comprising a virtual bridge; the container network plug-in is applied to a host node of the container cluster; at least one container is deployed on the host node; the method comprises the following steps:
Acquiring a container creation request; the container creation request comprises an IP address to be occupied;
when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, a target container corresponding to the IP address to be occupied is created in the container cluster; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
monitoring network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
respectively configuring a network stack protocol of the host node and a flow table rule of the virtual network bridge according to the network resource change information so that the virtual network bridge forwards traffic for the container; wherein the network stack protocol is used for characterizing a routing relationship between IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridges and the IP addresses of the containers.
2. The method of claim 1, wherein one of said host nodes has at least one of said virtual bridges deployed thereon; at least one virtual network card is mounted on one virtual network bridge; the at least one container corresponds to at least one VLAN; the network stack protocol comprises a routing table rule and an iptables rule; the routing table rule is used for representing a first routing relation among VLANs where the IP addresses of the containers are located; the iptables rule is used for representing the conversion relation between the IP address of each container and the network card address of the virtual network card; the flow table rule is used for representing a second routing relation between the network card address and the VLAN.
3. The method of claim 2, wherein the first routing relationship comprises sending a data packet from the IP address of the sender to the virtual bridge to forward the data packet to the IP address of the receiver through the virtual bridge when the IP addresses of the sender and receiver are within the same VLAN; the second routing relationship includes sending the data packet from the IP address of the sender to the virtual network card when the IP addresses of the sender and the receiver are located in different VLANs, so as to send the data packet to the IP address of the receiver through the virtual network card.
4. The method of claim 2, wherein the virtual bridge comprises a production network virtual bridge and a management network virtual bridge; the production network virtual network bridge is provided with a plurality of production network virtual network cards; the management network virtual network bridge is provided with a plurality of management network virtual network cards; the first VLAN where the container connected with the production network virtual bridge is located and the second VLAN where the container connected with the management network virtual bridge is located are isolated from each other.
5. The method of claim 1, wherein the occupancy state information comprises allocation and occupancy state and occupancy container information; when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool, before a target container corresponding to the IP address to be occupied is created in the container cluster, the method comprises the following steps:
Acquiring an IP pre-allocation request; the IP pre-allocation request comprises IP information to be allocated and a user identifier;
creating the IP resource pool corresponding to the user identifier according to the IP information to be allocated; wherein, in the IP resource pool, the allocation and occupation state is marked as allocated unoccupied, and the occupation container information is marked as empty;
after the target container corresponding to the IP address to be occupied is created in the container cluster, the method comprises the following steps:
and marking the allocation and occupation state corresponding to the IP address to be occupied as allocated occupied, and marking the occupied container information corresponding to the IP address to be occupied as the container identifier of the target container.
6. The method of claim 5, wherein after said listening for network resource change information for said container cluster, comprising:
updating the IP resource pool in real time according to the network resource change information; and when the target container is determined to be deleted, marking the allocation and occupation state of the optional IP address corresponding to the target container as unused in the IP resource pool, and marking the occupation container information of the optional IP address corresponding to the target container as empty.
7. The method of claim 5, wherein after creating the target container corresponding to the IP address to be occupied in the container cluster, further comprises:
and when the target container is determined to be rebuilt after deletion, marking the allocation and occupation state of the optional IP address corresponding to the target container as allocated occupied, and marking the occupied container information as a container identifier of the target container.
8. A container network plug-in, wherein the container network plug-in comprises a virtual bridge; the container network plug-in is applied to a host node of the container cluster; at least one container is deployed on the host node; the container network plug-in includes:
the network configuration module is used for acquiring a container creation request; the container creation request comprises an IP address to be occupied;
the network configuration module is further configured to create a target container corresponding to the IP address to be occupied in the container cluster when the IP address to be occupied is determined to be unoccupied according to a preset IP resource pool; the IP resource pool comprises object attributes of a plurality of custom type objects; the object attribute comprises occupancy state information of an optional IP address;
The network configuration module is further used for monitoring the network resource change information of the container cluster; the network resource change information comprises IP address change information of each container;
the network configuration module is further configured to configure a network stack protocol of the host node and flow table rules of the virtual bridge according to the network resource change information; wherein the network stack protocol is used for characterizing a routing relationship between the IP addresses of the at least one container; the flow table rules are used to characterize routing relationships between the virtual bridge and the IP addresses of the containers;
the virtual network bridge is configured to forward traffic for the container according to the flow table rule.
9. A container network management device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the container network management method of any one of claims 1-7.
10. A computer readable storage medium, wherein at least one executable instruction is stored in the storage medium, which when run on a container network management device causes the container network management device to perform the operations of the container network management method according to any one of claims 1-7.
CN202310078084.XA 2023-01-13 2023-01-13 Container network management method, container network plug-in and related equipment Pending CN116132542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310078084.XA CN116132542A (en) 2023-01-13 2023-01-13 Container network management method, container network plug-in and related equipment


Publications (1)

Publication Number Publication Date
CN116132542A true CN116132542A (en) 2023-05-16

Family

ID=86302519


Country Status (1)

Country Link
CN (1) CN116132542A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination