CN115334018A - Openstack-based container control method and device for IaaS cloud architecture, and container

Info

Publication number: CN115334018A
Authority: CN (China)
Prior art keywords: pod, service, network, network card, tbway
Legal status: Pending
Application number: CN202210970313.4A
Other languages: Chinese (zh)
Inventors: 李垚峰, 谭友章, 邱鹏
Current Assignee: Pacific Insurance Technology Co., Ltd.
Original Assignee: Pacific Insurance Technology Co., Ltd.
Application filed by Pacific Insurance Technology Co., Ltd.
Priority to: CN202210970313.4A
Publication of: CN115334018A

Classifications

    • H04L 49/15: Packet switching elements; interconnection of switching modules
    • H04L 47/76: Traffic control in data switching networks; admission control; resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or by the network in response to changing network conditions
    • H04L 47/78: Admission control; architectures of resource allocation
    • H04L 49/20: Packet switching elements; support for services

Abstract

The invention provides a container control method for an Openstack-based IaaS cloud architecture, which comprises the following steps: a. receiving request information for creating a POD; b. creating a sandbox based on the request information; c. the TBway plug-in calls the TBwayd service to apply for network resources; d. the TBwayd service determines the parameters corresponding to the network resources; e. the TBwayd service applies to the OpenStack infrastructure platform to allocate network resources for the POD and obtains the corresponding network resources; and f. the TBway plug-in sets the network resources in the network namespace of the POD through ip link. The invention also provides a container for an Openstack-based IaaS cloud architecture. The method and device provide an effective way of controlling and managing network resources in an IaaS cloud architecture.

Description

Openstack-based container control method and device for IaaS cloud architecture and container
Technical Field
The present invention relates to network management systems, and in particular to a container control method and a corresponding container for an Infrastructure as a Service (IaaS) cloud architecture based on Openstack.
Background
In recent years, more and more companies are actively embracing cloud computing, and businesses traditionally operated by these companies are increasingly deployed on cloud infrastructure in containerized form. As companies and their businesses expand, IT applications become more and more complex and the number of containers grows accordingly, which is why the orchestration engine Kubernetes (K8s for short) for managing and operating containers emerged. K8s is an orchestration and management tool for portable containers, created for container services. It is a major breakthrough and innovation in the development of the container technology field, but the container network remains one of the more complex and painful aspects of K8s.
Service complexity means the network must satisfy many service-level requirements, which usually involve various complex scenarios: calls between services inside a container cluster, calls between services on different nodes, calls between services inside the cluster and services outside it, and so on. Because of some traditional service needs, it is often necessary for the network of the containerized application and the network of the virtual machine infrastructure to be on the same layer; from the application's point of view there should be no network difference, at minimum, between using a virtual machine and using a container. In addition, some conventional applications require that the IP remain unchanged after a container is destroyed and rebuilt, and that the container be created with a specified IP when deployed. It is difficult for a general container network scheme to satisfy all of these requirements.
In terms of networking, different enterprises and users may, for various reasons, use different network solutions to meet specific environmental and business requirements, and no perfect, universal solution exists at present. Thus there are many different network schemes in the industry, open source and commercial, varying in interface and usage. The open-source K8s ecosystem has numerous Container Network Interface (CNI) plug-ins, the main ones in use being:
Flannel: the most mature and simplest choice, but it only provides basic functions.
Calico: good performance and the greatest flexibility; currently the enterprise-level mainstream, it depends on the Border Gateway Protocol (BGP).
Canal: integrates the network layer provided by Flannel with the network policy functions of Calico.
Weave: its unique feature is simple encryption of the entire network, which increases network overhead.
Terway: an Alibaba Cloud open-source CNI plug-in based on its Virtual Private Cloud (VPC), supporting VPC and Elastic Network Interface (ENI) modes.
These plug-ins differ in their characteristics and usage. According to our current business scenario, there is a definite need for a network solution that can implement the following functions:
It has the basic CNI plug-in functions, being able to add a network to a container, delete the network, and so on.
It can support container rebuilds with a fixed IP.
It can support creating containers within a specified IP range.
The network of our infrastructure cloud platform can be used directly, so that the container and the virtual machine are on the same layer in use.
In particular, different enterprises and users have different service scenarios, and the capabilities required of the network plug-in at the K8s layer differ, so at present no universal solution adapts to all use cases. The various network plug-ins on the market each have their own characteristics; they suit some scenarios but often fail to meet requirements in others. Consider, for example, some CNI container network plug-ins currently popular with enterprises:
1. Flannel
Flannel implements communication between PODs (the minimum unit managed by K8s) by encapsulating the source packet inside another network packet and forwarding it through an overlay network; the overlay network can allocate an independent IP address to each POD.
1.1 Usage characteristics
When Flannel is used, a large subnet is preset, and a small subnet within it is allocated to each working node (node) for internal IP address allocation. Flannel stores this information in etcd and records the binding between each subnet and its working node, to facilitate subsequent packet forwarding. Flannel also starts a daemon process on each working node, which mainly maintains the local routing rules and the information in etcd. Flannel is relatively easy to install and configure; many common Kubernetes cluster deployment tools and many Kubernetes distributions install Flannel by default. In general, Flannel is a good choice for most users.
1.2 Disadvantages
Its functionality is simple: only basic network functions are provided, and fine-grained network access control is not supported. Users often need to configure network policies or an Access Control List (ACL) in a multi-tenant network, and Flannel does not support ACLs.
Based on Linux TUN/TAP, it uses the User Datagram Protocol (UDP) and Virtual eXtensible Local Area Network (VXLAN) to encapsulate IP packets and create an overlay network for communication between PODs on different working nodes. This involves encapsulating and decapsulating every data packet, and the performance loss is large.
The host-gw mode can instead add routes directly, using the destination host as gateway, so no extra encapsulation is needed and communication efficiency is almost the same as bare-metal direct connection. However, because Flannel can only modify the routing tables on the nodes, once other routing devices such as a layer-3 router sit between nodes, packets are discarded by those devices. The host-gw mode can therefore only be used on a layer-2 directly reachable network, which places high demands on the underlying network deployment. Moreover, because of network-storm problems such networks are typically small, and host-gw can also become problematic when there are a large number of PODs.
A POD cannot be created with a specified IP, and its IP may change after the container restarts.
Creating a POD within a specified IP range is not supported; IPs can only be allocated from the subnet segment specified at initialization.
The network capability provided by the IaaS infrastructure cannot be used directly; the specified POD subnet segment is essentially a local area network segment inside the cluster and can by nature be allocated at will.
2. Calico
Calico is another popular network option in the Kubernetes ecosystem. While Flannel is recognized as the simplest choice, Calico is known for its performance and flexibility. Calico is more fully featured: it not only provides network connectivity between hosts and PODs but also covers network security and management.
2.1 Usage characteristics
The Calico CNI plug-in wraps Calico's functionality within the CNI framework. It is a pure layer-3 network based on BGP: each machine runs the layer-3 protocol stack to guarantee layer-3 connectivity between any two containers, including containers on different hosts. Network policies are supported to implement network access control. A vRouter runs on each machine; packets are forwarded by the kernel, and firewall-like functions are implemented via iptables.
2.2 Disadvantages
The host machines are required to be on the same layer-2 network, i.e., connected to the same switch, which places certain requirements on the existing network.
The number of routes equals the number of containers, and it is very easy to exceed the processing capacity of routers, layer-3 switches, and even the worker nodes themselves, thereby limiting the expansion of the entire network.
Each working node carries a large number of iptables rules and routes, making operation, maintenance, and troubleshooting difficult.
By design it cannot support VPC; containers can only obtain IPs from the network segment configured in Calico.
The network capabilities provided by the IaaS infrastructure cannot be used directly.
Our own infrastructure cloud platform prohibits the use of BGP out of security concerns, and because Calico writes to routing tables it may affect existing networks.
3. Weave
Weave creates a mesh overlay network between the nodes in the cluster by building a software-defined network layer on top of the existing network. It appears to be a local area network, but its bottom layer actually communicates over another network.
3.1 Usage characteristics
Weave creates a bridge on each host, and each container is connected to the bridge via a veth pair. A container running the Weave router is also attached to the bridge and captures network packets through its interface on the bridge. One weaver is deployed on every host (physical machine or virtual machine) that runs Docker.
The Weave network is formed by the peer endpoints (peers) consisting of these Weave routers. Each peer has its own name plus a unique identifier that distinguishes running instances from one another; the name defaults to the MAC address and persists even if the Docker host restarts.
Every host that deploys a weaver must open firewall access on TCP and UDP port 6783 to allow control-plane and data-plane traffic to pass between weavers. The control plane consists of TCP connections established between weavers, over which handshaking and the exchange of topology information take place; this communication can be configured to be encrypted. The data plane consists of UDP connections established between Weave routers, most of which are encrypted. These connections are full duplex and can traverse firewalls.
3.2 Disadvantages
A node can join the Weave network only through weave launch or weave connect.
Weave carries container-to-container packets over UDP and completes encapsulation and decapsulation by capturing packets with pcap. In this process, packets must be copied from kernel space to user space and then processed according to a custom format, which is problematic for efficiency.
From the standpoint of performance and usage, Weave's custom container-packet encapsulation is not general enough, transmission efficiency is low, and the performance loss is large.
Cluster configuration is complex: the network topology must be built manually through the weave command line, which places a heavy burden on administrators, especially for large clusters.
A POD cannot be created with a specified IP, and its IP may change after the container restarts.
Creating a POD within a specified IP range is not supported; IPs can only be allocated from the subnet segment specified at initialization.
The network capabilities provided by the IaaS infrastructure cannot be used directly.
4. Terway
Terway is Alibaba Cloud's open-source CNI plug-in for its VPC network. It supports VPC and ENI modes; the latter lets the container network use a VPC subnet.
4.1 Usage characteristics
A POD accesses the virtual switch through an Alibaba Cloud elastic network card (ENI) to build the POD network. Therefore, in addition to selecting the node virtual switch, a virtual switch must be specified for POD communication. Different Alibaba Cloud Elastic Compute Service (ECS) instance types support different numbers of mounted elastic network cards, and the number of ENIs an ECS instance can mount, together with the number of IPs supported by a single ENI, directly determines the number of PODs that can be allocated to a node.
4.2 Disadvantages
Although the plug-in is open source, its characteristics make clear that Terway can only be used on Alibaba Cloud: it is tightly bound to Alibaba Cloud services and cannot be used on other infrastructure cloud platforms. The disadvantages are therefore evident:
The plug-in can only be used on Alibaba Cloud; the vendor lock-in is hard, and a private cloud cannot use it.
Because of the Alibaba Cloud binding, it cannot be used on an infrastructure cloud platform built with OpenStack, and a private cloud cannot use the network capability provided by OpenStack.
Creating PODs within a specified IP range is not supported.
The open-source version does not yet support fixed container IPs.
At present, the mainstream K8s plug-ins described above cannot meet the requirements of these scenarios, each for its own reasons and limitations, and no perfect, universal solution exists.
Disclosure of Invention
To address the defects of the prior art, the invention provides a container control method based on an Openstack IaaS cloud architecture, used to apply for network resources for a POD or to delete those network resources, comprising the following steps:
step a: receiving request information for creating a POD;
step b: creating a sandbox based on the request information;
step c: the TBway plug-in calls the TBwayd service to apply for network resources;
step d: the TBwayd service determines the parameters corresponding to the network resources;
step e: the TBwayd service applies to the OpenStack infrastructure platform to allocate network resources for the POD and obtains the corresponding network resources;
step f: the TBway plug-in sets the network resources in the network namespace of the POD through ip link.
Preferably, the method further comprises the following step before step a:
step i: monitoring POD events, wherein the request information for creating a POD is acquired based on the monitoring.
Preferably, the step of monitoring POD events in step i comprises the following step:
step i1: the kubelet service of a k8s cluster working node listens to the API Server service on the host (master) node, thereby listening for POD events.
Preferably, step c comprises the following steps:
step c1: kubelet calls the ADD interface of the TBway plug-in to process the parameters related to the network resources;
step c2: kubelet calls the TBwayd service through gRPC;
step c3: the TBway plug-in calls the TBwayd service to apply for the network resources.
Preferably, when the TBwayd service is called, the method further comprises the following step:
the TBwayd service authenticates to keystone and obtains a token credential.
Preferably, the token credential is stored in memory or on a server.
Preferably, step d comprises the following steps:
step d1: the TBwayd service acquires the cluster's POD information from the API Server service of k8s;
step d2: the parameters corresponding to the network resources are determined based on the POD information.
Preferably, the parameters corresponding to the network resources include an IP address or an IP address range corresponding to the POD.
Preferably, step e comprises the following steps:
step e1: the TBwayd service applies to the neutron-server service to create the virtual network card corresponding to the POD;
step e2: the TBwayd service requests the neutron-server service to mount the virtual network card on the working-node virtual machine where the POD is located.
Preferably, step e1 comprises the following steps:
step e11: the TBwayd service sends request information for creating the network resources to the neutron-server service, applying to create a virtual network card;
step e12: the neutron-server completes authentication of the request information;
step e13: the neutron-plugin and neutron-agent corresponding to the neutron-server complete the creation process based on the request information, creating a virtual network card with an IP address for the POD.
Preferably, the method further comprises the following steps after step f:
step A: receiving request information for deleting a POD;
step B: the kubelet component in Kubernetes calls the CNI plug-in to initiate a CMDDEL event;
step C: after the TBway-CNI plug-in receives the CMDDEL event, it calls the TBwayd service via gRPC;
step D: the TBwayd service acquires the POD information from K8s through the gRPC call and clears the network resources in the POD, such as routes and the veth pair devices;
step E: the TBway-CNI plug-in moves the virtual network card from the network namespace of the POD to the network namespace of the host;
step F: the TBway-CNI plug-in calls the TBwayd service through gRPC to release the newly moved virtual network card on the host.
Preferably, step F further comprises the following steps:
step F1: the TBwayd service deletes the mapping between the POD and the virtual network card;
step F2: the released virtual network card is put into the idle virtual network card queue.
Preferably, the control process for deleting network resources in the control method further comprises the following step:
the resource pool cache management object periodically cleans up unused virtual network cards in the virtual network card resource pool according to certain rules, and reallocates virtual network cards to ensure that a certain number of available virtual network cards exist in the pool.
Preferably, the control process for deleting network resources in the control method further comprises the following step:
OpenStack is invoked to unbind the virtual network card from the host, so that a virtual network card carrying a specified IP address can be bound to another node when the POD drifts to that node.
According to another aspect of the present invention, there is also provided a container control apparatus based on an Openstack IaaS cloud architecture, comprising: a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the above container control method based on an Openstack IaaS cloud architecture.
According to another aspect of the present invention, a container for an Openstack-based IaaS cloud architecture is also provided, which adopts the above control method to apply for network resources for a POD or to delete those network resources.
Preferably, the container of the Openstack-based IaaS cloud architecture comprises the following modules:
- a virtual network card creation module;
- a virtual network card deletion module;
- a virtual network card mounting module;
- a virtual network card unmounting module;
- an IP address management module;
- a virtual network card resource pool management module; and
- a configuration information file management module.
Compared with the prior art, this scheme implements a new container network plug-in, TBway, based on the CNI standard protocol. Besides implementing the basic CNI interface functions, the plug-in can specify an IP when a POD is created and guarantees that the IP does not change when the POD is rebuilt, providing powerful support for migrating traditional applications to a containerized environment. An IP range can also be specified within which the POD applies for its IP. The network used by the container is applied for from the OpenStack infrastructure IaaS platform, so the container and the virtual machine are on the same layer; and because the container network adds no extra forwarding layer and uses the IaaS network directly, performance is better.
Drawings
In order to make the aforementioned objects, features, and advantages of the present invention comprehensible, embodiments accompanied by figures are described in detail below, wherein:
fig. 1 shows a flowchart of applying for network resources in a container control method based on an Openstack IaaS cloud architecture according to an embodiment of the present invention;
fig. 2 shows a flowchart of applying for network resources in a container control method based on an Openstack IaaS cloud architecture according to another embodiment of the present invention;
fig. 3 shows a flowchart of deleting network resources in a container control method based on an Openstack IaaS cloud architecture according to still another embodiment of the present invention;
fig. 4 is a timing diagram of applying for network resources in a container control method based on an Openstack IaaS cloud architecture according to still another embodiment of the present invention;
fig. 5 is a timing diagram of deleting network resources in a container control method based on an Openstack IaaS cloud architecture according to yet another specific embodiment of the present invention;
fig. 6 shows a topology diagram of a container control method based on an Openstack IaaS cloud architecture according to another embodiment of the present invention;
fig. 7 is a topology diagram of a container control method based on an Openstack IaaS cloud architecture according to another embodiment of the present invention; and
fig. 8 is a system block diagram of a container control apparatus based on an Openstack IaaS cloud architecture according to an embodiment of the present invention.
Detailed Description
Those skilled in the art will understand that the invention provides a convenient scheme for unified management of network resources in an internal IaaS system, and provides a specific process for effectively managing network resources, creating and managing virtual network cards, and the corresponding management system.
Specifically, referring to the embodiment shown in fig. 7, a request for creating network resources, for example a requirement to create a virtual network card, can be received through the monitoring of the k8s node provided by the invention. The request to create network resources is then initiated to the OpenStack infrastructure platform through the TBway plug-in, a virtual network card is created based on the feedback of the OpenStack infrastructure platform, and finally the network card is loaded to the corresponding position, so that the virtual network card is visible in the IaaS platform. Terminals and nodes in the entire IaaS can then communicate with the virtual network card, implementing the processes of creating and deleting network resources.
Further, the embodiment shown in fig. 6 describes, from another perspective, the container of the Openstack-based IaaS cloud architecture provided by the invention: a k8s server cluster 61 communicating with the IaaS 62, where the k8s server cluster 61 comprises a k8s server host 611 and k8s server working nodes, the working nodes being a first working node 612, a second working node 613, and a third working node 614. The interaction between the k8s server host 611 and the k8s server working nodes can be seen in the working relationships of the embodiment shown in fig. 7, and TBwayd realizes the functions provided by the invention through cooperation with the k8s server cluster. Further, the k8s server cluster 61 communicates with the IaaS 62 through various interfaces, such as eniX and ethX interfaces.
Further, referring to the embodiment shown in fig. 7, the interaction flow between the k8s server host 71 and the k8s server working node 72 comprises the following steps:
1. the kubelet service 721 of the k8s server working node 72 listens to the API Server service 711 on the k8s server host 71, thereby listening for POD events;
2. the kubelet service 721 receives request information for creating a POD and creates a sandbox, i.e., the POD namespace 722, based on the request information;
3. the kubelet service 721 calls the CNI interface of the TBway plug-in 723 to process the parameters related to the network resources;
4. the TBway plug-in 723 calls the TBwayd service 724 to apply for network resources;
5. the TBwayd service 724 acquires the cluster's POD information from the API Server service 711;
6. the TBwayd service 724 determines the parameters corresponding to the network resources based on the POD information, and applies to the neutron-server 731 of the OpenStack infrastructure platform 73 to create the virtual network card corresponding to the POD.
The processing flow of the OpenStack infrastructure platform 73 is as follows:
The neutron-server 731 handles various requests through the message queue Q, interacts with the network-provider 733 and the network-database 734 through the neutron-agent 732, and the neutron-plugin 735 and the keystone 736 authentication mechanism likewise work through the message queue mechanism. Specifically, after receiving a request, the neutron-server 731 performs a security check through keystone 736 and further processes the request once the check passes. After the check passes, the neutron-server 731 informs the registered neutron-plugin 735 through the message queue Q, and the neutron-plugin 735 saves the relevant information to the network-database 734 or deletes it from the network-database 734 according to the requested operation. The neutron-agent 732 running on each OpenStack network node is then informed, again via the message queue, to perform the specific network creation or destruction action on that node. If a network-provider 733 is used, the neutron-agent 732 issues a request to the corresponding network-provider 733 to perform the actual operation.
As shown in fig. 7, the interaction flow between the k8s server host 71 and the k8s server working node 72 further comprises step 7: the TBway plug-in 723 binds the virtual network card into the POD namespace 722 through the ip link setting.
Further, fig. 1 and fig. 4 show the flow of applying for network resources in a container control method based on an Openstack IaaS cloud architecture according to a specific embodiment of the present invention. Specifically, the method comprises the following steps:
First, step S101 is executed: the kubelet service of a k8s cluster working node listens to the API Server service on the host node, thereby listening for POD events.
Step S102 follows: request information for creating a POD is received. Through the monitoring process, the request information can be received and the subsequent steps started.
Then step S103 is executed: kubelet calls the ADD interface of the TBway plug-in to process the parameters related to the network resources.
Step S104 follows: kubelet calls the TBwayd service via gRPC (Google Remote Procedure Call).
Then step S105 is executed: the TBway plug-in calls the TBwayd service to apply for the network resources.
Step S106 follows: the TBwayd service acquires the cluster's POD information from the API Server service of k8s.
Next, step S107 is executed: the parameters corresponding to the network resources are determined based on the POD information.
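Steps S106 and S107 amount to an authenticated query against the k8s API Server. The following is a minimal sketch of such a query using the standard client-go library; the namespace, POD name, and the meaning attached to the annotations are illustrative assumptions, not the patented implementation itself.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// podAnnotations fetches a POD from the API Server and returns its annotations,
// which in a TBway-style design could carry a fixed IP or an IP range.
func podAnnotations(namespace, name string) (map[string]string, error) {
	cfg, err := rest.InClusterConfig() // TBwayd is assumed to run inside the cluster
	if err != nil {
		return nil, err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("query POD %s/%s: %w", namespace, name, err)
	}
	return pod.Annotations, nil
}
```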
Step S108 follows: the TBwayd service applies to the neutron-server service to create the virtual network card corresponding to the POD.
Finally, step S109 is executed: the TBwayd service requests the neutron-server service to mount the virtual network card on the working-node virtual machine where the POD is located.
Further, step S109 is followed by step S110: the TBway plug-in sets the network resources in the network namespace of the POD via ip link. Step S110 is not shown in fig. 1.
Those skilled in the art understand that the TBway plug-in and the TBwayd service are independently developed services built on open-source code, which together implement the processes of applying for and deleting network resources.
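Since the patent only states that the plug-in forwards its work to TBwayd over gRPC, the following is a hedged sketch of what such a call could look like; the socket path and the service and message names (TBwayBackend, AllocIPRequest) are hypothetical.

```go
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// callTBwayd dials the local daemon and would issue the network-resource
// request; the generated stub is left as comments because the .proto is not
// part of the published document.
func callTBwayd(podName, podNamespace string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// TBwayd is assumed to listen on a local unix socket on each working node.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/tbway/tbwayd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	// With a generated stub (hypothetical .proto) the call would look like:
	//   client := pb.NewTBwayBackendClient(conn)
	//   _, err = client.AllocIP(ctx, &pb.AllocIPRequest{PodName: podName, PodNamespace: podNamespace})
	_ = podName
	_ = podNamespace
	return nil
}
```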
Further, those skilled in the art will understand that the above processes of fig. 1 and fig. 4 can be summarized as the following steps:
step i: monitoring POD events;
step a: receiving request information for creating a POD;
step b: creating a sandbox based on the request information;
step c: the TBway plug-in calls the TBwayd service to apply for network resources;
step d: the TBwayd service determines the parameters corresponding to the network resources;
step e: the TBwayd service applies to the OpenStack infrastructure platform to allocate network resources for the POD and obtains the corresponding network resources; and
step f: the TBway plug-in sets the network resources in the network namespace of the POD through ip link.
Through the operation flows shown in fig. 1 and fig. 4, the network resources corresponding to the request can be created. In a variation of this implementation, the request information may carry special requirements such as an IP address or IP name, and the TBwayd service accordingly creates a virtual network card that satisfies those requirements.
Further, in the embodiments shown in fig. 1 and fig. 4, when the TBwayd service is called, the method further comprises the following step: the TBwayd service authenticates to keystone and obtains a token credential. In a further embodiment, the token credential is stored in memory or on a server. Those skilled in the art understand that this authentication process increases the security of the IaaS system provided by the invention.
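By way of illustration, the sketch below shows how a daemon could authenticate to keystone and keep the token credential fresh in memory using the gophercloud SDK, assuming the 40-minute refresh interval described in the detailed flow later; the credential values themselves would come from administrator-injected configuration, and this is a sketch rather than the patented code.

```go
package main

import (
	"log"
	"time"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
)

// keepAuthenticated performs the initial keystone authentication and then
// refreshes the token periodically so later requests can reuse it directly.
func keepAuthenticated(opts gophercloud.AuthOptions) *gophercloud.ProviderClient {
	provider, err := openstack.AuthenticatedClient(opts) // token kept in memory
	if err != nil {
		log.Fatalf("keystone authentication failed: %v", err)
	}
	go func() {
		for range time.Tick(40 * time.Minute) { // assumed refresh interval
			if err := provider.Reauthenticate(provider.Token()); err != nil {
				log.Printf("token refresh failed: %v", err)
			}
		}
	}()
	return provider
}
```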
Further, in another embodiment, referring to fig. 2 and fig. 7, step S108 in fig. 1 preferably comprises the following steps:
step S1081: the TBwayd service sends request information for creating the network resources to the neutron-server service, applying to create a virtual network card;
step S1082: the neutron-server completes authentication of the request information;
step S1083: the neutron-plugin and neutron-agent corresponding to the neutron-server complete the creation process based on the request information, creating a virtual network card with an IP address for the POD.
The above embodiment shows the workflow between the TBwayd service and the Openstack base platform in the control method provided by the invention; the Openstack base platform is adapted to the TBwayd service and responds to it.
Further, fig. 3, fig. 5, and fig. 7 show the flow of deleting network resources in a container control method based on an Openstack IaaS cloud architecture according to still another specific embodiment of the present invention, specifically comprising the following steps:
step S201: receiving request information for deleting a POD;
step S202: the kubelet component in Kubernetes calls the CNI plug-in to initiate a CMDDEL event;
step S203: after the TBway-CNI plug-in receives the CMDDEL event, it calls the TBwayd service via gRPC;
step S204: the TBwayd service acquires the POD information from K8s through the gRPC call and clears the network resources in the POD, such as routes and the veth pair devices;
step S205: the TBway-CNI plug-in moves the virtual network card from the network namespace of the POD to the network namespace of the host;
step S206: the TBway-CNI plug-in calls the TBwayd service through gRPC to release the newly moved virtual network card on the host.
Those skilled in the art understand that through the above process, the deletion of network resources in the container control method for the Openstack-based IaaS cloud architecture is completed. The process follows essentially the same principle as the flow shown in fig. 1 and is not described in further detail.
Further, step S206 comprises the following steps:
step F1: the TBwayd service deletes the mapping between the POD and the virtual network card;
step F2: the released virtual network card is put into the idle virtual network card queue.
In another variation, the control process for deleting network resources in the control method preferably further comprises the following step:
the resource pool cache management object periodically cleans up unused virtual network cards in the virtual network card resource pool according to certain rules, and reallocates virtual network cards to ensure that a certain number of available virtual network cards exist in the pool.
Preferably, the control process for deleting network resources in the control method further comprises the following step:
OpenStack is invoked to unbind the virtual network card from the host, so that a virtual network card carrying a specified IP address can be bound to another node when the POD drifts to that node.
Further, referring to the embodiments shown in fig. 4 to fig. 7, fig. 4 shows the sequence of creating network resources. Specifically, creating network resources requires interaction among the TBway plug-in 41, the TBwayd service 42, the k8s server 43, the TBway-DB (TBway database) 44, the virtual network card manager 45, the virtual network card resource pool 46, and OpenStack 47. In fig. 4, solid arrows indicate requests and dashed arrows indicate responses. In this embodiment, the TBway plug-in 41 requests network resources through a gRPC call; accordingly, the TBwayd service 42 issues a request to the k8s server 43 to query the POD information, and specifically queries the TBway-DB 44 whether the POD exists and obtains the corresponding feedback. If a fixed IP exists, a request to create the corresponding network resources is issued directly to OpenStack 47. If no fixed IP exists, a request to allocate a virtual network card is sent to the virtual network card manager 45 to create the card: the virtual network card resource pool 46 judges whether the POD's previous virtual network card has been recycled; if not, that card is returned directly. If it has been recycled, the pool judges whether an idle virtual network card exists; if so, the idle card is returned, and if not, OpenStack 47 is asked to create a virtual network card, which is returned to the TBwayd service. After that, the virtual network card is set into the POD namespace.
Further, fig. 5 shows a timing diagram of deleting network resources. In this embodiment, the TBway plug-in may be a TBway-CNI plug-in; as shown in fig. 5, deleting network resources requires interaction among the TBway-CNI plug-in 51, the TBwayd service 52, the k8s server 53, the TBway-DB 54, the virtual network card manager 55, the virtual network card resource pool 56, and OpenStack 57. Solid arrows represent requests and dashed arrows represent responses. Specifically, in this embodiment the TBway-CNI plug-in issues a gRPC call requesting release of the network resources; accordingly, the TBwayd service 52 sends a request to the k8s server 53 to query the POD information and deletes the virtual network card from the POD namespace. It then requests release of the virtual network card via gRPC; the network card information is deleted from the TBway-DB 54 accordingly, and OpenStack 57 is called to finally delete it.
In the embodiment shown in fig. 5, the virtual network card manager 55 periodically checks whether idle virtual network cards exist and whether there is POD information not bound to an allocated virtual network card, and accordingly places the idle virtual network cards into the virtual network card resource pool 56.
Fig. 8 is a system block diagram of a container control apparatus based on an Openstack IaaS cloud architecture according to an embodiment of the present invention. Referring to fig. 8, the container control apparatus 800 may include an internal communication bus 801, a processor 802, a Read Only Memory (ROM) 803, a Random Access Memory (RAM) 804, and a communication port 805. When used on a personal computer, the container control apparatus 800 may also include a hard disk 806. The internal communication bus 801 enables data communication among the components of the container control apparatus 800. The processor 802 performs determinations and issues prompts; in some embodiments, the processor 802 may consist of one or more processors. The communication port 805 enables data communication between the container control apparatus 800 and the outside; in some embodiments, the container control apparatus 800 may send and receive information and data from a network through the communication port 805. The container control apparatus 800 may also include various forms of program storage units and data storage units, such as the hard disk 806, the Read Only Memory (ROM) 803, and the Random Access Memory (RAM) 804, capable of storing various data files used for computer processing and/or communication, and possibly program instructions executed by the processor 802. The processor executes these instructions to implement the main parts of the method, and the results are communicated to the user device through the communication port and displayed on the user interface.
The above-described method may be implemented as a computer program stored on the hard disk 806 and loaded into the processor 802 for execution, so as to implement the container control method of the present application.
Further, those skilled in the art understand that, in another embodiment, a container based on an Openstack IaaS cloud architecture is also provided, which preferably applies for or deletes network resources for a POD using the embodiments shown in fig. 1 to fig. 7 described above. Those skilled in the art understand that, preferably, the container of the Openstack-based IaaS cloud architecture comprises the following modules:
- a virtual network card creation module;
- a virtual network card deletion module;
- a virtual network card mounting module;
- a virtual network card unmounting module;
- an IP address management module;
- a virtual network card resource pool management module; and
- a configuration information file management module.
Correspondingly, the functional implementation of each module may refer to the embodiments shown in fig. 1 to fig. 7 and the corresponding description, and is not repeated.
Further, as understood by those skilled in the art, in a preferred embodiment the CNI plug-in interface is implemented according to the CNI specification standard for container network plug-ins, realizing three interfaces: adding a network card, releasing a network card, and reporting the CNI plug-in information and version. This module is a standard implementation of these three interfaces, as sketched below.
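As a minimal sketch (not the TBway source itself), the three interfaces can be wired up with the official CNI skel package as follows; the TBway-specific bodies are placeholders.

```go
package main

import (
	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// cmdAdd handles the "add network card" interface; a real TBway build would
// parse args.StdinData and call the TBwayd service over gRPC here.
func cmdAdd(args *skel.CmdArgs) error {
	result := &current.Result{CNIVersion: current.ImplementedSpecVersion}
	return types.PrintResult(result, current.ImplementedSpecVersion)
}

// cmdDel handles the "release network card" interface.
func cmdDel(args *skel.CmdArgs) error { return nil }

// cmdCheck is required by newer CNI spec versions.
func cmdCheck(args *skel.CmdArgs) error { return nil }

func main() {
	// version.All advertises the supported CNI spec versions (the third interface).
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "TBway-style CNI plug-in sketch")
}
```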
Those skilled in the art understand that the TBwayd service provided by the invention preferably implements mainly the following functions:
1. creating a virtual network card from OpenStack;
2. deleting the virtual network card;
3. mounting the virtual network card;
4. unmounting the virtual network card;
5. keeping a POD's fixed IP unchanged across rebuilds;
6. assigning a network to the container within a specified IP range;
7. managing the virtual network card resource pool;
8. configuration information file management functions, and the like.
However, the functions of the TBwayd service are not limited to the above functional points and implementations.
The overall architecture of the invention is shown in fig. 7. A kubelet process on each working node monitors the k8s API Server service, watches POD events, and senses changes in the POD life cycle, then determines the next action to take; this is the standard k8s flow. The self-developed TBway plug-in is divided into two parts. The TBway plug-in is deployed and runs on every k8s working node; based on the CNI standard it implements the ADD and DEL interfaces for kubelet to call, and after processing the parameters it calls the TBwayd service to carry out the specific logic of applying for a network, deleting a network, and so on. TBwayd is the main business-logic service and applies to the OpenStack infrastructure cloud platform to handle network resources. The self-developed functions, such as the POD fixed IP and assigning an IP range for PODs to use, are all implemented in TBwayd.
TBwayd sends requests to the neutron-server service of the OpenStack platform to acquire or delete the POD's network resources. During the request, a security check is performed through keystone, and further processing occurs after the check passes. After receiving the request, the neutron-server informs the registered neutron-plugin through a message queue, and the neutron-plugin saves the relevant information to the OpenStack network database or deletes it from the database according to the requested operation. The neutron-agent running on each OpenStack network node is then informed, still via the message queue, to perform the specific network creation or destruction action on that node. If a network-provider is used, the neutron-agent issues a request to the corresponding network-provider device to perform the actual operation.
The following takes creating a POD as an example to describe the specific processing steps; deleting a POD is basically the same, except that one applies for resources and the other releases them:
1. The kubelet service of a k8s cluster working node monitors the API Server service on the host (master) node and watches POD events, sensing when a user initiates POD creation;
2. a sandbox is created through the standard k8s flow;
3. when a POD network resource application is involved, kubelet calls the ADD interface of the TBway plug-in to process some parameters, and then calls TBwayd through gRPC for the specific processing;
4. the TBway plug-in calls TBwayd to apply for network resources;
5. the TBwayd service simultaneously acquires the cluster's POD information, including annotation information, from the API Server service of k8s. TBwayd reads this annotation information: the administrator can specify the IP or IP range of the POD to be created through annotations in the yaml file, and TBwayd parses it into IP information and passes it on to the neutron-server of OpenStack. If the administrator specifies a range, the code randomly selects one IP from the range for a single-copy POD and passes it on; for a multi-copy POD it assigns IPs from the range to the replicas respectively and passes them on. If the administrator specifies an IP range but there are more copies than given IPs, an error is raised directly (this assignment rule is sketched below). If the administrator does not specify an IP, the process is a normal application, and OpenStack finally allocates an IP to the network card automatically.
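The replica assignment rule in step 5 can be sketched as follows, under the assumption that the administrator-specified range has already been expanded into a plain list of addresses; the function name and shapes are illustrative.

```go
package main

import (
	"fmt"
	"math/rand"
)

// assignIPs picks distinct IPs from the given range for each replica, erroring
// out when replicas outnumber the available addresses, as described in step 5.
func assignIPs(ipRange []string, replicas int) ([]string, error) {
	if replicas > len(ipRange) {
		return nil, fmt.Errorf("replicas (%d) exceed available IPs (%d)", replicas, len(ipRange))
	}
	// Shuffle a copy, then take the first N addresses so replicas get distinct IPs;
	// for a single-copy POD this amounts to one random pick from the range.
	shuffled := append([]string(nil), ipRange...)
	rand.Shuffle(len(shuffled), func(i, j int) { shuffled[i], shuffled[j] = shuffled[j], shuffled[i] })
	return shuffled[:replicas], nil
}
```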
6. After TBwayd has processed the network information, it applies to the OpenStack infrastructure platform to allocate network resources for the POD. In a preferred embodiment, this is achieved by the following steps:
6.1 Applying for a virtual network card. Requesting OpenStack to allocate network resources actually comprises multiple steps; the following is the operating logic on the OpenStack platform:
6.1.1 First, TBwayd sends a network-creation request to the neutron-server service, applying to create a virtual network card;
6.1.2 On the OpenStack platform, all incoming requests must be authenticated, so after receiving the request the neutron-server first authenticates against the keystone service. The OpenStack user credentials needed for authentication are injected by the administrator into our code through a ConfigMap. For simplicity, the token credential is acquired and stored in memory when the TBwayd service initializes and starts, and the token is refreshed periodically every 40 minutes; other requests can therefore take the authenticated token directly and proceed with request processing (see the keystone sketch given earlier).
6.1.3 After receiving the request to create the virtual network card, the neutron-server informs the registered neutron-plugin through a message queue.
6.1.4 The neutron-plugin stores the network information to be created in the corresponding database and informs the neutron-agent running on the network node through the message queue.
6.1.5 After receiving the information, the neutron-agent also performs some database processing and then creates the network card device on the network node; if a provider network is used, it applies to the provider device to create the network card device.
After the OpenStack network card creation request succeeds, the result is a virtual network card with the IP specified by the administrator, or, when no IP was specified, a network card with an IP automatically allocated by OpenStack.
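For illustration, the port-creation request of step 6.1 could look like the following gophercloud (Neutron networking v2) call; the network ID, subnet ID, port name, and the optional fixed IP are assumed inputs, and this is a sketch rather than the patented code.

```go
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/ports"
)

// createPort asks neutron-server to create a port (the virtual network card),
// pinning the administrator-specified IP when one is given.
func createPort(netClient *gophercloud.ServiceClient, networkID, subnetID, podIP string) (*ports.Port, error) {
	opts := ports.CreateOpts{
		NetworkID: networkID,
		Name:      "tbway-pod-port", // hypothetical naming convention
	}
	if podIP != "" { // administrator-specified fixed IP; otherwise OpenStack allocates one
		opts.FixedIPs = []ports.IP{{SubnetID: subnetID, IPAddress: podIP}}
	}
	return ports.Create(netClient, opts).Extract()
}
```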
6.2 Mounting the virtual network card.
The network card applied for in step 6.1 is still an independent network resource on the IaaS platform and cannot yet be used at the POD layer. The virtual network card must also be mounted on the virtual machine of the working node where the POD is located. The mounting process is likewise initiated by TBwayd, which sends a service request to the neutron-server; the flow is almost the same as above, differing only in the specific resource operations. After mounting succeeds, an additional network card appears on the virtual machine of the working node where the POD is located, as sketched below.
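A hedged sketch of this mounting step using Nova's interface-attach API in gophercloud follows; serverID, the OpenStack ID of the working-node virtual machine, is an assumed input. The patent routes the mount request through the neutron-server, so attaching through the compute API is one plausible realization, not the confirmed one.

```go
package main

import (
	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/attachinterfaces"
)

// attachPort mounts the port created in step 6.1 onto the working-node VM,
// after which an additional network card appears inside that VM.
func attachPort(computeClient *gophercloud.ServiceClient, serverID, portID string) error {
	_, err := attachinterfaces.Create(computeClient, serverID, attachinterfaces.CreateOpts{
		PortID: portID, // the virtual network card created in step 6.1
	}).Extract()
	return err
}
```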
Once the network card has been added on the working-node virtual machine where the POD is located, the TBway plug-in takes over the subsequent processing and sets the network card into the network namespace of the POD through ip link, i.e., binds the network card to the POD. At this point the working node where the POD is located can no longer see the network card, but the network card is visible inside the POD. A sketch of this move follows.
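The final binding is equivalent to ip link set <dev> netns <ns>. Below is a sketch using the vishvananda/netlink and netns libraries, where ifName and nsPath are assumed inputs.

```go
package main

import (
	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// moveLinkToPod moves a host-visible NIC into the POD's network namespace.
func moveLinkToPod(ifName, nsPath string) error {
	link, err := netlink.LinkByName(ifName) // the NIC visible on the working node
	if err != nil {
		return err
	}
	podNS, err := netns.GetFromPath(nsPath) // e.g. /proc/<pid>/ns/net of the sandbox
	if err != nil {
		return err
	}
	defer podNS.Close()
	// After this call the card disappears from the node and appears in the POD.
	return netlink.LinkSetNsFd(link, int(podNS))
}
```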
Through the above processing steps, the network capability provided by the OpenStack infrastructure platform can be used directly at the POD level, and the POD and the virtual machine are on the same plane. Meanwhile, during processing, the specified IP passed in through the annotation information is parsed, which determines whether a specified IP is requested from OpenStack or the OpenStack platform assigns one when the network information is applied for. When a POD is rebuilt, the code first confirms with the OpenStack platform whether the network card with the specified IP exists and is idle; if it is idle it is used directly, otherwise the user is informed, thereby satisfying the requirement that the POD's fixed IP remain unchanged.
When a POD is deleted, the flow is basically consistent with creation, except that the TBway plug-in first unbinds the network card from the POD, after which TBwayd applies to OpenStack to unmount the network card. For example, in a preferred embodiment, deleting network resources may be accomplished as follows:
1. When a POD is deleted, the kubelet component in Kubernetes calls the CNI plug-in to initiate a CMDDEL event;
2. after the TBway-CNI plug-in receives the CMDDEL event, it calls the TBwayd back-end service via gRPC;
3. TBwayd acquires the POD information from K8s through the gRPC call and clears the network resources in the POD, such as routes and the veth pair devices;
4. the TBway-CNI plug-in moves the virtual network card from the network namespace of the POD to the network namespace of the host;
5. the TBway-CNI plug-in calls the TBwayd service through gRPC again to release the newly moved virtual network card on the host. Releasing the network card in TBwayd comprises the following operations:
5.1 TBwayd deletes the mapping between the POD and the virtual network card;
5.2 the released virtual network card is put into the idle virtual network card queue;
5.3 the resource pool cache management object periodically cleans up unused network cards in the virtual network card resource pool according to certain rules, and reallocates virtual network cards to ensure that a certain number of cards exist in the pool (a sketch of this housekeeping follows this list);
5.4 for a specified-IP scenario, after the virtual network card is placed in the resource pool, an OpenStack call is also initiated immediately to unbind the virtual network card from the host, so that when the POD drifts to another node, the virtual network card carrying the IP address can be bound to that node.
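The housekeeping of step 5.3 can be sketched as a periodic goroutine over an assumed pool structure; the water marks, interval, and callback shapes are illustrative assumptions.

```go
package main

import (
	"sync"
	"time"
)

// NICPool caches idle virtual network cards between POD deletions and creations.
type NICPool struct {
	mu   sync.Mutex
	idle []string // IDs of unused virtual network cards
	min  int      // keep at least this many available
	max  int      // release cards beyond this many
}

// housekeep periodically trims the pool above the high-water mark and
// replenishes it below the low-water mark, mirroring step 5.3.
func (p *NICPool) housekeep(interval time.Duration, alloc func() string, release func(string)) {
	for range time.Tick(interval) {
		p.mu.Lock()
		for len(p.idle) > p.max { // too many idle cards: return them to OpenStack
			id := p.idle[len(p.idle)-1]
			p.idle = p.idle[:len(p.idle)-1]
			release(id)
		}
		for len(p.idle) < p.min { // too few: pre-create to speed up POD startup
			p.idle = append(p.idle, alloc())
		}
		p.mu.Unlock()
	}
}
```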
From the above analysis, the capabilities provided by the various plug-ins each have their own characteristics. Enterprises and users can select different plug-ins according to the requirements of their own business scenarios. For our current business scenarios, a network plug-in is required to implement the following functions:
1. it has the basic CNI plug-in functions and can add a network to a container, delete the network, and so on;
2. it supports container rebuilds with a fixed IP;
3. it supports creating containers within a specified IP range;
4. the network provided by the OpenStack infrastructure (IaaS) platform can be used directly, and the container and the virtual machine are on the same layer in use.
The mainstream K8s plug-ins described above cannot fully satisfy these four functions, each for its own reasons and limitations. Aiming at these pain points, the container network solution based on an OpenStack IaaS cloud architecture is provided to meet the business requirements.
The scheme is based on the CNI standard protocol and implements a new container network plug-in, TBway. Besides implementing the basic CNI interface functions, the plug-in can specify an IP when creating a POD and guarantee that the IP does not change on rebuild; it can also specify an IP range within which the POD applies for an IP. The network used by the container is applied for from the OpenStack infrastructure IaaS platform, so the container and the virtual machine are on the same layer, and from the application layer there is little difference between using the two.
To achieve maximum flexibility, the configuration information required for plug-in initialization is injected through native K8s facilities such as ConfigMap and yaml files. The administrator can flexibly configure and interface with different IaaS environments and can specify the IaaS network segment used by the entire container network. The administrator can also specify the fixed IP of a POD through the configuration file; a sketch of such a configuration structure follows.
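For illustration only, the injected configuration could be modeled as a Go structure that a ConfigMap-mounted file is unmarshalled into; every field name here is an assumption rather than the patented format.

```go
package main

// TBwayConfig models the administrator-injected plug-in configuration
// (hypothetical field names, not the patented format).
type TBwayConfig struct {
	AuthURL     string            `json:"authURL"`     // keystone endpoint of the IaaS platform
	Username    string            `json:"username"`    // OpenStack credentials
	Password    string            `json:"password"`
	ProjectName string            `json:"projectName"`
	NetworkID   string            `json:"networkID"`   // IaaS network segment used by the container network
	SubnetID    string            `json:"subnetID"`
	FixedIPs    map[string]string `json:"fixedIPs"`    // optional POD-name to fixed-IP assignments
}
```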
The plug-in is based on the CNI protocol, its design largely follows the mainstream plug-in architecture, and it can be extended well.
In theory, the networks provided by other cloud platforms could be used instead of the OpenStack infrastructure IaaS, but another set of interfaces for mounting and unmounting network cards would have to be implemented against the specific cloud platform; the design is flexible in this respect.
Obviously, this solution was developed only because the functions of the other mainstream network plug-in schemes could not completely meet our actual needs. Its new functional points are advantages that the other plug-ins do not have, and the solution also borrows some design concepts from the mainstream plug-ins:
a POD's fixed IP remains unchanged;
network resources can be applied for a container within a specified IP range;
the network capability provided by the open-source OpenStack-based cloud platform can be used directly, without ecosystem lock-in to any cloud vendor;
IaaS environment information, network information and the like can be injected through configuration, which an administrator can adapt flexibly to actual needs;
security authentication based on the OpenStack platform is supported: applying for network resources for a POD requires passing the OpenStack platform's request authorization.
The key point is that the network capability provided by OpenStack is used directly: containers and virtual machines share the same network at the platform level and sit on the same layer, and because the container network uses the IaaS network directly rather than an extra forwarding layer, performance is better. In addition, the POD IP can be fixed as required, which is especially important for some traditional financial applications and provides strong support for migrating traditional applications into containerized environments.
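To make the overall ADD flow concrete, the Go sketch below strings the described steps together: ask the TBwayd daemon for a virtual network card, then move the card into the POD's network namespace and configure it via ip link. The daemon call, interface name and address are placeholders, not the actual TBway code:

```go
package tbway

import (
	"context"
	"fmt"
	"os/exec"
)

// allocatedNIC is what the daemon hands back; the fields are assumptions.
type allocatedNIC struct {
	HostIfName string // interface name as it appears on the host
	CIDR       string // e.g. "192.168.10.5/24"
}

// tbwaydAllocate stands in for the gRPC client call to the TBwayd daemon,
// which in turn asks the OpenStack platform for a port. Placeholder only.
func tbwaydAllocate(ctx context.Context, podName string) (*allocatedNIC, error) {
	return &allocatedNIC{HostIfName: "tbway-eth0", CIDR: "192.168.10.5/24"}, nil
}

// cmdAdd sketches the ADD flow: ask TBwayd for a virtual network card, then
// move it into the POD's network namespace with `ip link` and configure it
// there. netnsName is a named namespace under /var/run/netns (the runtime
// actually passes a full path; resolving it is elided here).
func cmdAdd(ctx context.Context, podName, netnsName string) error {
	nic, err := tbwaydAllocate(ctx, podName)
	if err != nil {
		return fmt.Errorf("tbwayd allocate: %w", err)
	}
	cmds := [][]string{
		// move the host-side interface into the POD's namespace
		{"ip", "link", "set", nic.HostIfName, "netns", netnsName},
		// bring it up and assign the IP allocated by the IaaS platform
		{"ip", "netns", "exec", netnsName, "ip", "link", "set", nic.HostIfName, "up"},
		{"ip", "netns", "exec", netnsName, "ip", "addr", "add", nic.CIDR, "dev", nic.HostIfName},
	}
	for _, c := range cmds {
		if out, err := exec.CommandContext(ctx, c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v (%s)", c, err, out)
		}
	}
	return nil
}
```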
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (16)

1. A container control method based on an Openstack IaaS cloud architecture, for applying for network resources for a POD or deleting the network resources, characterized by comprising the following steps:
step a: receiving request information for creating a POD;
step b: creating a sandbox based on the request information;
step c: the TBway plug-in calls the TBway service to apply for network resources;
step d: the TBwayd service determines parameters corresponding to the network resources;
step e: the TBwayd service applies to an OpenStack infrastructure platform to allocate network resources for the POD, and obtains corresponding network resources;
step f: the TBway plug-in sets the network resources in the network namespace of the POD through the ip link command.
2. The control method according to claim 1, characterized by further comprising, before the step a, the steps of:
step i: monitoring a POD event, wherein the request information for creating the POD is acquired based on the monitoring.
3. The control method according to claim 2, wherein said step of listening for POD events in said step i comprises the steps of:
step i1: the kubelet service on a k8s cluster worker node listens to the API Server service on the host (master) node, thereby listening for the POD event.
4. The control method according to claim 1, wherein the step c includes the steps of:
step c1: the kubelet calls the ADD interface of the TBway plug-in to process the parameters related to the network resources;
step c2: the kubelet calls the TBwayd service through gRPC;
step c3: the TBway plug-in calls the TBwayd service to apply for the network resources.
5. The control method according to any one of claims 1 to 4, further comprising, when the TBwayd service is called, the steps of:
the TBwayd service authenticates to Keystone and obtains a token credential.
6. The control method according to claim 5, characterized in that the token credential is saved in a memory or on a server.
7. The control method according to claim 1, wherein the step d includes the steps of:
step d1: the TBwayd service acquires the POD information of the cluster from the k8s API Server service;
step d2: determining the parameters corresponding to the network resources based on the POD information.
8. The method according to claim 1, wherein the parameter corresponding to the network resource comprises an IP address or an IP address range corresponding to the POD.
9. The control method according to claim 1, wherein the step e includes the steps of:
step e1: the TBwayd service applies for creating a virtual network card corresponding to the POD to a neutron-server service;
step e2: the TBwayd service requests the neutron-server service to mount the virtual network card to the virtual machine of the working node where the POD is located.
10. The control method according to claim 9, wherein the step e1 includes the steps of:
step e11: the TBwayd service sends request information for creating network resources to the neutron-server service to apply for creating a virtual network card;
step e12: the neutron-server completes the authentication processing of the request information;
step e13: the neutron-plugin and the neutron-agent corresponding to the neutron-server complete the creation process based on the request information, and create a virtual network card with an IP address for the POD.
11. The control method according to claim 1, further comprising, after the step f, the steps of:
step A: receiving request information for deleting PODs;
step B: the kubelet component in Kubernetes calls the CNI plug-in to initiate a CmdDel event;
step C: after the TBway-CNI plug-in receives the CmdDel event, it calls the TBwayd service via gRPC;
step D: the TBwayd service acquires the POD information from K8S through the gRPC call, and clears the network resources in the POD, such as deleting routes and the veth pair devices;
step E: the TBway-CNI plug-in moves the virtual network card from the network namespace of the POD to the network namespace of the host machine;
step F: the TBway-CNI plug-in calls the TBwayd service through gRPC to release the newly moved virtual network card on the host machine.
12. The control method according to claim 11, wherein the step F further includes the steps of:
step F1: the TBwayd service deletes the mapping relation between the POD and the virtual network card;
step F2: putting the released virtual network card into an idle virtual network card queue.
13. The control method according to claim 11 or 12, characterized by further comprising the steps of:
the resource pool cache management object periodically cleans up the unused virtual network cards in the virtual network card resource pool according to a preset rule, and replenishes virtual network cards to ensure that a certain number of available virtual network cards exist in the pool.
14. The control method according to claim 13, characterized by further comprising, after said step, the step of:
OpenStack is invoked to unbind the virtual network card from the host, so that when the specified POD drifts to another node, the virtual network card carrying the IP address can be bound to that node.
15. A container control device based on an Openstack IaaS cloud architecture is characterized by comprising:
a memory for storing instructions executable by a processor; and a processor for executing the instructions to implement the method of any one of claims 1-14.
16. A container based on an Openstack IaaS cloud architecture, wherein a POD applies for or deletes network resources by using the control method according to any one of claims 1 to 13, the container comprising the following modules:
-a virtual network card creation module;
-a virtual network card deletion module;
-a virtual network card mounting module;
-a virtual network card unmounting module;
-an IP address management module;
-a virtual network card resource pool management module; and
-a configuration information file management module.
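For readability only, the modules above can be pictured as a single Go interface; the method names and signatures in this sketch are assumptions made for illustration, not the patent's actual API:

```go
package tbway

import "context"

// ContainerNetworker groups the modules enumerated in claim 16 into one
// illustrative Go interface. Names and signatures are assumptions.
type ContainerNetworker interface {
	CreateNIC(ctx context.Context, podID string) (portID string, err error) // virtual network card creation
	DeleteNIC(ctx context.Context, portID string) error                     // virtual network card deletion
	AttachNIC(ctx context.Context, portID, nodeID string) error             // mount the card on the worker-node VM
	DetachNIC(ctx context.Context, portID, nodeID string) error             // unmount the card from the worker-node VM
	AssignIP(ctx context.Context, podID, ipOrRange string) (string, error)  // IP address management
	MaintainPool(ctx context.Context) error                                 // resource pool management
	LoadConfig(path string) error                                           // configuration information file management
}
```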
CN202210970313.4A 2022-08-12 2022-08-12 Openstack-based container control method and device for IaaS cloud architecture and container Pending CN115334018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210970313.4A CN115334018A (en) 2022-08-12 2022-08-12 Openstack-based container control method and device for IaaS cloud architecture and container

Publications (1)

Publication Number Publication Date
CN115334018A true CN115334018A (en) 2022-11-11

Family

ID=83923648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210970313.4A Pending CN115334018A (en) 2022-08-12 2022-08-12 Openstack-based container control method and device for IaaS cloud architecture and container

Country Status (1)

Country Link
CN (1) CN115334018A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115801733A (en) * 2023-02-02 2023-03-14 天翼云科技有限公司 Network address allocation method and device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination