CN113821268B - Kubernetes network plug-in method fused with OpenStack Neutron - Google Patents


Info

Publication number
CN113821268B
CN113821268B (application number CN202010561627.XA; published as CN113821268A)
Authority
CN
China
Prior art keywords
network
container
kubernetes
neutron
openstack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010561627.XA
Other languages
Chinese (zh)
Other versions
CN113821268A (en)
Inventor
周峰
吕智慧
吴杰
童宇
冯晨昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202010561627.XA priority Critical patent/CN113821268B/en
Publication of CN113821268A publication Critical patent/CN113821268A/en
Application granted granted Critical
Publication of CN113821268B publication Critical patent/CN113821268B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F9/445 Program loading or initiating
    • G06F9/44521 Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F9/44526 Plug-ins; Add-ons
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention belongs to the technical field of cloud computing, and specifically relates to a Kubernetes network plug-in method integrated with OpenStack Neutron. The invention comprises the following: a container network plug-in based on Neutron is designed, which starts from the CNI container network model of Kubernetes and implements the CNI model interface on top of Neutron; using this plug-in, Kubernetes builds its container network inside a Neutron virtual network, realizing the network fusion of Kubernetes and OpenStack. On the basis of the network plug-in, the invention further provides a solution that implements Kubernetes Services with Load Balancer instances: based on the OpenStack Octavia project, a Service in Kubernetes is converted into a Load Balancer instance, which provides a stable external access entry for the back-end containers, decouples the front end from the back end, and improves the reliability of the system. The method can solve the problem of fusing network resources in application scenarios where virtual machine and container resources are combined, and ensures the resource throughput of the cloud platform under high concurrency.

Description

Kubernetes network plug-in method fused with OpenStack Neutron
Technical Field
The invention belongs to the technical field of cloud computing, relates to a container network plug-in, and particularly relates to a Kubernetes network plug-in method fused with OpenStack Neutron.
Background
With the Internet now woven into people's daily lives, cloud computing technology will continue to develop. As container technology has matured, containerization and virtualization have become widely accepted ways for a server to share its resources, and containers give operators great flexibility to construct operating-system instances on demand.
Practice shows that as the related technology develops, security becomes an urgent problem: how to guarantee data security in a cloud platform environment and prevent the information leakage and loss caused by network attacks is an important challenge facing cloud computing technology.
The prior art discloses the main functions of a large cloud computing cluster management platform: managing many distributed physical computer nodes and automatically allocating physical resources such as computing, storage and network according to user demand. For the user, the experience of using a cloud computing environment is the same as using a single large server; the difference is that the cloud environment can dynamically adjust the amount of allocated resources according to the varying demands of the user's applications, so that resources are utilized more efficiently.
As a new generation of lightweight cloud computing technology, container technology adopts operating-system-level virtualization, of which Docker is the typical representative. Docker builds on the namespace mechanism of the Linux kernel; thanks to its good encapsulation, it avoids repeated configuration of the runtime environment and the errors in which an application cannot be deployed or run normally because environments are inconsistent, and it has become almost synonymous with container technology. Unless specifically indicated, the container technology discussed below in the present application is Docker.
The advent of container virtualization technology addresses the above issues well. Docker provides a complete and independent running environment for each application, with an independent file system, while sharing the operating-system kernel; thus, while preserving isolation comparable to that of a virtual machine, it avoids the virtualization-layer performance cost that a virtual machine pays for carrying a complete guest operating system. Container-based cloud computing is becoming a mainstream technology in the industry, and a fused computing mode of container and virtual machine technology is currently being explored; it can improve the efficiency of data analysis, enhance service flexibility, and is the next direction of big-data technology development.
OpenStack and Kubernetes are currently the mainstream virtualized resource platforms, both with very active communities, but the two differ markedly: OpenStack is the representative virtual machine technology platform, while Kubernetes is the representative of containerization. Years of development show that virtual machine technology and container technology each have advantages, disadvantages and distinctive application scenarios. It is generally accepted in the industry that hypervisor-based virtual machine technology, with its independent guest operating system, lets the applications and software running on that guest share a completely independent and isolated pool of physical resources, giving a secure architecture with strong isolation; container technology, which relies on the host operating system for isolation, starts fast, costs little and is easy to deploy quickly.
Since both virtual-machine-based and container-based virtualization have their advantages, virtual machines and containers will necessarily coexist in the cloud data centers of the future. How to realize their fused deployment is a key research direction in current cloud platform construction, and how to fuse and manage network resources on such a fused deployment platform is the problem this application sets out to solve.
Disclosure of Invention
The aim of the invention is to provide, on the basis and current state of the prior art, a container network plug-in, and in particular a Kubernetes network plug-in method integrated with OpenStack Neutron.
The invention combines the inherent advantages of the OpenStack Neutron component in virtualized networking with the strengths of Kubernetes in container cluster orchestration and management, fuses them deeply, and provides a flexible and reliable network solution for a fused platform for container application deployment. Based on this network scheme, the invention can offer simple and effective deployment and management of container applications to developers and testers in the early stages of a project, and to operation and maintenance personnel after the project goes online. In the fused container/virtual machine platform, the virtual networks of both virtual machines and containers are realized by Neutron, embodying the fusion of network resources. On this basis, the Kubernetes functions of container orchestration monitoring, fault replacement, elastic scaling and load balancing can be brought into full play.
In the prior art on which this application builds, Kubernetes uses a relatively simple network architecture: each container group is allocated an IP address on the Docker bridge of its physical machine, so container groups located on the same physical machine node can communicate with each other. In a small-scale cloud computing environment this mode is usable, reducing development cost and management difficulty; but in a large-scale, complex cloud environment, a solution that depends on the physical machine network cannot meet the requirements of real projects. The application therefore proposes a design for a Kubernetes container network plug-in based on the Neutron virtual network.
Specifically, the aim of the invention is achieved by the following technical scheme:
The invention provides a container network plug-in, specifically a Kubernetes network plug-in method integrated with OpenStack Neutron, characterized in that, by combining the Neutron component of OpenStack with the container cluster orchestration and management of Kubernetes, a flexible and reliable network solution is provided for a fused platform for container application deployment; the method specifically comprises the following steps:
(1) Network fusion of virtual machines and containers
Kubernetes can take over containers created by OpenStack and publish container applications; the fusion of the two enables efficient container orchestration and scheduling;
(2) Implementing Kubernetes service discovery mechanism
Kubernetes Service objects are created based on OpenStack load balancing instances, providing the service discovery function externally;
(3) Tenant isolation
Pods under the same tenant can access each other, while Pods under different tenants are mutually invisible, ensuring the isolation of the platform network;
(4) Optimization of load balancing strategy
A dynamic load balancing strategy is realized based on the load of the back-end containers.
In the invention, the virtual machine network is fused with the container network, and the Neutron container network plug-in realizes the conversion of pod and service resource objects in Kubernetes into OpenStack resources;
The plug-in function is divided into two modules:
Control module (Controller): monitors creation, update and delete events for pod and service objects in Kubernetes, and calls the working module to perform the corresponding operations in Neutron;
Working module (Neutron CNI): calls the Neutron API to complete the creation of the container network and of services.
In the invention, the container network plug-in method realizes the Kubernetes service discovery mechanism: a Load Balancer instance replaces iptables as the carrier of the traffic-forwarding rules and their concrete implementation, which also lays a foundation for subsequent functional expansion; the Load Balancer additionally provides a virtual IP (VIP) as the entry for externally accessing the service, and the role of the endpoint is played by a port in Neutron, responsible for connecting a container to the network.
In the invention, tenant isolation utilizes the Neutron component of the OpenStack platform, which, based on the bridging functions of Open vSwitch and the Linux kernel, forms a feature-rich virtual SDN spanning the layer-2 and layer-3 networks. The Network Namespace mechanism adopted at the bottom of the Neutron network is mainly used to provide a whole set of virtual network devices for each virtual network, giving each user an independent virtual network environment.
In the invention, the load balancing strategy is optimized by dynamically planning the traffic-forwarding rules according to the real-time load of the back-end system.
The invention provides a Kubernetes network plug-in method integrated with OpenStack Neutron, comprising the following: the designed Neutron-based container network plug-in starts from the CNI container network model of Kubernetes and implements the CNI model interface on top of Neutron; using this plug-in, Kubernetes can establish its container network inside a Neutron virtual network, thereby realizing the network fusion of Kubernetes and OpenStack. On the basis of the network plug-in, the invention provides a solution that implements Kubernetes Services with Load Balancer instances: based on the OpenStack Octavia project, a Service in Kubernetes is converted into a Load Balancer instance, which provides a stable external access entry for the back-end containers, decouples the front end from the back end, and improves the reliability of the system.
The method can solve the problem of fusing network resources in application scenarios where virtual machine and container resources are combined, and ensures the resource throughput of the cloud platform under high concurrency.
Drawings
Fig. 1 is the startup flow of the Neutron CNI.
Fig. 2 is the flow by which the Neutron CNI acquires a network.
Fig. 3 is a flow chart of the Neutron CNI setting up a network.
Fig. 4 is the flow of the Neutron CNI deleting a network.
Fig. 5 is the relationship between Service and Load Balancer.
Fig. 6 is a diagram of the container network plug-in's mode of operation.
Fig. 7 is the Kubernetes update event processing flow.
Fig. 8 is a tenant isolation schematic.
FIG. 9 is the configuration file for creating a ReplicationController resource.
Fig. 10 is the result of Pod creation.
Fig. 11 is the Pod creation result under the same tenant and host.
Fig. 12 is the Ping test result under the same tenant and host.
Fig. 13 is the Pod creation result under different tenants and hosts.
Fig. 14 shows the Ping test results under different tenants and hosts.
Fig. 15 is the Pod creation result under the same tenant but different hosts.
Fig. 16 shows the Ping test results under the same tenant but different hosts.
Fig. 17 is a Service object configuration file.
Fig. 18 is the Service creation result.
Fig. 19 is the Load Balancer creation result.
Fig. 20 is the result of externally accessing the Load Balancer VIP.
Detailed Description
Example 1
The invention combines the inherent advantages of the OpenStack Neutron component in virtualized networking with the strengths of Kubernetes in container cluster orchestration and management, fuses them deeply, and provides a flexible and reliable network solution for a fused platform for container application deployment; the method comprises the following steps:
1) Virtual machine and container network fusion
The Kubernetes network adopts the CNI architecture. To solve the container network problem, the existing Neutron network service is integrated into Kubernetes in the form of a network plug-in, and the Pod network is established with Neutron. To keep the system design maintainable and upgradable, the Kubernetes ecosystem specification is followed: on the basis of the CNI network model, Neutron takes over the Kubernetes network in the form of a third-party network plug-in. The invention realizes the Neutron CNI container network plug-in; using it as the network plug-in of Kubernetes, Neutron can take over the container network of the Kubernetes cluster. Starting from the creation of a Pod, the apiserver sends the Pod-creation request to a working node; the kubelet of the working node calls the Neutron CNI container network plug-in, which finds the corresponding subnet in Neutron according to the Namespace and allocates a corresponding port for the container on that subnet, completing the establishment of the container network.
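The creation flow above can be sketched as a minimal CNI-style ADD handler. This is a hypothetical illustration, not the patent's Neutron-cni binary: the pod's namespace (passed by kubelet in the CNI_ARGS environment variable) selects a Neutron subnet, and the returned structure stands in for the real port allocation.

```python
import json

# Hypothetical sketch of the CNI ADD path described above. A real CNI plugin
# reads its network config from stdin and the operation from CNI_COMMAND;
# here the Neutron subnet lookup is faked with a static table.
def handle_cni_add(env, stdin_json):
    conf = json.loads(stdin_json)
    subnet_by_namespace = {"default": "subnet-1234"}  # stand-in for Neutron
    # CNI_ARGS looks like "K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-1"
    ns = env.get("CNI_ARGS", "").split("K8S_POD_NAMESPACE=")[-1].split(";")[0] or "default"
    subnet_id = subnet_by_namespace.get(ns)
    return {
        "cniVersion": conf.get("cniVersion", "0.1.0"),
        "ip4": {"ip": "10.0.0.5/24"},  # would come from the Neutron port
        "subnetId": subnet_id,
    }

result = handle_cni_add(
    {"CNI_COMMAND": "ADD", "CNI_ARGS": "K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-1"},
    '{"cniVersion": "0.1.0", "name": "Neutron", "type": "Neutron-cni"}',
)
print(result["subnetId"])  # -> subnet-1234
```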
In order for Kubernetes to use the Neutron container network plug-in developed in this patent correctly, the startup configuration flow of the working node's kubelet needs to be modified, as shown in fig. 1. During startup, by default, kubelet looks for available CNI plug-ins under the /opt/cni/ directory; a 10-Neutron-cni.conf file is added under this directory with the following content:
{
    "cniVersion": "0.1.0",
    "name": "Neutron",
    "type": "Neutron-cni",
    "conf": "/etc/Neutron/Neutron-cni.conf",
    "debug": true
}
When kubelet starts, it automatically searches the /opt/cni/ directory for an available CNI network plug-in, taking the configuration files in ascending string order of file name; the first file is therefore the system's 10-Neutron-cni.conf, whose conf field records the path of the detailed configuration file of the Neutron container network plug-in;
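The filename-ordering rule above can be illustrated in a few lines (the directory listing is invented for the example): the lexicographically first configuration file wins, which is why the 10- prefix places the Neutron plug-in ahead of the others.

```python
# Sketch of the selection rule described above: among the *.conf files in the
# CNI config directory, the lexicographically smallest name is chosen first.
files = ["99-loopback.conf", "20-bridge.conf", "10-Neutron-cni.conf"]
chosen = sorted(f for f in files if f.endswith(".conf"))[0]
print(chosen)  # -> 10-Neutron-cni.conf
```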
The CNI model of a container network requires a series of operations for querying, creating and deleting networks. The plug-in therefore comprises three modules: acquiring a network, setting up a network, and deleting a network;
Acquiring a network:
A Kubernetes tenant may have multiple namespaces, each of which corresponds to an OpenStack subnet. As shown in fig. 2, the interface accepts the information of a pod, returns the corresponding OpenStack subnet according to the namespace to which the pod belongs, and designates that subnet in the pod's information;
By calling the acquire-network interface, the subnet information corresponding to the namespace to which the pod belongs can be obtained. The data structure of the network information is shown in the following table:
Field       Description
Namespace   Name of the namespace
SubnetID    OpenStack subnet ID
Subnet      All network information of the subnet
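Transcribed from the table above, the record returned by the acquire-network interface might look like the following (field names and types are illustrative; the patent does not give a concrete definition):

```python
from dataclasses import dataclass

# Illustrative transcription of the network-information record above.
@dataclass
class NetworkInfo:
    namespace: str   # name of the Kubernetes namespace
    subnet_id: str   # OpenStack subnet ID
    subnet: dict     # full network information of the subnet

info = NetworkInfo("default", "subnet-1234", {"cidr": "10.0.0.0/24"})
print(info.subnet_id)  # -> subnet-1234
```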
Setting a network:
This module binds the pod to the subnet to which it belongs: it acquires the Namespace data according to the namespace information of the pod, creates a port for the pod on that subnet, and returns the network information; the subnet information corresponding to the Namespace is obtained so that the pod is bound to the designated port;
As shown in fig. 3, a port is created on the subnet corresponding to the pod, the pod is connected with the subnet, and the network setup of the pod is completed. After setup, the network information of the pod comprises the pod's IP address and the network information of the subnet corresponding to the namespace to which the pod belongs, as shown in the following table:
Deleting the network:
As shown in fig. 4, this module deletes the subnet corresponding to a namespace. Before deleting the subnet, the network information bound to it, such as Pods and ports, must be cleaned up; if uncleaned information remains, the deletion cannot continue and the module returns failure. The network resources bound to a network are thus released before it is deleted.
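A minimal sketch of the delete rule above, assuming a simple in-memory map of ports per subnet (names invented): deletion is refused while bound resources remain.

```python
# Sketch of the delete-network rule described above: a subnet may be deleted
# only after every port bound to it has been cleaned up.
def delete_subnet(subnet_id, ports_by_subnet):
    if ports_by_subnet.get(subnet_id):
        return False  # bound resources remain: refuse and report failure
    ports_by_subnet.pop(subnet_id, None)
    return True

ports = {"subnet-1": ["port-a"], "subnet-2": []}
print(delete_subnet("subnet-1", ports), delete_subnet("subnet-2", ports))  # -> False True
```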
2) Implementing Kubernetes service discovery mechanism
Comparing the concepts of Service and load balancer, a certain similarity in function and structure can be found, and this similarity is the basis on which this application chooses a load balancer to realize Service. The correspondence between the two is shown in the following table:
Kubernetes    OpenStack                                     Description
Pod           Virtual machine instance, container instance  Backend providing services
Pod network   Subnet                                        Layer-3 network
Service       Load balancer                                 Traffic forwarding and load balancing
Service IP    Virtual IP                                    External access entry
Endpoint      Port                                          Network connection
(1) The specific design is as follows:
The Neutron container network plug-in realizes the conversion of pod and service resource objects in Kubernetes into OpenStack resources. In the specific design, the function of the plug-in is divided into two modules:
Control module (Controller): monitors creation, update and delete events for pod and service objects in Kubernetes, and calls the working module to perform the corresponding operations in Neutron;
Working module (Neutron CNI): calls the Neutron API to complete the creation of the container network and of services;
As shown in FIG. 5, the Load Balancer instance replaces iptables as the carrier of the traffic-forwarding rules and their concrete implementation, which also lays a foundation for subsequent functional expansion. The Load Balancer also provides a virtual IP (VIP) as the entry for external access to the service, and the role of the endpoint is played by a port in Neutron, responsible for connecting a container to the network. The container network corresponds to a subnet in Neutron; compared with directly using the Docker container network, a network created by the Neutron network plug-in allows more customization, such as the tenant isolation function;
The operational mode of the container network plug-in combination with Kubernetes and OpenStack is shown in fig. 6;
The control module monitors the api-server of Kubernetes for Pod update and delete events, and the working module calls the Neutron API to complete network creation and configuration for the container. For Service update and delete events, the working module calls the Octavia API to create a Load Balancer and sets the corresponding load balancing rules according to the Service's configuration. After Kubernetes successfully creates a Pod, the Pod-creation information is observed through the api-server; it is sent as a POST HTTP request whose body encapsulates all the configuration information about the Pod in JSON;
After receiving the creation event, the listener extracts the container-group network configuration from the HTTP request event and puts it into a shared dictionary. The Server is a conventional WSGI server that answers CNI driver calls: when a CNI request arrives, the Server waits for the VIF object to appear in the shared dictionary; as annotations are read from the Kubernetes API and added to the registry by the Watcher thread, the Server eventually obtains the VIF that needs to be connected to the given Pod, and then waits for the VIF to become active before returning to the CNI driver. After loading the VIF object from the Pod object annotations, the network plug-in calls the ov_vif library to perform the Pod plug and unplug operations. When the plug and unplug jobs for the Pod are completed and all network plumbing is finished, control returns to kubelet; the CNI driver plugs the Pod with the port initially created by Neutron in the "stopped" state, but must change the VIF state in the Pod annotations to "active" before returning control to the caller;
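The hand-off described above, where the Server blocks until the Watcher has marked the pod's VIF active in the shared registry, can be sketched as a polling wait (registry shape, state names and timeout are illustrative):

```python
import time

# Sketch of the wait described above: the CNI server polls the shared
# registry until the VIF for the pod turns "active", then returns it.
def wait_for_active_vif(registry, pod, timeout=1.0, poll=0.01):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        vif = registry.get(pod)
        if vif and vif["state"] == "active":
            return vif
        time.sleep(poll)
    raise TimeoutError("VIF for %s never became active" % pod)

# The Watcher thread would populate this; here it is pre-filled.
registry = {"default/web-1": {"state": "active", "port": "port-42"}}
vif = wait_for_active_vif(registry, "default/web-1")
print(vif["port"])  # -> port-42
```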
(2) Start-up procedure
When the system starts, the control module must be authenticated by OpenStack and Kubernetes respectively to establish communication with each platform before the API can work normally;
First, a POST request is sent to the OpenStack Identity API to request a token; the Identity API is the Keystone authentication interface, and the user name, password and the domain to which the user belongs must be provided in the request body.
If authentication succeeds, a 200 OK response is returned whose body contains the token and its expiration time, the former in the format "id": "token" and the latter in the format "expires": "datetime". After the token is obtained, requests can be sent to a service endpoint and authenticated by attaching the token in the request header; the token only controls the user's entry into the service, regardless of the specific type of operation the user performs within it:
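For reference, a sketch of such a token request using the Keystone v3 password-authentication body (endpoint, user and domain names are invented; note that in the v3 API the issued token is returned in the X-Subject-Token response header, while subsequent service calls attach it via the X-Auth-Token request header):

```python
# Sketch of the authentication step described above, shaped as a Keystone v3
# password-auth request body. Credentials are placeholders.
def build_auth_request(username, password, domain):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            }
        }
    }

body = build_auth_request("k8s-controller", "secret", "Default")
print(body["auth"]["identity"]["methods"])  # -> ['password']
# Subsequent service calls would attach the issued token like this:
headers = {"X-Auth-Token": "gAAAAAB-placeholder-token"}
```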
After successful authentication and link establishment with Kubernetes, authorization is performed for the control module of the container network in the Kubernetes cluster. In the authorization step, the specific access objects are the paths corresponding to the various attributes of the Kubernetes API, including: the user, group, path (e.g. /api/v1, /healthz, /version, etc.) and the specific request action type (e.g. get, list, create, etc.). The apiserver compares these attribute values with a pre-configured access policy; it supports multiple authorization modes, including Node, RBAC, Webhook, etc.;
When the apiserver starts, an authorization mode can be designated, and multiple modes can be designated at once; in the latter case, a request received by the API is considered authorized as long as it passes the authorization of one of the modes. In the initial apiserver configuration of the Kubernetes cluster started in this application, the default authorization mode is "Node,RBAC": the Node authorizer is mainly used when the kubelet on each working node accesses the apiserver, and the rest is generally authorized by the RBAC authorizer;
(3) Workflow process
After startup and authentication succeed, the control module continuously monitors the api-server of Kubernetes. The monitored content is the change of pod and service resources, including the creation of new objects and the dynamic changes of existing objects. For each new Kubernetes event received by the monitor, the control module generates a separate EventHandler instance to process it; EventHandler has the unified interface shown in the following table:
Method          Description
async()         Asynchronously calls the Neutron CNI
retry()         Retries on failure
logException()  Logging interface
async():
The EventHandler calls the API interface asynchronously: every task is first submitted to a task queue, and the thread pool takes tasks from the queue and dispatches threads to process them. The pod and service resources each have a separate task queue and thread pool. The advantage of this approach is that handling a large number of concurrent requests does not cause massive thread creation and inter-thread switching, reducing the consumption of system resources;
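The per-resource queue-and-pool dispatch described above can be sketched as follows (worker counts and event strings are illustrative):

```python
import queue
from concurrent.futures import ThreadPoolExecutor

# Sketch of the dispatch scheme above: one task queue and one small thread
# pool per resource kind (pod, service), so a burst of events does not
# spawn a thread per event.
queues = {"pod": queue.Queue(), "service": queue.Queue()}
pools = {kind: ThreadPoolExecutor(max_workers=2) for kind in queues}
results = []

def worker(kind):
    event = queues[kind].get()      # take one task from this kind's queue
    results.append((kind, event))   # "process" it

for event in ["ADDED pod/web-1", "ADDED svc/frontend"]:
    kind = "pod" if "pod/" in event else "service"
    queues[kind].put(event)            # enqueue first ...
    pools[kind].submit(worker, kind)   # ... then let the pool pick it up

for pool in pools.values():
    pool.shutdown(wait=True)
print(sorted(results))  # -> [('pod', 'ADDED pod/web-1'), ('service', 'ADDED svc/frontend')]
```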
retry():
During asynchronous processing, if the API call returns failure or times out, the retry interface is called: it first rolls back all changes made before the failure, restarts the processing of the task after the rollback completes, and, after retrying more than a certain number of times, stops retrying and reports the task-failure information;
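A minimal sketch of this rollback-then-retry behaviour, with an invented flaky task that succeeds on the third attempt (the retry limit and exception type are illustrative):

```python
# Sketch of retry() as described above: roll back partial changes, retry up
# to a bounded number of times, then report failure.
def call_with_retry(task, rollback, max_retries=3):
    for attempt in range(max_retries):
        try:
            return task()
        except Exception:
            rollback()  # undo partial changes before retrying
    raise RuntimeError("task failed after %d retries" % max_retries)

state = {"calls": 0, "rollbacks": 0}

def flaky():  # invented task: fails twice, then succeeds
    state["calls"] += 1
    if state["calls"] < 3:
        raise IOError("Neutron API timeout")
    return "ok"

outcome = call_with_retry(flaky, lambda: state.__setitem__("rollbacks", state["rollbacks"] + 1))
print(outcome)  # -> ok
```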
logException():
handles the case in which the task fails;
When a Kubernetes event arrives, the type of the resource object corresponding to the event is first determined and the event is submitted to the corresponding task queue; when an idle thread exists in the thread pool, a task is taken from the queue and dispatched to that idle thread, and finally the processing result is returned;
2. Tenant network isolation
Because the system designed by the invention is based on the OpenStack platform, the Neutron component of OpenStack, together with the underlying Open vSwitch and the bridge functions of the Linux kernel, forms a feature-rich virtual SDN spanning layer-2 and layer-3 networks. After containers are introduced, the construction and user isolation of the container network can therefore rely on the existing capabilities of the OpenStack platform, chiefly the Network Namespace mechanism adopted at the bottom of the Neutron network: a complete set of virtual network devices is provided for each virtual network, and each set behaves as though it exclusively occupied the network resources of the whole physical machine, so that every user can be given an independent virtual network environment;
In the system, the Kubernetes network is handled by Neutron: a tenant corresponds to a network in Neutron, and a namespace under a tenant corresponds to a subnet under that network; based on this rule, network isolation among tenants can be realized;
By default, namespaces under the same tenant are placed in different subnets, so they are isolated at layer 3 and invisible to each other; if pods in different namespaces of the same tenant need to access each other, communication between the namespaces can be enabled simply by configuring routing rules between the subnets.
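The mapping and isolation rules above can be summarized in a small sketch. The naming scheme (`k8s-<tenant>` networks, `<tenant>-<namespace>` subnets) and the helper functions are assumptions used only to illustrate the reachability rules, not the plug-in's real API.

```python
# Tenant -> Neutron network, namespace -> subnet, per the rules above.
def subnet_of(tenant, namespace):
    return ("k8s-%s" % tenant, "%s-%s" % (tenant, namespace))

def reachable(a, b, routes=()):
    """Decide expected connectivity between two (tenant, namespace) pods."""
    net_a, sub_a = subnet_of(*a)
    net_b, sub_b = subnet_of(*b)
    if net_a != net_b:
        return False          # different tenants: networks are isolated
    if sub_a == sub_b:
        return True           # same namespace: same subnet, directly reachable
    # different namespaces of one tenant: reachable only via a configured route
    return (sub_a, sub_b) in routes or (sub_b, sub_a) in routes
```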
Example 2
1. Experimental environment
The container network plug-in targets a mixed working scenario of virtual machines and containers, so a Kubernetes platform and an OpenStack platform are built first. The experiment is carried out on three physical machines, whose roles are divided as follows:
The controller serves as the control node of OpenStack, the compute host serves as the computing node, and the remaining server serves as the master node of the Kubernetes cluster;
1. Providing a container network using a Neutron network plug-in
Kubernetes uses TLS certificates for identity authentication and authorization, so the Kubernetes certificate information must first be configured, with the following parameters:
[kubernetes]
api_root=https://10.10.87.63:6443
ssl_ca_crt_file=/etc/Neutron/ca.crt
ssl_verify_server_crt=True
ssl_client_crt_file=/etc/Neutron/kubelet-client.crt
ssl_client_key_file=/etc/Neutron/kubelet-client.key
After the API server of Kubernetes can be successfully watched, OpenStack authentication must also be passed in order to call the OpenStack API, with the following parameters:
[Neutron]
auth_uri=http://10.10.87.61:5000
auth_url=http://10.10.87.61:35357
project_domain_name=Default
project_name=service
user_domain_name=Default
project_domain_id=default
user_domain_id=default
auth_uri specifies the authentication API of OpenStack and auth_url specifies the call API of Neutron. After successful configuration, the monitoring module of the container network plug-in can be started;
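The two configuration sections above follow standard INI syntax, so they can be read with the standard-library `configparser`; the snippet below is a sketch with an inline sample, not the plug-in's actual configuration loader.

```python
import configparser

# Inline sample mirroring the [kubernetes] and [Neutron] sections above.
SAMPLE = """
[kubernetes]
api_root = https://10.10.87.63:6443
ssl_verify_server_crt = True

[Neutron]
auth_uri = http://10.10.87.61:5000
auth_url = http://10.10.87.61:35357
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

api_root = cfg["kubernetes"]["api_root"]
verify = cfg["kubernetes"].getboolean("ssl_verify_server_crt")
neutron_auth = cfg["Neutron"]["auth_uri"]
```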
Creating a container group in Kubernetes requires first creating a ReplicationController resource, which creates the corresponding number of container groups from the defined replica count. The configuration file for creating the ReplicationController resource is shown in fig. 9;
The results in fig. 10 verify the creation of the container group and the assignment of its IP address;
2. tenant isolation verification
Network connectivity is verified with the PING method: a PING test is initiated from container group A to container group B; if the target is reachable, the network is considered connected; if not, the networks are considered isolated and invisible to each other. The experiment verifies three cases: same tenant on the same host, different tenants on the same host, and same tenant across different hosts, confirming network communication within a tenant and network isolation between different tenants;
Fig. 11 and fig. 12 show the test results under the same tenant and host machine; the results show that container groups under the same tenant and host can communicate with each other;
Fig. 13 and fig. 14 show the test results under different tenants on the same host machine; the results show that container groups under different tenants cannot communicate with each other, and the networks of different tenants are isolated;
Fig. 15 and fig. 16 show the test results for the same tenant on different hosts; the results show that container groups of the same tenant on different hosts can communicate with each other;
3. Load balancing instance verification
After a container group is published externally as a service in Kubernetes, the control module watching the API server receives the service-creation event, reads information such as the endpoints and pods of the service, calls the OpenStack API to create the corresponding resources, and implements the service resource object with a LoadBalancer instance. The flow works as follows:
First, a Deployment object is created and published as a service object; the configuration is shown in fig. 17. The Deployment is responsible for creating a container group with a replica count of 2, whose containers carry the label app=nginx. In the definition of the service, all containers labeled app=nginx are selected, and port 80 of the containers is mapped to port 80 of the service IP, so that traffic accessing port 80 of the service IP is forwarded to a back-end container. The creation result is shown in fig. 18;
A service object with Cluster IP 10.2.247.122 is created; at the same time, the control module of the container network plug-in receives the service-creation event, calls the OpenStack API, and creates a LoadBalancer instance. The creation result is shown in fig. 19;
After the LoadBalancer instance is created, the result shows that its VIP, 10.2.247.122, is consistent with the service IP in Kubernetes, ensuring that the LoadBalancer can provide the service externally;
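The service-to-LoadBalancer mapping verified above can be sketched as a pure transformation: the VIP is taken from the service's cluster IP and each endpoint becomes a back-end member. The function name and the shape of the service/endpoints data are assumptions; the real plug-in creates these resources through the OpenStack API instead.

```python
# Hedged sketch: build a LoadBalancer specification from a Kubernetes
# service and its endpoint IPs, mirroring the flow described above.
def build_lb_spec(service, endpoint_ips):
    return {
        "vip": service["cluster_ip"],        # VIP matches the service IP
        "protocol_port": service["port"],
        "members": [
            {"address": ip, "port": service["target_port"]}
            for ip in endpoint_ips
        ],
    }

# Illustrative usage with the values from the experiment (member IPs invented).
spec = build_lb_spec(
    {"cluster_ip": "10.2.247.122", "port": 80, "target_port": 80},
    ["10.1.0.5", "10.1.0.6"],
)
```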
The results shown in fig. 20 indicate that the virtual service IP exposed by the service can be accessed from the outside.

Claims (1)

1. A Kubernetes network plug-in method converged with OpenStack Neutron, characterized by providing a network solution for a converged platform of container application deployment by combining the Neutron component of OpenStack with the container cluster orchestration management of Kubernetes, comprising:
(1) Network fusion of virtual machines and containers
For containers created on OpenStack, Kubernetes takes over and publishes the container applications; the fusion of the two realizes container scheduling;
(2) Implementing Kubernetes service discovery mechanism
Kubernetes service resources are created based on the OpenStack load balancing instance, providing the service discovery function externally;
(3) Tenant isolation
Pods under the same tenant can access each other, while pods under different tenants are invisible to each other, ensuring the isolation of the platform network;
(4) Optimization of load balancing strategy
Based on the load of the back-end container, a dynamic load balancing strategy is realized;
in the network fusion of the virtual machine and the container, the Neutron container network plug-in realizes the conversion from pod and service resource objects in Kubernetes to OpenStack resources; the plug-in is divided into two modules:
the control module: monitors creation, update, and deletion events of pods and services in Kubernetes, and calls the working module to perform the corresponding operations in Neutron;
the working module: calls the Neutron API to finish the creation of the container network and services;
in the Kubernetes service discovery mechanism, LoadBalancer instances replace iptables as the concrete realization of forwarding rules and traffic, and this approach provides a basis for subsequent function expansion; the LoadBalancer provides a virtual VIP as the entry for externally accessing services; the role of an endpoint is played by a port in Neutron, which is responsible for connecting a container to the network;
in tenant isolation, the Neutron component of the OpenStack platform, based on the bridge functions of Open vSwitch and the Linux kernel, forms a feature-rich virtual SDN spanning layer 2 and layer 3; by means of the Network Namespace mechanism adopted at the bottom of the Neutron network, a complete set of virtual network devices is provided for each virtual network, so that each user is given an independent virtual network environment;
in the optimization of the load balancing strategy, a load balancing strategy that dynamically plans traffic forwarding rules according to the real-time load of the back-end system is adopted.
CN202010561627.XA 2020-06-18 2020-06-18 Kubernetes network plug-in method fused with OpenStack Neutron Active CN113821268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010561627.XA CN113821268B (en) 2020-06-18 2020-06-18 Kubernetes network plug-in method fused with OpenStack Neutron

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010561627.XA CN113821268B (en) 2020-06-18 2020-06-18 Kubernetes network plug-in method fused with OpenStack Neutron

Publications (2)

Publication Number Publication Date
CN113821268A CN113821268A (en) 2021-12-21
CN113821268B true CN113821268B (en) 2024-06-04

Family

ID=78924352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010561627.XA Active CN113821268B (en) 2020-06-18 2020-06-18 Kubernetes network plug-in method fused with OpenStack Neutron

Country Status (1)

Country Link
CN (1) CN113821268B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220114157A1 (en) * 2020-10-12 2022-04-14 Oracle International Corporation Lock management for distributed application pods
CN114553874B (en) * 2022-02-28 2023-04-18 北京理工大学 Hybrid simulation cloud platform and automatic deployment method
CN115250197B (en) * 2022-06-02 2024-04-12 苏州思萃工业互联网技术研究所有限公司 Device for automatically creating container discovery service
CN115334018A (en) * 2022-08-12 2022-11-11 太保科技有限公司 Openstack-based container control method and device for IaaS cloud architecture and container
CN116055082B (en) * 2022-08-17 2023-11-28 广东德尔智慧科技股份有限公司 User management method and system based on OpenStack

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106953848A (en) * 2017-02-28 2017-07-14 浙江工商大学 A kind of software defined network implementation method based on ForCES
CN108418705A (en) * 2018-01-29 2018-08-17 山东汇贸电子口岸有限公司 Virtual machine mixes the virtual network management method and system of nested framework with container
CN108989091A (en) * 2018-06-22 2018-12-11 杭州才云科技有限公司 Based on the tenant network partition method of Kubernetes network, storage medium, electronic equipment
CN109254831A (en) * 2018-09-06 2019-01-22 山东师范大学 Virtual machine network method for managing security based on cloud management platform
CN109962940A (en) * 2017-12-14 2019-07-02 北京云基数技术有限公司 A kind of virtualization example scheduling system and dispatching method based on cloud platform
CN110198231A (en) * 2018-05-08 2019-09-03 腾讯科技(深圳)有限公司 Container network management method and system and middleware for multi-tenant

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10996972B2 (en) * 2018-09-25 2021-05-04 Microsoft Technology Licensing, Llc Multi-tenant support on virtual machines in cloud computing networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106953848A (en) * 2017-02-28 2017-07-14 浙江工商大学 A kind of software defined network implementation method based on ForCES
CN109962940A (en) * 2017-12-14 2019-07-02 北京云基数技术有限公司 A kind of virtualization example scheduling system and dispatching method based on cloud platform
CN108418705A (en) * 2018-01-29 2018-08-17 山东汇贸电子口岸有限公司 Virtual machine mixes the virtual network management method and system of nested framework with container
CN110198231A (en) * 2018-05-08 2019-09-03 腾讯科技(深圳)有限公司 Container network management method and system and middleware for multi-tenant
CN108989091A (en) * 2018-06-22 2018-12-11 杭州才云科技有限公司 Based on the tenant network partition method of Kubernetes network, storage medium, electronic equipment
CN109254831A (en) * 2018-09-06 2019-01-22 山东师范大学 Virtual machine network method for managing security based on cloud management platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on bidirectional deployment technology based on OpenStack and Kubernetes; Du Lei; Computer Knowledge and Technology (Issue 01); full text *

Also Published As

Publication number Publication date
CN113821268A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113821268B (en) Kubernetes network plug-in method fused with OpenStack Neutron
CN111522628B (en) Kubernetes cluster building deployment method, framework and storage medium based on OpenStack
CN107181808B (en) Private cloud system and operation method
JP7085565B2 (en) Intelligent thread management across isolated network stacks
US9307017B2 (en) Member-oriented hybrid cloud operating system architecture and communication method thereof
US7246174B2 (en) Method and system for accessing and managing virtual machines
JP4422606B2 (en) Distributed application server and method for implementing distributed functions
CN112214338A (en) Internet of things cloud platform based on flexible deployment of micro-services
US6934952B2 (en) Method and apparatus for managing multiple instances of server code on a machine
JP2021518018A (en) Function portability for service hubs with function checkpoints
US8082344B2 (en) Transaction manager virtualization
CN111857873A (en) Method for realizing cloud native container network
CN112256399B (en) Docker-based Jupitter Lab multi-user remote development method and system
EP2815346A1 (en) Coordination of processes in cloud computing environments
CN114070822B (en) Kubernetes Overlay IP address management method
US10761869B2 (en) Cloud platform construction method and cloud platform storing image files in storage backend cluster according to image file type
CN106790084A (en) A kind of heterogeneous resource integrated framework and its integrated approach based on ICE middlewares
CN112099913A (en) Method for realizing safety isolation of virtual machine based on OpenStack
WO2024088217A1 (en) Private network access methods and system
He et al. Research on architecture of internet of things platform based on service mesh
JP4976128B2 (en) Transparent session transport between servers
CN111488248A (en) Control method, device and equipment for hosting private cloud system and storage medium
CN114745377A (en) Edge cloud cluster service system and implementation method
JP2007507762A (en) Transparent server-to-server transport of stateless sessions
CN115378993B (en) Method and system for supporting namespace-aware service registration and discovery

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant