CN115086166A - Computing system, container network configuration method, and storage medium - Google Patents


Info

Publication number
CN115086166A
CN115086166A
Authority
CN
China
Prior art keywords
network service
network
container
component
container instance
Prior art date
Legal status
Granted
Application number
CN202210557898.7A
Other languages
Chinese (zh)
Other versions
CN115086166B (en)
Inventor
Lu Jinda (鲁金达)
Hou Zhiyuan (侯志远)
Wu Zongyong (邬宗勇)
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority claimed from CN202210557898.7A
Publication of CN115086166A
Application granted; publication of CN115086166B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting

Abstract

The embodiment of the application provides a computing system, a container network configuration method, and a storage medium. In the embodiment of the application, leveraging the ability of network services in a network service cluster to connect containers to different networks, a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of a computing cluster. The CNI component can connect container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Because connecting a container instance to a network service can occur at any stage of the container instance's life cycle, container network configuration is decoupled from the container life cycle, which helps improve the flexibility of container network configuration.

Description

Computing system, container network configuration method, and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a computing system, a container network configuration method, and a storage medium.
Background
Server virtualization is a key infrastructure-layer technology in cloud computing. By virtualizing a physical server, multiple Virtual Machines (VMs) can be deployed on a single physical machine. To improve server resource utilization and reduce usage cost, a computing cluster combines multiple virtual machines or physical machines into an organic whole for unified management: physical resources are abstracted through virtualization into a resource pool of storage, computing, network, and other resources, which is provided to users on an on-demand basis.
In practical applications, to manage the resources of a computing cluster, a Kubernetes (K8s for short) control plane program is deployed in a central cloud; computing nodes are first uniformly taken over by the K8s control plane, and application containers are then deployed on the resources registered with K8s to provide cloud computing services for users.
Application containers have varying network requirements. Currently, K8s uses the Container Network Interface (CNI) as the container network configuration interface. When the Kubelet component on a working node starts a container, it calls the ADD interface of the CNI to add a network to the container; before the Kubelet component destroys a Pod, it calls the DEL interface of the CNI to remove the container from the network. The ADD and DEL interfaces are invoked only when a Pod is started or destroyed. The current container network configuration approach is therefore tightly coupled to the container life cycle, and container network configuration lacks flexibility: for example, the network properties of a container cannot be changed dynamically while the container is running.
Disclosure of Invention
Aspects of the present disclosure provide a computing system, a container network configuration method, and a storage medium, which decouple the container network configuration process from the container life cycle and help improve the flexibility of container network configuration.
An embodiment of the present application provides a computing system, including: a control node, a computing cluster, and a network service cluster; the computing cluster comprises a plurality of working nodes; the network service cluster is used for deploying network services;
the working node includes: a Container Network Interface (CNI) component corresponding to the network service;
the CNI component is configured to connect the container instances deployed in the working node to the network service;
and the container instance accesses other networks through the network service.
An embodiment of the present application further provides a container network configuration method, including:
determining a container instance deployed by a target working node;
and connecting the container instance to a network service of the network service cluster by using a CNI component, corresponding to the network service cluster, in the target working node, so that the container instance can access other networks through the network service.
Embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the above container network configuration method.
In the embodiment of the application, leveraging the ability of network services in the network service cluster to connect containers to different networks, a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of the computing cluster, and the CNI component can connect container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to a network service can occur at any stage of the container instance's life cycle, which decouples container network configuration from the container life cycle and helps improve the flexibility of container network configuration.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of a container network configuration process in an open-source scheme;
FIG. 2 is a schematic structural diagram of a computing system according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a container network configuration provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a process of allocating a virtual network card to a container according to the embodiment of the present application;
FIG. 5 is an architecture diagram of a computing system provided by an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a network service creation process provided in an embodiment of the present application;
fig. 7 is a flowchart illustrating a network configuration method according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Some open-source CNI plugins (e.g., the Multus-CNI plugin) can invoke multiple other CNI plugins, enabling a container instance (e.g., a Pod) to belong to multiple networks simultaneously. Fig. 1 shows the container network configuration process of such an open-source scheme. As shown in fig. 1, the container network configuration method mainly includes the following steps:
step 1: the node proxy component (e.g., Kubelet in the K8s system) runs the container.
Step 2: the Container runtime (Container runtime) component calls a network plug-in (network plug) to perform Container network configuration (Setup pod).
And step 3: the CNI plug-in authorizes the CNI ADD operation (delete ADD).
And 4, step 4: the master CNI plug-in (master plug-in) performs the CNI ADD operation, configuring the network 1 for the container.
And 5: the Multus plugin authorizes CNI ADD operations (delete ADD).
Step 6: the CNI ADD operation is performed from the plug-in (mini plugin), configuring the virtual network 2 for the container.
The container network configuration scheme shown in fig. 1 enables a container to belong to multiple networks simultaneously. However, the container network configuration process is still strongly coupled to the container start-up process: the network properties of the container are determined during the container configuration (Setup) phase of start-up and cannot be modified while the container is running. Applications using this method also require fine-grained routing control among the multiple networks, which application developers must implement inside the container (such as a Pod) through code or scripts, increasing development complexity. In addition, a CNI implementation needs to consider the idempotency of CNI ADD operations, and if a Pod is to have multiple networks, different CNI implementations need to be invoked. Different CNIs may be backed by completely different network implementations; if a user needs to access different networks under the same network implementation, the above container network configuration cannot achieve it. To obtain different networks under the same network implementation, the specific CNI implementation would have to be modified, which again increases development complexity.
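For concreteness, multi-network setups of this kind are commonly expressed as a CNI plugin chain (conflist), in which the runtime invokes each plugin's ADD in order during Pod Setup. The sketch below is a hypothetical example built from the community bridge and macvlan reference plugins; it is not a configuration taken from the patent:

{
  "cniVersion": "0.4.0",
  "name": "pod-multi-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isDefaultGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    {
      "type": "macvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "10.1.0.0/16" }
    }
  ]
}

Because the whole chain runs inside the Setup Pod phase, every network the Pod will ever belong to is fixed at start-up, which is exactly the coupling the present application seeks to remove.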
In some embodiments of the present application, leveraging the ability of network services in a network service cluster to connect containers to different networks, a Container Network Interface (CNI) component corresponding to the network service cluster is added to the working nodes of a computing cluster, and the CNI component can connect container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to a network service can occur at any stage of the container instance's life cycle, which decouples container network configuration from the container life cycle and helps improve the flexibility of container network configuration.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
It should be noted that: like reference numerals refer to like objects in the following figures and embodiments, and thus, once an object is defined in one figure or embodiment, further discussion thereof is not required in subsequent figures and embodiments.
Fig. 2 is a schematic structural diagram of a computing system according to an embodiment of the present application. As shown in fig. 2, the computing system provided in the embodiment of the present application mainly includes: a master (management and control) node 10, a computing cluster 20, and a network service cluster 30. As shown in fig. 2, the computing cluster 20 is composed of a plurality of working nodes (workers) 201. The network service cluster 30 may likewise be composed of a plurality of working nodes (workers) 301.
In this embodiment, the management and control node 10 is a computer device that manages working nodes, responds to service requests, and provides computing services to users by scheduling the working nodes 201; it generally has the capability of undertaking and guaranteeing services. The management and control node 10 may be a single server device, a cloud server array, or a Virtual Machine (VM), container, or container group running in a cloud server array. The server device may also be another computing device with the corresponding service capability, for example, a terminal device (running a service program) such as a computer. In this embodiment, the management and control node 10 may be deployed in a cloud, for example, in the central cloud of an edge cloud system.
A working node is a computer device that provides computing resources. A working node can be a physical machine or a virtual machine running on a physical machine. In this embodiment, a working node may provide other hardware and software resources in addition to computing resources. The hardware resources may include computing resources such as processors, and storage resources such as memory and disks; the processor may be a CPU, a GPU, an FPGA, or the like. The software resources may include network resources such as bandwidth, network segments, and network card configuration, an operating system, and the like.
In this embodiment, the working node may be deployed in a central cloud, and may also be implemented as an edge cloud node in an edge cloud network. An edge node may be a computer room, a Data Center (DC), or an Internet Data Center (IDC), etc. For an edge cloud network, a working node may include one or more edge nodes. Plural means 2 or more. Each edge node may include a series of edge infrastructures, including but not limited to: a distributed Data Center (DC), a wireless room or cluster, edge devices such as an operator's communication network, core network devices, base stations, edge gateways, home gateways, computing or storage devices, the corresponding network environment, and the like. It is noted that the locations, capabilities, and infrastructure of the various edge nodes may or may not be the same.
In this embodiment, the working nodes 301 of the Network Service cluster 30 are mainly used for deploying Network Services, such as the network services 1-3 in fig. 2. In embodiments of the present application, the network service cluster 30 is used to implement a gateway orchestration service, which provides an infrastructure layer that handles communication between different networks. The network service cluster 30 provides containers connected to it with the ability to access other networks. A network service is a gateway abstraction for accessing a network. A network service can be realized by various gateways; for example, a VPC gateway can connect into another VPC. In some embodiments, the network service cluster 30 may be implemented as a Network Service Mesh (NSM) cluster. A Network Service Mesh (NSM) is a service mesh that provides network services and can serve as an infrastructure layer handling communication between different networks.
In the embodiment of the present application, the management and control nodes 10 corresponding to the network service cluster 30 and the computing cluster 20 may be the same master node or different master nodes. Fig. 2 illustrates only the case where the network service cluster 30 and the computing cluster 20 share the same management and control node 10, but the present invention is not limited thereto.
In the embodiment of the present application, the management and control node 10 and the working nodes 201 and 301 may be connected wirelessly or through wires. Optionally, the management and control node 10 and the working nodes 201 and 301 may be communicatively connected through a mobile network; accordingly, the network format of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), 5G, WiMax, and the like. Optionally, the management and control node 10 and the working nodes 201 and 301 may also be connected by Bluetooth, WiFi, infrared, or the like. Different working nodes 201 in the computing cluster 20 may also communicate over an intranet, and likewise, different working nodes 301 in the network service cluster 30 may communicate over an intranet.
In this embodiment of the present application, in response to a container creation request, the management and control node 10 may schedule, among the working nodes 201, a target working node adapted to the container creation request and bind the container to be created to that working node. When the node agent component 20a in the target working node (for example, the Kubelet component in K8s) observes that the container has been bound, the container runtime creates and starts the container (for example, a Pod), so that the container is deployed in the target working node.
In order to decouple container network configuration from the container life cycle, in the embodiment of the present application, a CNI component 20b corresponding to the network service cluster (NSM-CNI component for short) is set in the working nodes 201 of the computing cluster 20, using the ability of network services to connect containers to different networks. The NSM-CNI component 20b may be implemented as a CNI plug-in; it is the CNI plug-in corresponding to the network service cluster, and may be referred to as the NSM-CNI component for short. The NSM-CNI component 20b is an executable program: in the embodiments of the present application, a CNI interface refers to a call to an executable program, and the executable program is called a CNI plug-in. The NSM-CNI component 20b may be deployed in the working node 201 in Deployment form. As shown in FIG. 4, the NSM-CNI component 20b may be deployed in the working node 201 as part of a CNI chain, which allows the NSM-CNI component 20b to be used in conjunction with other CNI plug-ins. A CNI configuration example of the NSM-CNI component 20b is as follows:
data structure 1: CNI configuration of NSM-CNI component 20b
(The CNI configuration was published as images in the original patent and is not reproduced here.)
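As a stand-in for the missing data structure 1, the following is a hypothetical reconstruction of a CNI-chain configuration containing the NSM-CNI component, based only on the surrounding description; the cluster plug-in type, subnet, and layout are assumptions, not the patent's actual configuration:

{
  "cniVersion": "0.4.0",
  "name": "cluster-network-with-nsm",
  "plugins": [
    {
      "type": "cluster-cni",
      "ipam": { "type": "host-local", "subnet": "172.20.0.0/16" }
    },
    {
      "type": "nsm-cni"
    }
  ]
}

Placed at the end of the chain, the nsm-cni entry is invoked after the primary cluster plug-in (CNI plug-in 20f) has set up the cluster network card, matching the division of labor described below.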
In the embodiment of the present application, when a container network needs to be set up, the NSM-CNI component 20b may connect a container instance (e.g., a Pod) deployed in the working node 201 to a network service in the network service cluster 30. In embodiments of the present application, the container instance may be implemented in the form of a container group, such as a Pod, where a container group may comprise one or more containers. Plural means 2 or more. Because a network service is a gateway abstraction for accessing a network, container instances may access other networks through the network service. Other networks are networks other than the internal network of the computing cluster 20; the purpose is to allow container instances in the computing cluster 20 to access nodes in those other networks.
Specifically, in conjunction with fig. 3, 4, and 5, the NSM-CNI component 20b may allocate a virtual network card (e.g., nsm0 in fig. 3-5) to the container instance while the working node 201 deploys the container instance. The virtual network card is the network card through which the container instance communicates with the network service cluster 30. For embodiments in which the network service cluster is an NSM cluster, the virtual network card may be referred to as an NSM virtual network card. nsm0 in fig. 3-5 is the name of the virtual network card.
Optionally, in conjunction with fig. 3 and fig. 4, during the container instance deployment process of the working node 201, the node agent component (e.g., Kubelet) 20a may call the Container Runtime component 20d, and the container runtime component 20d may call the NSM-CNI component 20b to allocate the NSM virtual network card nsm0 to the container instance (e.g., Pod). Specifically, during the container configuration (Setup Pod) phase, the container runtime component 20d may invoke the NSM-CNI component 20b to execute the CNI ADD interface, assigning the container instance the NSM virtual network card nsm0.
Of course, in some embodiments, to enable communication of container instances within the computing cluster, the container runtime component 20d may also invoke other CNI plug-ins (e.g., the CNI plug-in 20f shown in fig. 3 and 4) to allocate network cards for intra-cluster communication to the container instances. Specifically, in the container configuration (Setup Pod) phase, the container runtime component 20d may invoke the other CNI plug-ins to execute the CNI ADD interface and allocate network cards (such as cluster network cards) for intra-cluster communication to the container instance.
In some embodiments, in order to avoid assigning an NSM virtual network card to container instances that do not require NSM, the NSM-CNI component 20b assigns the NSM virtual network card only to container instances whose container resource (e.g., Pod resource) carries an annotation indicating that the container instance uses NSM. For example, the nsm.closed.com/ansm-cni field in the annotations of the container resource indicates whether the container uses NSM: if its value is set to true, the container instance uses NSM; accordingly, if its value is set to false, the container instance does not use NSM.
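A minimal sketch of a Pod resource opting into NSM through this annotation is shown below; apart from the annotation key quoted above, the names and image are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: user1-app
  annotations:
    nsm.closed.com/ansm-cni: "true"  # true: NSM-CNI assigns an NSM virtual network card; false: it is skipped
spec:
  containers:
    - name: app
      image: registry.example.com/user1/app:latest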
In this embodiment of the present application, as shown in fig. 4, the container network may be opened up through tap interfaces in a network namespace (netns) and Traffic Control (TC) policies. Before the sandbox environment is created, a network namespace is first created; this network namespace has veth-pair network interfaces and tap network interfaces. eth0 and nsm0 are veth-pair type interfaces: one end is placed in the network namespace created by the CNI, and the other end is placed in the host. tap0_kata and tap1_kata are tap type interfaces: one end is placed in the network namespace created by the CNI, and the other end is attached to the hypervisor created by qemu. In the network namespace created by the CNI plug-in 20f, TC policies connect the eth0 network interface with the tap0_kata network interface, which is equivalent to bridging eth0 and tap0_kata. Likewise, in the network namespace created by the NSM-CNI component 20b, TC policies connect the nsm0 network interface with the tap1_kata network interface, which is equivalent to bridging nsm0 and tap1_kata.
In the sandbox environment, only the eth0 and nsm0 network interfaces are present. They are interfaces emulated by qemu over the tap devices, and their MAC addresses, IP addresses, and masks are configured to be the same as those of eth0 and nsm0 in the network namespaces created on the host by the CNI plug-ins (the CNI plug-in 20f and the NSM-CNI component 20b), respectively.
After the NSM-CNI component 20b allocates the NSM virtual network card (i.e., the nsm0 network interface) to the container instance, it may also persist the name of the container instance, the namespace of the container instance, the network namespace of the container instance, and the sandbox environment of the container instance in the storage medium corresponding to the working node 201, so that subsequent network flow table updates can be performed on the container instance.
In the embodiment of the present application, for the network service cluster, the working node 201 is provided with a data plane component 20c corresponding to the network service cluster. For NSM, the data plane component 20c may also be referred to as the NSM data plane component. The data plane component 20c is a logic function component, which can implement the full capability of a gateway using a programming language (e.g., the Go language). The data plane component is responsible for opening an overlay link between the network service cluster 30 and the computing cluster 20; it can load-balance the overlay link, ensure high availability of the link, and provide end-to-end detection capability.
Based on the data plane component 20c described above, the NSM-CNI component 20b may hand the NSM virtual network card (e.g., nsm0 in fig. 3) over to the data plane component 20c. Further, the data plane component 20c may establish a connection between the NSM virtual network card (nsm0) and a network service. In particular, network communication may be established between the data plane component 20c and the data plane component 30a on the working node 301 of the network service cluster 30. The data plane component 30a functions the same as the data plane component 20c; its backend service is the network service, and it can forward access requests from container instances in the working node 201 to the network service, so that other networks are accessed through the network service. This network configuration process moves the container's network configuration capability out of the container's life cycle, so that the container's network namespace (net namespace) is unaware of container network changes.
In the computing system provided by the embodiment of the application, leveraging the ability of network services in the network service cluster to connect containers to different networks, a container network interface component (CNI component) corresponding to the network service cluster is added to the working nodes of the computing cluster, and the CNI component connects container instances deployed in a working node to a network service, so that the container instances can access other networks through the network service. Connecting a container instance to a network service can occur at any stage of the container instance's life cycle, which decouples container network configuration from the container life cycle and helps improve the flexibility of container network configuration. For example, even while a container instance is in the Runtime state, its network properties can be dynamically changed.
In the embodiment of the application, when a container instance is running and needs to access some network, only the binding between the container and the network rule needs to be updated in a Custom Resource (CRD) of the computing system. In the K8s system, CRDs are the mechanism by which K8s improves extensibility and lets developers define custom resources. A CRD resource can be dynamically registered in a K8s cluster; after registration, the client API of kube-apiserver can be called to access the custom resource object. A CRD is only a definition of a resource; a controller is required that listens to the CRD's events and attaches custom processing logic. In the embodiment of the present application, in order to monitor the network demand resources of container instances, a management and control component 10a corresponding to the network service cluster is added to the management and control node 10. The management and control component 10a is a logic function component that mainly monitors the network demand resources of container instances and applies customized network demand processing logic when it observes an update or creation event of a network demand CRD. The operational logic of the management and control component 10a is illustratively described below.
With reference to fig. 2 and fig. 4, when the administrator of the computing system needs to configure the network of a container instance, the administrator may, from the management end 40, register a network service rule (NetworkServiceRole) resource with the API service (API server) component 10b in the form of a CRD. The API service component 10b is the server side for adding, deleting, querying, modifying, and watching (monitoring) resource objects in the K8s system. The data is stored in the etcd database, and the API service component 10b performs a series of functions such as authentication, caching, and API version adaptation and conversion for the data stored in etcd. Other modules in the management and control node 10 may query or modify the data in etcd through the API service component 10b. The etcd database is a distributed, highly available, and consistent Key-Value storage database, mainly used for shared configuration and service discovery.
For the network service rule resource, the resource may include a container selection (podSelector) field, a routing (routes) field, and the like. The container selection field determines the containers to which the network service rule resource applies; the routing field determines the network information that applies to those containers. With reference to fig. 5, an example CRD of a network service rule resource is as follows:
Data structure 2: CRD example of network service rule resource:
(The CRD example was published as images in the original patent and is not reproduced here.)
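Data structure 2 was published as images; the sketch below is a hypothetical reconstruction assembled from the field descriptions that follow, with the API group and all concrete values assumed:

apiVersion: nsm.example.com/v1        # assumed API group
kind: NetworkServiceRole
metadata:
  name: role-for-user1
  generation: 2                       # updated on every change to this resource
  annotations:
    nsm.accepted.com/ready: "false"   # reset to false whenever routes is modified
spec:
  podSelector:                        # the Pod range this NetworkServiceRole applies to
    matchLabels:
      app: user1-app
  routes:
    - target: 192.168.1.1/32          # access destination, written as IP/MASK
      via:
        type: NetworkService          # reach the target through a network service
        value: ansm-vpc-xxxxxxxxx     # name of the network service
    - target: 0.0.0.0/0
      via:
        type: Host                    # reach the target through the host network
status:
  observedGeneration: 2               # resource version described by this status
  totalCount: 3                       # Pods matched by podSelector
  readyCount: 3                       # Pods into which the rule has been flushed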
In the CRD example of the network service rule resource, apiVersion represents the version information of the resource object defined by the CRD, and kind represents that the resource object type defined by the CRD is the network service rule resource. generation indicates the current version of the CRD of the network service rule resource; this field is updated every time the CRD of the network service rule resource is updated. The spec field represents the resource manifest of the resource object defined by the CRD. In the spec field, the container selection field (podSelector) indicates the Pod range to which the NetworkServiceRole applies. In the routing field (routes), each entry determines the network routing inside the Pod. The target field indicates the access destination decided by the network service rule resource. The target field may be expressed as IP/MASK; for example, a host-specific route (mask 32) may be written, or a network address (mask < 32) may be written. Alternatively, the target field may be indicated by a built-in network address, which can be configured by the K8s administrator to suit the application scenario. The via field decides how to reach the access destination. The value of the type field in the via field is one of: NetworkService, i.e., the destination is accessed through a network service; or Host, i.e., the destination is accessed through the host network. For the value field in the via field: when the type is NetworkService, the value is the name of the network service; for other types, the value field may be left empty.
In the CRD example of the network service rule resource, the status field tracks the application state of the routes rules in the Pods. observedGeneration indicates the resource version described by the current status. totalCount represents the total number of Pods matched by podSelector. readyCount represents the number of Pods into which the network service rule has been successfully flushed. The annotation is the interaction protocol between the Kubernetes administrator and the NSM management and control component 10a: each time the routing field (routes) in the network service rule resource is modified, the value of nsm.accepted.com/ready is set to false; after the NSM management and control component 10a finishes processing the network service rule resource, the value of nsm.accepted.com/ready is set to true. A value of false for nsm.accepted.com/ready indicates that the network service rule resource has not yet been refreshed into the Pods selected by podSelector; true indicates that refreshing into all Pods selected by podSelector is complete. As shown in fig. 5, the value of each field in the resource manifest (spec) of the network service rule resource may be specified by a user, specifically a user on the management end 40.
Based on the network service rule resource, the management and control component 10a may monitor network service rule resources. Specifically, the management and control component 10a may invoke the API service (API Server) component 10b to monitor network service rule resources, and, when a new network service rule resource is observed, generate a flow table reflecting the network requirements of the container instance, namely a Network Service flow table (Network Service Flows), according to the new network service rule resource. In the embodiment of the present application, new network service rule resources include network service rule resources newly added to the API service component 10b, and may also include network service rule resources updated in the API service component 10b.
Specifically, the management and control component 10a may determine the target container instances to which the network service rule resource applies according to the container selection rule in the network service rule resource, i.e., according to the value of the podSelector field in the network service rule resource. The number of target container instances may be 1 or more; plural means 2 or more. Multiple container instances may be deployed on the same working node or on different working nodes. The container selection rule in the network service rule resource may be expressed as labels of container groups (e.g., Pods); the labels are used to screen out the container groups that carry them.
Further, the management and control component 10a may obtain the network resources of the target container instance, i.e., the value of the routes field, from the resource manifest of the network service rule resource. The NSM management and control component 10a may set these network resources as the network resources in the flow table of the target container instance, thereby obtaining the flow table (Flow) of the target container instance.
In the embodiment of the present application, the flow table of the container instance may be registered with the API service component 10b as a CRD. The network resources in the flow table of the target container instance reflect the network requirements of the target container instance. A CRD implementation of the flow table is illustrated in connection with fig. 5:
data structure 3: CRD example of flow table:
(The CRD example was published as images in the original patent and is not reproduced here.)
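Data structure 3 was also published as images; the following hypothetical reconstruction is based on the surrounding description, with the API group, label key, and rule-reference field assumed:

apiVersion: nsm.example.com/v1
kind: Flow
metadata:
  name: xxxxxyyyyyzzzzz                # one flow table per target container instance
  labels:
    nsm.example.com/nodename: node-01  # working node where the container instance runs
spec:
  networkServiceRole: role-for-user1   # rule resource this flow table was generated from
  routes:
    - target: 192.168.1.1/32
      via:
        type: NetworkService
        value: ansm-vpc-xxxxxxxxx
status:
  phase: Bound                         # Bound: flushed; Unbound: not flushed; Error: flush failed
  message: flow table flushed into Pod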
In the CRD of the above flow table, the status field describes the application state, in the Pods, of the routes in the flow table. The phase field indicates the application phase of the network service rule resource in the Pod: Bound indicates that the flow table has been flushed, i.e., flushed into the Pod; Unbound indicates that the flow table has not yet been flushed into the Pod; Error indicates a flush failure, i.e., flushing the flow table into the Pod failed. The message field is consistent with the status and phase descriptions.
Based on the CRD examples of the network service rule resource and the flow table, when generating the flow table of the target container instance, the management and control component 10a may determine the target container instance (such as a target Pod) to which the network service rule resource applies according to the container selection rule described in the podSelector field of the network service rule resource, and may set the network resources described by the routes field in the network service rule resource as the value of the routes field in the flow table of the target container instance, i.e., the network resources of the flow table of the target container xxxxyyyyyzzzz. Further, the management and control component 10a may also determine the working node where the target container group is located and write the identifier of that working node into the label field of the flow table of the target container group. In the CRD example of the above flow table, in the label "com/nodename: xxxxxyyyyzzzz", nodename denotes the node name.
Since the working nodes on which container instances are deployed are scheduled by the management and control node 10, the management and control node 10 can determine the correspondence between container instances and working nodes and persist this correspondence in the etcd database. Based on the etcd database, the management and control component 10a may query, with the identifier of the target container instance, the correspondence between container instances and working nodes stored in etcd to obtain the working node where the target container instance is located. Further, the identifier of the working node where the target container instance is located may be written into the label field of the flow table of the target container instance.
Of course, the management and control component 10a may also determine the values of other fields in the flow table. For example, the management and control component 10a may determine, according to the name of the network service rule resource, the network service rule resource on which the flow table is based and write it into the corresponding field; in the above flow table example, the network service rule resource relied on is "role-for-user1", and so on.
After obtaining the flow table of the target container instance, the management and control component 10a may also register the flow table of the target container instance with the API service component 10b in the form of a CRD. The NSM-CNI component 20b can monitor the flow tables registered with the API service component 10b, i.e., "monitoring the CRDs of the API service component 10b" in fig. 2. Specifically, the NSM-CNI component 20b may acquire the working node identifiers carried in the flow tables registered with the API service component 10b, and identify, according to those identifiers, the flow tables of the container instances deployed on the target working node where the NSM-CNI component 20b is located. Thereafter, the NSM-CNI component 20b may flush the flow tables of the container instances deployed on the target working node into those container instances. Optionally, the NSM-CNI component 20b may flush the flow tables into the container instances by means of a Remote Procedure Call (RPC).
In some embodiments, the NSM-CNI component 20b may monitor the flow tables belonging to the target working node and aggregate the flow tables at container instance granularity to obtain one flow table per container instance. Aggregating the flow tables of the same container instance prevents a later-generated flow table from overwriting a previously generated one. After the flow tables of the same container instance have been aggregated, the aggregated flow table may be refreshed to the NSM node plugin 20e, and the NSM node plugin 20e refreshes the data plane component 20c, thereby flushing the container instance's flow table into the container instance.
After flushing the flow table of the container instance deployed in the working node where the NSM-CNI component 20b is located to the container instance deployed by the working node, the NSM-CNI component 20b may further set a flush state field (such as the above-mentioned phase field) of the flow table of the corresponding container instance to a flushed state (Bound).
In the embodiment of the present application, the management and control component 10a may also acquire the state value of the flush state field contained in the flow table of the target container instance, and judge, according to that state value, whether the flow table has been flushed into the target container instance. Alternatively, the management and control component 10a may determine, from the state values of the flush state fields contained in the flow tables of the target container instances, the number of flow tables in the flushed state, i.e., the number of target container instances into which flow tables have been successfully flushed, and write this number into the readyCount field of the network service rule resource. Further, if the number of target container instances into which flow tables have been successfully flushed equals the total number of Pods matched by the podSelector of the network service rule resource, i.e., the value of totalCount in the network service rule resource equals the value of readyCount, it is determined that the flow tables of the target container instances have been fully flushed. In that case, the management and control component 10a may set the annotation field of the network service rule resource (such as nsm.accepted.com/ready) to true. In this way, an administrator of the K8s system can learn the completion status of the routing rule.
In this embodiment of the present application, after the flow table is refreshed into the container instance, the container instance may determine, based on the network resources described by the flow table, the routing information for its access destination, and, when the routing information indicates a network service, access the destination through the target network service described in the flow table. For example, for the flow table CRD example described above, the container instance's access destination is 192.168.1.1/32; the routing information is to access the destination through a network service; and the name of the target network service is ansm-vpc-xxxxxxxxx. That is, the container instance can access the destination corresponding to 192.168.1.1/32 through the target network service ansm-vpc-xxxxxxxxx.
In the embodiment of the present application, the network resources described in the flow table may be 1 or more. Plural means 2 or more. The destinations of the multiple network resources may be the same or different. In the present embodiment, when determining the destination to be accessed, the longest-prefix-match principle may be followed, that is, the most specific destination IP is selected as the destination to be accessed. Accordingly, when the flow table contains multiple network resources, the NSM-CNI component 20b may acquire the destination IPs of the network resources from the flow table; determine, according to the route lengths of the destination IPs, the destination IP with the longest prefix as the destination to be accessed by the container instance; and use the target network resource for reaching that destination as the container instance's routing information. For example, in the above flow table CRD example, the destinations are 192.168.1.1/32, ANYTUNNEL, and 0.0.0.0/0, where ANYTUNNEL corresponds to a fixed network segment and 0.0.0.0/0 matches any IP address. The prefix length ordering is 192.168.1.1/32 > ANYTUNNEL > 0.0.0.0/0; therefore, 192.168.1.1/32 is determined as the destination to be accessed by the container instance, and the target network resource corresponding to 192.168.1.1/32, namely the network service named ansm-vpc-xxxxxxxxx, is determined as the container instance's routing information. Further, an access request from the container instance can be sent to the destination corresponding to 192.168.1.1/32 through the network service named ansm-vpc-xxxxxxxxx.
In this embodiment of the application, as shown in fig. 6, an admission controller (nsm-webhook) 10c corresponding to the network service cluster 30 may also be set in the management and control node 10. The admission controller 10c is a piece of code that intercepts requests arriving at the API service component after the request has been authenticated and authorized, and before the object is persisted. In the embodiment of the present application, for the network service rule resource, the admission controller may detect whether the destinations of multiple network resources in the network service rule resource are identical. Specifically, the admission controller 10c may detect whether the Classless Inter-Domain Routing (CIDR) ranges of multiple network resources in the network service rule resource completely overlap; if they completely overlap, the destinations of the multiple network resources in the network service rule resource are determined to be identical. If the destinations of multiple network resources are identical, the destinations in the subsequently generated flow table are also identical, and the NSM-CNI component 20b cannot determine through which network resource to reach the destination. Thus, when the destinations of multiple network resources are identical, the admission controller 10c can block registration of the network service rule resource with the API service component 10b, preventing subsequent container instance access errors.
In the embodiment of the present application, in order to prevent a container instance from receiving access traffic before its network has been initialized, the admission controller 10c may, for a container instance that accesses destinations through a network service, configure an additional ready-state condition in the corresponding container resource: flow table refresh completion. For example, for container instances using NSM, the admission controller 10c may set readinessGates in the resource manifest (spec) of the container resource (e.g., Pod resource) to specify an additional condition list for Kubelet to evaluate the readiness of the container instance. readinessGates depend on the current state of the Pod's status.conditions field; if the corresponding condition is not found in the Pod's status.conditions field, the status of that condition defaults to "False". A Pod is Ready only when all containers in the Pod are Ready and the additional conditions attached through readinessGates are also Ready. Kubelet's two preconditions for judging a Pod Ready are: (1) all containers in the Pod are Ready (True); (2) all conditionTypes defined in pod.spec.readinessGates are True. Based on this, the admission controller 10c may set the conditionType in readinessGates to flow table refresh completion (NetworkServiceFlowExpected), that is, set the container's additional ready-state condition to flow table refresh completion.
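The readiness gating just described uses the standard Kubernetes readinessGates mechanism; a minimal sketch follows, in which only the conditionType NetworkServiceFlowExpected comes from the description and everything else is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: user1-app
spec:
  readinessGates:
    - conditionType: NetworkServiceFlowExpected  # extra condition Kubelet evaluates for Pod readiness
  containers:
    - name: app
      image: registry.example.com/user1/app:latest
status:
  conditions:
    - type: NetworkServiceFlowExpected           # patched by the management and control component
      status: "True"                             # becomes True once the flow table status is Bound

Until the condition is patched to True, the Pod remains NotReady and receives no Service traffic, which is the behavior described in the following paragraphs.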
Based on the container's additional ready-state condition, the management and control component 10a may, once the flow table has been refreshed into the container instance, set the flow table refresh state corresponding to the additional condition to complete so that the container instance can receive access traffic. For example, the NetworkServiceFlowExpected condition corresponding to the additional ready-state condition may be set to True, so that when all containers in the Pod reach the ready state, the Pod reaches the Ready state and can receive access traffic.
In this embodiment of the application, for a newly scaled-out Pod that does not yet have a matching network service rule resource (NetworkServiceRole), the NSM management and control component 10a may set the flow table refresh state corresponding to the additional ready-state condition to True, i.e., flow table refresh complete. If a newly scaled-out Pod has a matching network service rule resource (NetworkServiceRole), a flow table for the Pod can be generated based on that resource, and when the status of the flow table becomes Bound (flushed), the flow table refresh state corresponding to the additional ready-state condition is set to True.
In other embodiments, a Kubernetes administrator updates a network service rule resource (NetworkServiceRole). The management and control component 10a may update the flow tables of the Pods to which the resource applies according to the updated network service rule resource. Before a flow table is refreshed into a Pod, the management and control component 10a may set the flow table refresh state corresponding to the additional ready-state condition to False; the Pod will then not accept new requests from the Service. When the flow table has been refreshed into the Pod and the status of the flow table becomes Bound (flushed), the flow table refresh state corresponding to the additional ready-state condition is set to True.
The above embodiments mainly illustrate how the management and control component 10a processes the routing rules (i.e., network service rule resources) provided by the K8s administrator and binds them to Pods. The management and control component 10a may contain two controllers: (1) a network service rule resource controller, which is mainly configured to process the routing rules provided by the K8s administrator and bind them to Pods, as shown in the foregoing embodiments; and (2) a network service controller, which is mainly used to monitor network service resources, interact with the network service cluster, and create network services in the network service cluster. The process by which the management and control component 10a creates a network service is exemplarily described below.
As shown in fig. 4, the management and control component 10a may call the API service component 10b to monitor network service resources; when an update to a network service resource is observed, it calls the API service component 10b to acquire the updated network service resource, and creates the network service in the network service cluster according to the updated resource. Specifically, the management and control component 10a may interact with a coordination component 30a in the management and control node corresponding to the network service cluster 30 and create the network service in the network service cluster by invoking the coordination component 30a. Optionally, the management and control component 10a may call the coordination component 30a by RPC to create the network service in the network service cluster according to the updated network service resource. The network service resource may be a CRD resource of the K8s cluster and may be registered with the API service component 10b in CRD form. An example CRD of a network service resource is described below in conjunction with fig. 5.
Data structure 4: CRD of network service resource
(The CRD of the network service resource was published as an image in the original patent and is not reproduced here.)
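Data structure 4 was published as an image; the following is a hypothetical reconstruction from the field descriptions below, with the API group and all concrete identifiers assumed:

apiVersion: nsm.example.com/v1
kind: NetworkService
metadata:
  name: ansm-vpc-xxxxxxxxx
spec:
  replicas: 2                         # at least 2 copies for high availability
  userId: "123456789"                 # identifier of the user using the network service
  userRoleName: role-for-user1        # network service rule resource name for this user
  userSecurityGroupId: sg-xxxxxxxx    # security group ID of the network service user
  userVpcId: vpc-xxxxxxxx             # the VPC this network service opens into
  userVSwitches:                      # vswitch candidates for creating the ENI
    - vsw-aaaaaaaa
    - vsw-bbbbbbbb
status:
  networkId: nw-xxxxxxxx              # globally unique identifier of this network service
  phase: Available                    # Available: the network service is usable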
In the CRD of the network service resource, spec represents the resource manifest of the network service; the spec manifest may be specified by the user on the management end 40. The replicas field indicates the number of copies of the network service; for high availability, the number of copies is greater than or equal to 2 when the CRD of the network service resource is created. userId is the identifier of the user using the network service. userRoleName is the name of the network service rule resource (role) for the user of the network service. userSecurityGroupId is the security group ID of the network service user. userVpcId is the ID of the VPC used by the network service; in the above network service resource example, the network service takes the form of a VPC network, and the ENI created by the network service cluster is placed under this VPC. userVSwitches is a list of virtual switch identifiers (vswitch IDs) under the network service; when the NSM creates the ENI, a vswitch with spare IP addresses is selected from this list to create the ENI, and filling in multiple vswitches can improve the creation success rate. The status field indicates the status information of the network service: networkId is a globally unique identifier of the network service and differs for each creation; phase denotes the state of the network service, and when phase is Available, the network service is available.
In addition to the fields shown in the above network service resource example, in some embodiments the CRD of the network service resource may further include a specHash field. The specHash field holds the hash of all fields of the resource manifest (spec) in the network service resource CRD, computed when the management and control component 10a reconciles. When receiving a network service ADD or UPDATE event, the management and control component 10a may compare the value of the specHash field with the hash of the network service's resource manifest in the network service cluster 30; if the two are the same, the management and control component 10a can skip the reconciliation, which reduces resource consumption.
Based on the CRD of the network service, the management and control component 10a may call the coordination component 30a in an RPC manner to create the network service in the network service cluster according to the updated network service resource. Specifically, the management and control component 10a may schedule, among the working nodes 201, a target working node adapted to the updated network service resource, and bind the updated network service with that working node. When the node agent component (such as the Kubelet component in K8s) in the target working node monitors that the network service is bound, the container runtime component can create and start the container (such as a Pod), thereby implementing the deployment of the network service in the target working node.
In addition to the computing system provided in the foregoing embodiments, an embodiment of the present application further provides a container network configuration method, which is exemplarily described below from the perspective of the computing system.
Fig. 7 is a flowchart illustrating a container network configuration method according to an embodiment of the present application. As shown in fig. 7, the method for configuring a container network mainly includes:
701. Determine the container instance deployed by the target working node.
702. Access the container instance to the network service of the network service cluster by using the NSM-CNI component in the target working node, so that the container instance can access other networks through the network service.
In order to decouple the container network configuration from the container life cycle, in the embodiment of the present application, a CNI component (NSM-CNI for short) corresponding to the network service cluster is set in the working nodes of the computing cluster, by using the capability of the network service cluster to open up different networks for containers. For the description of the NSM-CNI component, reference may be made to the related contents of the above system embodiments, which are not repeated here.
In this embodiment of the present application, any working node in the computing cluster may be taken as the target working node. In step 701, the container instance deployed by the target working node may be determined; and when the container network needs to be set, in step 702, the container instance (such as a Pod) deployed in the target working node can be accessed to the network service in the network service cluster by using the NSM-CNI component in the target working node. Because the network service is a gateway abstraction for accessing a network, the container instance may access other networks through the network service. The other networks refer to networks other than the internal network of the computing cluster, and the purpose is mainly to enable the container instances in the computing cluster to access nodes in those other networks.
Specifically, with reference to fig. 3, fig. 4, and fig. 5, in the process of deploying the container instance on the target working node, the NSM-CNI component may be used to allocate a virtual network card to the container instance in the target working node. The virtual network card refers to the virtual network card through which the container instance communicates with the network service, and may be referred to as the NSM virtual network card. For a specific implementation of allocating the virtual network card to the container instance in the target working node, reference may be made to the relevant contents of the above system embodiment, which are not repeated here.
In some embodiments, in order to avoid allocating a virtual network card for communicating with the network service cluster to container instances that do not need the network service provided by the network service cluster, the NSM virtual network card may be allocated only to container instances whose container resources (e.g., Pod resources) indicate, in their annotations, that the network service provided by the network service cluster is needed.
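The following Go sketch outlines the shape of such an NSM-CNI entry point using the CNI skel package, with the annotation gate applied in the ADD handler. The annotation key and the helper function are hypothetical; a real plugin would resolve the Pod through the API server and perform the actual network card allocation, and would also emit a CNI result as the specification requires.

package main

import (
	"fmt"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
	// args.Args carries K8S_POD_NAME/K8S_POD_NAMESPACE; resolve the Pod and
	// read its annotations (stubbed in this sketch).
	if !podWantsNSM(args.Args) {
		return nil // no NSM virtual network card for this container instance
	}
	// Allocate the NSM virtual network card (e.g. "nsm0") in the container's
	// network namespace and hand it over to the data plane component (stubbed).
	fmt.Printf("allocate nsm0 in %s for container %s\n", args.Netns, args.ContainerID)
	return nil
}

// podWantsNSM is a hypothetical gate: look up the Pod via the API server and
// check an annotation such as nsm.enabled: "true".
func podWantsNSM(cniArgs string) bool { return true }

func cmdCheck(args *skel.CmdArgs) error { return nil }
func cmdDel(args *skel.CmdArgs) error   { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "NSM-CNI sketch")
}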
In the embodiment of the present application, for the network service cluster, a data plane component corresponding to the network service cluster is set in the working node. The NSM-CNI component can be used to take over the NSM virtual network card to the data plane component, and the data plane component is used to establish the connection between the NSM virtual network card (NSM0) and the network service. Specifically, network communication may be established between this data plane component and the data plane component on the working nodes of the network service cluster. For the data plane component in the network service cluster, the back-end service is the network service, so the access requests of container instances in the working nodes of the computing cluster can be forwarded to the network service, and other networks can be accessed through the network service. This network configuration process moves the network configuration capabilities of the container out of the container's life cycle, so that the container's network namespace (net namespace) is unaware of container network changes.
In the computing system provided by the embodiment of the application, by using the capability that the network services in the network service cluster can open up different networks for containers, a container network interface component (the NSM-CNI component) corresponding to the network service cluster is added in the working nodes of the computing cluster, and the NSM-CNI component can access the container instances deployed in the working nodes to the network service, so that the container instances can access other networks through the network service. The process in which the NSM-CNI component accesses the container instances deployed in the working nodes to the network service can occur at any stage of the life cycle of a container instance, which decouples the container network configuration from the container life cycle and is beneficial to improving the flexibility of the container network configuration. For example, even while the container instance is in the running (Runtime) state, the network properties of the container may be dynamically changed.
In the embodiment of the application, when the container instance is running, if some network needs to be accessed, only the binding between the container and the network rule needs to be updated in a custom resource (CRD) of the computing system. In the embodiment of the present application, in order to monitor the network demand resources of container instances, a management and control component corresponding to the network service cluster may be added to the management and control node. The management and control component is a logical function component, which mainly monitors the network demand resources of container instances and executes customized network demand processing logic when an update or creation event of a CRD (Custom Resource Definition) for the network demand resources is monitored. For the management end of the computing system, when an administrator of the computing system needs to configure the network of container instances, a network service rule (network service role) resource may be registered in the API service (API server) component in the form of a CRD. The network service rule resource may include: a container selection (Pod selector) rule field, a routing field, and the like. The container selection rule field is used for determining the containers to which the network service rule resource applies; the routing field is used for determining the network information applicable to those containers. For a CRD example of the network service rule resource, reference may be made to the related contents of the above system embodiments.
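As a rough sketch, the network service rule resource described above could be modeled with the following Go types. The Route type and its field names are assumptions based on the podSelector/routes description, not the patent's exact schema.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Route is one network resource entry: which destination is reachable and
// through which network service.
type Route struct {
	Destination    string `json:"destination"`    // e.g. a destination CIDR such as 10.0.0.0/16
	NetworkService string `json:"networkService"` // the network service used to reach it
}

// NetworkServiceRoleSpec holds the container selection rule and the routes.
type NetworkServiceRoleSpec struct {
	PodSelector metav1.LabelSelector `json:"podSelector"` // container selection rule field
	Routes      []Route              `json:"routes"`      // network information for the selected containers
}

func main() {
	role := NetworkServiceRoleSpec{
		PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		Routes:      []Route{{Destination: "10.0.0.0/16", NetworkService: "ns-example"}},
	}
	fmt.Printf("%+v\n", role)
}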
Based on the network service rule resource, the management and control component may be used to monitor the network service rule resources; and, in the case that a new network service rule resource is monitored, generate a flow table reflecting the network requirements of the container instance, namely a network service flow table (Network Service Flows), according to the new network service rule resource. In the embodiment of the present application, a new network service rule resource includes: a network service rule resource newly added to the API service component that did not previously exist; it may also include: an existing network service rule resource in the API service component that has been updated.
Specifically, the target container instance adapted to the network service rule resource may be determined according to the container selection rule in the network service rule resource, that is, according to the value of the podSelector field in the network service rule resource. The number of target container instances may be one or more, where "more" means two or more. Multiple target container instances may be deployed on the same working node or on different working nodes.
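A minimal Go sketch of this selection step, using the standard apimachinery label-selector helpers to match Pod labels against the podSelector value; the pod list is a stand-in for what would be listed from the API service component.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	podSelector := &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}}
	selector, err := metav1.LabelSelectorAsSelector(podSelector)
	if err != nil {
		panic(err)
	}

	// Pod labels as they would be listed from the API service component.
	pods := map[string]labels.Set{
		"pod-a": {"app": "demo"},  // matches: a target container instance
		"pod-b": {"app": "other"}, // does not match
	}
	for name, ls := range pods {
		if selector.Matches(ls) {
			fmt.Println("target container instance:", name)
		}
	}
}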
Further, the management and control component may be used to obtain the network resource of the target container instance, i.e., the value of the routes field, from the resource list of the network service rule resource. The management and control component may take the network resource of the target container instance as the network resource in the flow table of the target container instance, thereby obtaining the flow table (Flow) of the target container instance.
In the embodiment of the present application, the flow table of the container instance may be registered to the API service component in the form of a CRD. The network resources in the flow table of the target container instance may reflect the network requirements of the target container instance. For a CRD implementation of the flow table, see data structure 3 above, which is not described in detail here.
Based on the CRD examples of the network service rule resource and the flow table, when generating the flow table of the target container instance, the target container instance adapted to the network service rule resource can be determined according to the container selection rule described by the podSelector field of the network service rule resource; the network resource described by the routes field in the network service rule resource can then be determined and taken as the value of the routes field in the flow table of the target container instance. Further, the working node where the target container instance is located can be determined, and the identifier of that working node can be written into the tag field of the flow table of the target container instance.
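The following Go sketch shows this flow table generation at a high level: routes are copied from the rule resource into a per-instance Flow object together with the hosting node's identifier. The Flow type and its fields are assumptions based on the description above.

package main

import "fmt"

// Flow is a simplified flow table entry for one target container instance.
type Flow struct {
	Pod      string   // target container instance
	NodeID   string   // identifier of the working node hosting the instance (tag field)
	Routes   []string // network resources copied from the rule resource's routes field
	RoleName string   // name of the network service rule resource the flow was generated from
	Phase    string   // refresh state; empty until flushed, then "Bound"
}

// buildFlows produces one flow table per target container instance.
func buildFlows(roleName string, routes []string, podToNode map[string]string) []Flow {
	var flows []Flow
	for pod, node := range podToNode {
		flows = append(flows, Flow{Pod: pod, NodeID: node, Routes: routes, RoleName: roleName})
	}
	return flows
}

func main() {
	flows := buildFlows("role-demo", []string{"10.0.0.0/16 via ns-example"},
		map[string]string{"pod-a": "node-1"})
	// Each flow would then be registered as a CRD via the API service component.
	fmt.Printf("%+v\n", flows)
}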
Of course, the management and control component may also be used to determine the values of other fields in the flow table. For example, the name of the network service rule resource from which the flow table is generated may be written into the corresponding field. After the flow table of the target container instance is obtained, the flow table of the target container instance can be registered to the API service component in the form of a CRD by using the management and control component. Correspondingly, the NSM-CNI component can be used to monitor the flow tables registered in the API service component, and to refresh the updated flow table to the container instance of the target working node when it is monitored that the flow table corresponding to a container instance of the target working node is updated.
Specifically, the NSM-CNI component can be used to acquire the working node identifiers contained in the flow tables registered in the API service component, and to identify, according to those identifiers, the flow tables of the container instances deployed in the target working node. Further, when the flow table of a container instance deployed on the target working node is updated, the NSM-CNI component can refresh the updated flow table into that container instance. Optionally, the updated flow table may be refreshed into the container instance in a remote procedure call (RPC) manner.
In some embodiments, the NSM-CNI component monitors the flow tables belonging to its own working node, and aggregates them at the granularity of the container instance to obtain the aggregated flow table of each container instance. Aggregating the flow tables of the same container instance prevents a flow table generated later for the container instance from overwriting a previously generated one. After the flow tables of the same container instance are aggregated, the aggregated flow table can be refreshed into the container instance.
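A minimal Go sketch of this per-instance aggregation, assuming a simplified Flow type; merging routes instead of replacing them is the point being illustrated.

package main

import "fmt"

type Flow struct {
	Pod    string
	Routes []string
}

// aggregate groups flows by container instance and merges their routes.
func aggregate(flows []Flow) map[string][]string {
	agg := make(map[string][]string)
	for _, f := range flows {
		agg[f.Pod] = append(agg[f.Pod], f.Routes...) // merge instead of overwrite
	}
	return agg
}

func main() {
	flows := []Flow{
		{Pod: "pod-a", Routes: []string{"10.0.0.0/16 via ns-1"}},
		{Pod: "pod-a", Routes: []string{"192.168.0.0/24 via ns-2"}},
	}
	fmt.Println(aggregate(flows)) // pod-a keeps both routes after aggregation
}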
After the flow table of a container instance deployed in the working node where the NSM-CNI component is located is refreshed into that container instance, the refresh state field (such as the above-mentioned phase field) of the flow table of the corresponding container instance may also be set to the flushed state (Bound) by using the NSM-CNI component.
In this embodiment of the present application, the NSM management and control component may further be used to determine whether the updated flow table has been flushed by the target container instance based on the state value of the refresh state field of the updated flow table. The target container instance refers to a container instance determined by the container selection rule of the network service rule resource from which the updated flow table is generated. Optionally, the NSM management and control component may acquire the state value of the refresh state field contained in the flow table of the target container instance, and judge accordingly whether that flow table has been flushed by the target container instance. Alternatively, the number of flow tables whose state value is the flushed state, that is, the number of target container instances into which the flow table has been successfully flushed, may be counted according to the state values of the refresh state fields. Further, if the number of target container instances that have been successfully flushed is equal to the total number of Pods matched by the podSelector of the network service rule resource, that is, the value of totalCount in the network service rule resource is equal to the value of readyCount, it is determined that the flow table of the target container instances has been flushed by the target container instances. In that case, the field in the network service rule resource that characterizes the completion state of the routing rule may be set to an identifier characterizing completion. In this way, an administrator of the K8s system can obtain the routing rule completion status.
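A minimal Go sketch of this completion check, assuming the phase field values described above: count the Bound flow tables (readyCount) and compare against the total number of matched Pods (totalCount).

package main

import "fmt"

// routingRuleComplete reports whether every matched Pod has flushed its flow table.
func routingRuleComplete(phases []string, totalCount int) bool {
	readyCount := 0
	for _, phase := range phases {
		if phase == "Bound" { // flushed state
			readyCount++
		}
	}
	return readyCount == totalCount
}

func main() {
	// Refresh state fields of the target instances' flow tables.
	phases := []string{"Bound", "Bound"}
	fmt.Println("routing rule complete:", routingRuleComplete(phases, 2))
}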
In this embodiment of the present application, after the flow table is refreshed into the container instance, the container instance may determine, based on the network resources described by the flow table, the routing information for accessing a destination; and access the destination through the target network service described in the flow table in the case where the routing information is a network service. The network resources described in the flow table may be one or more, where "more" means two or more, and the destinations of the network resources may be the same or different. When determining the destination to be accessed, the longest destination IP matching principle may be followed, that is, the destination IP with the finest granularity (the longest route length) is selected as the destination to be accessed. Correspondingly, in the case that the flow table contains multiple network resources, the NSM-CNI component can be used to acquire the destination IPs of the network resources from the flow table; determine, according to the route lengths of those destination IPs, the destination IP with the longest route length as the destination to be accessed by the container instance; and take the target network resource for accessing that destination as the routing information of the container instance for accessing the destination.

In the embodiment of the present application, an admission controller (NSM-webhook), that is, the NSM admission controller, corresponding to the network service cluster may also be set in the management and control node. For a network service rule resource, the NSM admission controller may be used to detect whether the destinations of multiple network resources in the network service rule resource are identical. In the case where the destinations of the network resources are identical, the NSM admission controller can prevent the network service rule resource from being registered in the API service component, thereby preventing subsequent container instance access errors.
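The longest destination IP match described above can be sketched in Go with the standard net package: among the candidate destination CIDRs, the one that contains the destination IP and has the longest prefix wins. The route representation is simplified for illustration.

package main

import (
	"fmt"
	"net"
)

// pickRoute returns the most specific CIDR containing dst (longest prefix match).
func pickRoute(dst net.IP, cidrs []string) (string, bool) {
	best, bestLen := "", -1
	for _, c := range cidrs {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			continue // skip malformed entries
		}
		if ones, _ := ipnet.Mask.Size(); ipnet.Contains(dst) && ones > bestLen {
			best, bestLen = c, ones
		}
	}
	return best, bestLen >= 0
}

func main() {
	dst := net.ParseIP("10.1.2.3")
	route, ok := pickRoute(dst, []string{"10.0.0.0/8", "10.1.0.0/16", "10.1.2.0/24"})
	fmt.Println(route, ok) // 10.1.2.0/24 wins: longest route length
}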
In this embodiment of the present application, in order to prevent the container instance from receiving access traffic before the network completes initialization, for a container instance using the NSM, the NSM admission controller may be used to configure, in the container resource corresponding to the container instance, an additional condition of the ready state (ready state) of the container instance as flow table refresh completion. Based on this additional ready-state condition, the NSM management and control component may be used to set the flow table refresh state corresponding to the additional condition to refresh completion when the flow table is refreshed into the container instance, so that the container instance can receive access traffic. In other embodiments, the Kubernetes administrator updates a network service rule resource (NetworkServiceRole), and the management and control component updates the flow tables of the Pods adapted to the network service rule resource according to the updated resource. Before the flow table is refreshed to a Pod, the flow table refresh state corresponding to the additional ready-state condition can be set to False by using the management and control component; the Pod will then not accept new requests from the Service. Further, when the flow table is refreshed to the Pod and the state of the flow table becomes Bound (flushed state), the flow table refresh state corresponding to the additional condition may be set to True by using the management and control component.
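A rough Go sketch of this mechanism using Kubernetes Pod readiness gates: the Pod declares a custom condition, and a controller flips it to False before a refresh and to True once the flow table reaches the Bound state. The condition type string is a hypothetical example, not a name fixed by the patent.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

const flowTableReady corev1.PodConditionType = "nsm.example.com/FlowTableReady"

// setFlowTableReady toggles the readiness-gate condition on the Pod status.
func setFlowTableReady(pod *corev1.Pod, flushed bool) {
	status := corev1.ConditionFalse // Pod stops receiving new Service requests
	if flushed {
		status = corev1.ConditionTrue // flow table Bound: Pod may receive access traffic
	}
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == flowTableReady {
			pod.Status.Conditions[i].Status = status
			return
		}
	}
	pod.Status.Conditions = append(pod.Status.Conditions,
		corev1.PodCondition{Type: flowTableReady, Status: status})
}

func main() {
	pod := &corev1.Pod{}
	// The Pod declares the readiness gate so kubelet waits on the custom condition.
	pod.Spec.ReadinessGates = []corev1.PodReadinessGate{{ConditionType: flowTableReady}}
	setFlowTableReady(pod, true)
	fmt.Printf("%+v\n", pod.Status.Conditions)
}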
In the embodiment of the application, the management and control component can further be used to create the network service in the network service cluster. Specifically, the management and control component can be used to monitor the network service resources; in the case that an update of the network service resources is monitored, call the API service component to acquire the updated network service resources; and create the network service in the network service cluster according to the updated network service resources.
It should be noted that, the executing subjects of the steps of the method provided in the foregoing embodiments may be the same device, or different devices may also be used as the executing subjects of the method. For example, the execution subjects of steps 701 and 702 may be device a; for another example, the execution subject of step 701 may be device a, and the execution subject of step 702 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 701, 702, etc., are merely used for distinguishing different operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the container network configuration method described above.
It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor do they limit the types of "first" and "second".
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (14)

1. A computing system, comprising: a management and control node, a computing cluster, and a network service cluster; the computing cluster comprises a plurality of working nodes; the network service cluster is used for deploying network services;
the working node includes: a Container Network Interface (CNI) component corresponding to the network service cluster;
the CNI component is used for accessing the container instances deployed in the working node to the network service;
the container instance accesses other networks through the network service.
2. The system of claim 1, wherein the working node further comprises: a data plane component corresponding to the network service cluster;
the CNI component is further configured to allocate a virtual network card to the container instance and take over the virtual network card to the data plane component in the process of deploying the container instance by the working node;
the data plane component is used for establishing connection between the virtual network card and the network service so as to access the container instance to the network service.
3. The system of claim 1, wherein the management and control node comprises: a management and control component corresponding to the network service cluster;
the management and control component is used for monitoring network service rule resources; the network service rule resource is a custom resource registered in the control node;
generating a flow table reflecting the network requirements of the container instance according to the new network service rule resource under the condition that the new network service rule resource is monitored;
the CNI component is to flush the flow table to the container instance;
the container instance determining routing information of the container instance access destination based on the flow table; and if the routing information is a destination accessed through a network service, accessing the destination through a target network service in the flow table.
4. The system of claim 3, wherein the management and control node further comprises: an admission controller corresponding to the network service cluster; the admission controller is configured to:
for a second container instance using the network service mesh, configure a ready state additional condition of the second container instance in the container resource corresponding to the second container instance as flow table refreshing completion;
the management and control component is further used for: and when the flow table is refreshed to the second container instance, setting the flow table refreshing state corresponding to the ready state additional condition as the flow table refreshing completion so that the second container instance can receive the access flow.
5. The system of any of claims 1-4, wherein the management and control component is further configured to:
under the condition that the network service resources are updated in the API service component, calling the API service component to acquire the updated network service resources;
and creating a network service in the network service cluster according to the updated network service resource.
6. A method for configuring a container network, comprising:
determining a container instance deployed by a target working node;
and accessing the container instance to the network service of the network service cluster by utilizing a CNI component corresponding to the network service cluster in the target working node, so that the container instance can access other networks through the network service.
7. The method of claim 6, wherein the accessing the container instance to a network service of a network service cluster using a CNI component in a target worker node corresponding to the network service cluster comprises:
in the container instance deployment process, distributing a virtual network card for the container instance by using the CNI component;
taking over the virtual network card to a data plane component corresponding to the network service cluster by utilizing the CNI component;
establishing, with the data plane component, a connection between the virtual network card and the network service to access the container instance to the network service.
8. The method of claim 6, further comprising:
monitoring, with the CNI component, a flow table registered in an API service component that reflects network requirements of the container instance;
and under the condition that the flow table corresponding to the container instance is monitored to be updated, refreshing the updated flow table to the container instance.
9. The method of claim 8, further comprising:
monitoring network service rule resources registered in the API service component by using a management and control component corresponding to the network service cluster in a management and control node; the network service rule resource is a self-defined resource;
and under the condition that the existence of new network service rule resources is monitored, generating the updated flow table by utilizing the control component according to the new network service rule resources.
10. The method of claim 9, further comprising:
after flushing the updated flow table to the container instance, setting a flush status field of the updated flow table to a flushed status with the CNI component;
determining whether the updated flow table is flushed by a target container instance based on a state value of a flush state field of the updated flow table; the target container instance refers to a container instance determined by a container selection rule of the network service rule resource for generating the updated flow table;
and under the condition that the updated flow table is refreshed by the target container instance, setting the routing rule completion state field of the new network service rule resource to be an identifier for representing completion by using the management and control component, so that the management end of the new network service rule resource can acquire the routing rule completion state.
11. The method of claim 9, further comprising:
detecting whether destinations of network resources in a network service rule resource are completely the same by using an admission controller corresponding to the network service cluster in the management and control node;
and if the destinations of the network resources in the network service rule resources are identical, preventing the network service rule resources from being registered in the API service component by using the admission controller.
12. The method of claim 11, wherein the container instance uses the network service mesh, the method further comprising:
configuring, by using the admission controller, a ready state additional condition of the container instance in the container resource corresponding to the container instance as flow table refreshing completion;
and under the condition that the updated flow table is refreshed to the container instance, setting a flow table refreshing state corresponding to the ready state additional condition to be flow table refreshing completion by using the management and control component so that the container instance receives access flow.
13. The method according to any one of claims 6-12, further comprising:
under the condition that the network service resources are updated in the API service component, calling the API service component to acquire the updated network service resources by using a control component in a control node;
and creating a network service in the network service cluster according to the updated network service resource.
14. A computer-readable storage medium having stored thereon computer instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the method of any one of claims 6-13.
CN202210557898.7A 2022-05-19 2022-05-19 Computing system, container network configuration method, and storage medium Active CN115086166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557898.7A CN115086166B (en) 2022-05-19 2022-05-19 Computing system, container network configuration method, and storage medium

Publications (2)

Publication Number Publication Date
CN115086166A true CN115086166A (en) 2022-09-20
CN115086166B CN115086166B (en) 2024-03-08

Family

ID=83249063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557898.7A Active CN115086166B (en) 2022-05-19 2022-05-19 Computing system, container network configuration method, and storage medium

Country Status (1)

Country Link
CN (1) CN115086166B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989091A (en) * 2018-06-22 2018-12-11 杭州才云科技有限公司 Based on the tenant network partition method of Kubernetes network, storage medium, electronic equipment
CN109582441A (en) * 2018-11-30 2019-04-05 北京百度网讯科技有限公司 For providing system, the method and apparatus of container service
EP3617880A1 (en) * 2018-08-30 2020-03-04 Juniper Networks, Inc. Multiple networks for virtual execution elements
CN111371627A (en) * 2020-03-24 2020-07-03 广西梯度科技有限公司 Method for setting multiple IP (Internet protocol) in Pod in Kubernetes
CN112187671A (en) * 2020-11-05 2021-01-05 北京金山云网络技术有限公司 Network access method and related equipment thereof
US20210064442A1 (en) * 2019-08-29 2021-03-04 Robin Systems, Inc. Implementing An Application Manifest In A Node-Specific Manner Using An Intent-Based Orchestrator
CN113300985A (en) * 2021-03-30 2021-08-24 阿里巴巴新加坡控股有限公司 Data processing method, device, equipment and storage medium
US20210311762A1 (en) * 2020-04-02 2021-10-07 Vmware, Inc. Guest cluster deployed as virtual extension of management cluster in a virtualized computing system
CN113709810A (en) * 2021-08-30 2021-11-26 河南星环众志信息科技有限公司 Method, device and medium for configuring network service quality
CN113760452A (en) * 2021-08-02 2021-12-07 阿里巴巴新加坡控股有限公司 Container scheduling method, system, equipment and storage medium
CN114172802A (en) * 2021-12-01 2022-03-11 百果园技术(新加坡)有限公司 Container network configuration method and device, computing node, main node and storage medium
WO2022056845A1 (en) * 2020-09-18 2022-03-24 Zte Corporation A method of container cluster management and system thereof
CN114237812A (en) * 2021-11-10 2022-03-25 上海浦东发展银行股份有限公司 Container network management system
US20220279421A1 (en) * 2021-03-01 2022-09-01 Juniper Networks, Inc. Containerized router with a generic data plane interface

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116389252A (en) * 2023-03-30 2023-07-04 安超云软件有限公司 Method, device, system, electronic equipment and storage medium for updating container network
CN116389252B (en) * 2023-03-30 2024-01-02 安超云软件有限公司 Method, device, system, electronic equipment and storage medium for updating container network
CN116319322A (en) * 2023-05-16 2023-06-23 北京国电通网络技术有限公司 Power equipment node communication connection method, device, equipment and computer medium

Also Published As

Publication number Publication date
CN115086166B (en) 2024-03-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant