CN116633775A - Container communication method and system of multi-container network interface - Google Patents
Container communication method and system of multi-container network interface
- Publication number
- CN116633775A (Application CN202310904443.2A)
- Authority
- CN
- China
- Prior art keywords
- service
- pod
- container
- configuration information
- agent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The application provides a container communication method and system for a multi-container network interface. The method includes: creating a service agent in each Pod on a working node that belongs to a namespace carrying a preconfigured label, monitoring changes of service resources in the working node, acquiring the configuration information corresponding to a service resource, and judging whether the configuration information contains a custom forwarding annotation; if yes, selecting a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request; if not, determining, from the plurality of container network interfaces corresponding to the Pod hosting the service agent, a target container network interface that matches the backend network object contained in the configuration information, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface. The application thereby realizes access through the container network interface that matches the backend network object of the service resource and overcomes the defect that some CNI plug-ins lack the capability for Pods to access service resources.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and a system for container communications with a multi-container network interface.
Background
The container network interface (Container Network Interface, CNI) is an interface standard used by Kubernetes for container network configuration, and a CNI plug-in is a network configuration tool that implements the CNI standard. Service resources (Services) are the core resources with which a Kubernetes cluster implements microservices; they abstract a stable network access address for a group of Pods that provide a service. In a Kubernetes cluster, multiple CNI plug-ins are often used to configure multiple container network interfaces for a Pod, and because a Pod's IP address changes whenever the Pod is rebuilt, service resources are typically used to provide a unified access entry for Pods. In a multi-CNI-plug-in scenario, during the creation of Pods and service resources the backend of a service resource is generally matched only with the Pod's default network, and the resulting mismatch between network models can easily leave a Pod unable to access the service resource.
Although the prior art provides solutions for specifying the backend network of a service resource, they have limitations: the traffic with which a Pod accesses a service resource cannot be finely managed, and a Pod may still be unable to access the service resource. For example, two Pods and one service resource are deployed in a Kubernetes cluster, two container network interfaces (an eth0 interface and an eth1 interface) are configured for each Pod through the Calico plug-in and the IPvlan plug-in respectively, and the service resource matches one Pod and specifies its backend network (e.g., the eth0 interface). When the other Pod accesses the service resource, if the requested traffic is sent from the eth1 interface by default while the service resource's backend network is designated as the eth0 interface, the network models of the sender and the receiver are inconsistent and the Pod cannot access the service resource. In addition, if the IPvlan plug-in itself does not provide the capability for Pods to access service resources, the Pod may be unable to access the service resource even when the network models of the sender and the receiver are consistent.
In view of this, the existing methods for implementing Pod access to service resources under multiple container network interfaces need to be improved to solve the above problems.
Disclosure of Invention
The application aims to solve the problems in the prior art that the backend of a service resource is generally matched only with the Pod's default network and that the traffic with which a Pod accesses a service resource cannot be finely managed, both of which affect the Pod's ability to access the service resource.
To achieve the above object, the present application provides a container communication method of a multi-container network interface, including:
an admission control module deployed in a cluster performs a listening operation on an API server deployed in a control node contained in the cluster, creates a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitors changes of service resources in the working node, acquires the configuration information corresponding to a service resource, and judges whether the configuration information contains a custom forwarding annotation;
if yes, selecting a target Pod from a plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request;
if not, determining a target container network interface matching the backend network object contained in the configuration information from a plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface.
As a further improvement of the present application, creating the service agent in the Pod includes:
and the access control module executes monitoring operation on the API server, intercepts the Pod processing request when the Pod processing request is monitored, acquires a name space to which the Pod corresponding to the Pod processing request belongs, executes modification operation on the Pod processing request when the name space to which the Pod belongs is marked as a pre-configured label, and creates a service agent in the Pod based on the modified Pod processing request by the API server.
As a further improvement of the application, the service agent is deployed in a sidecar mode within the Pod;
the performing a modification operation on the Pod processing request includes:
and adding a service agent container into a container list contained in the specification corresponding to the Pod to form a modified Pod processing request.
As a further improvement of the application, monitoring the change of service resources in the working node is implemented by a service management module deployed in the cluster performing a listening operation on the API server, and the service management module acquires the configuration information corresponding to the service resource.
As a further improvement of the present application, there is also included:
and the service management module executes monitoring operation on the API server, and determines a back-end network object corresponding to the service resource based on whether the service resource processing request carries the designated network annotation when the service resource processing request is monitored.
As a further improvement of the present application, determining the backend network object corresponding to the service resource based on whether the service resource processing request carries a specified network annotation includes:
judging whether the service resource processing request carries a specified network annotation;
if so, updating the backend network object corresponding to the service resource to the specified network object corresponding to the specified network annotation;
if not, refusing to update the backend network object corresponding to the service resource, so that the backend network object corresponding to the service resource is determined by the default network object.
As a further improvement of the present application, after the service management module obtains the configuration information corresponding to the service resource, the method further includes:
the service management module issues the configuration information to a service interface module for judging whether the configuration information contains a custom forwarding annotation and is deployed in a cluster, and the service interface module generates a flow agent starting instruction or a static routing updating instruction according to a judging result and issues the flow agent starting instruction or the static routing updating instruction to a service agent deployed in a Pod.
As a further improvement of the present application,
if the configuration information contains the custom forwarding annotation, the service interface module issues a start-traffic-proxy instruction to the service agent, and the service agent starts a load balancing service to select a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request;
if the configuration information does not contain the custom forwarding annotation, the service interface module issues an update-static-route instruction to the service agent, and the service agent determines a target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface.
As a further improvement of the application, a remote procedure call connection is established between the service interface module and the service agent through a remote procedure call protocol, and the service agent receives the start-traffic-proxy instruction and the update-static-route instruction issued by the service interface module over the remote procedure call connection.
Based on the same inventive idea, the present application also provides a container communication system of a multi-container network interface, comprising:
the acquisition module is used for an admission control module deployed in the cluster to perform a listening operation on an API server deployed in a control node contained in the cluster, create a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitor changes of service resources in the working node, acquire the configuration information corresponding to a service resource, and judge whether the configuration information contains a custom forwarding annotation;
the execution module is used for, if yes, selecting a target Pod from a plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request; if not, the execution module determines a target container network interface matching the backend network object contained in the configuration information from a plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface.
Compared with the prior art, the application has the beneficial effects that:
an admission control module deployed in a cluster performs a listening operation on an API server deployed in a control node contained in the cluster, creates a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitors changes of service resources in the working node, acquires the configuration information corresponding to a service resource, and judges whether the configuration information contains a custom forwarding annotation; if so, a target Pod is selected from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request, which overcomes the defect that some CNI plug-ins in the prior art lack the capability for Pods to access service resources; if not, a target container network interface matching the backend network object contained in the configuration information is determined from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface, which realizes access through the container network interface matching the backend network object of the service resource and finally achieves fine-grained management of the traffic with which Pods access service resources.
Drawings
FIG. 1 is a flow chart of a method of container communication for a multi-container network interface according to the present application;
FIG. 2 is a topology diagram of a computer system;
FIG. 3 is a topology diagram of determining the target container network interface through which a Pod accesses a service resource;
FIG. 4 is a topology diagram of a container communication system of a multi-container network interface according to the present application.
Detailed Description
The present application will be described in detail below with reference to the embodiments shown in the drawings, but it should be understood that the application is not limited to these embodiments, and any functional, methodological, or structural equivalent or substitution made by those skilled in the art according to these embodiments falls within the scope of protection of the present application.
It should be noted that the term "service resource" refers to a "Service", which abstracts a stable network access address for the Pods that provide a service; the term "kubectl" refers to the command-line tool used to communicate with the API server, which organizes the commands entered by a user on the command line and converts them into information recognizable by the API server; the term "API server" refers to the API service responsible for communication among the functional modules of the Kubernetes cluster.
Referring to fig. 1 to fig. 3, the present application discloses a specific implementation of a container communication method for a multi-container network interface (hereinafter the "method"). Specifically, an admission control module deployed in a cluster performs a listening operation on an API server deployed in a control node contained in the cluster, creates a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitors changes of service resources in the working node, acquires the configuration information corresponding to a service resource, and determines whether the configuration information contains a custom forwarding annotation; if yes, a target Pod is selected from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request; if not, a target container network interface matching the backend network object contained in the configuration information is determined from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface. This solves the problems in the prior art that the backend of a service resource is usually matched only with the Pod's default network and that the traffic with which a Pod accesses a service resource cannot be finely managed, both of which affect the Pod's ability to access the service resource. The method runs in a cluster, illustrated by the Kubernetes cluster 10 shown in fig. 2, with the Kubernetes cluster 10 deployed on a computer system 100; the applicant notes that the computer system 100 may also be understood as a service or system formed, through virtualization technology, by a hyper-converged all-in-one appliance, a computer, a server, a data center, or a portable terminal.
Referring to fig. 2, a Kubernetes cluster 10 is deployed within the computer system 100, and a kubectl 11, a control node 12, a service controller 13, and at least one working node are deployed within the Kubernetes cluster 10; for exemplary purposes only the working node 14 is shown in fig. 2. An API server 121 is deployed within the control node 12 to interact with the kubectl 11, with an admission control module 131 deployed within the service controller 13, and with a service management module 132. The service controller 13 houses the admission control module 131, the service management module 132, and a service interface module 133. The admission control module 131 is configured to monitor Pod processing requests at the API server 121, obtain the namespace to which the Pod corresponding to a Pod processing request belongs, and create a service agent for Pods whose namespace carries the preconfigured label. The service management module 132 is configured to monitor service resource processing requests at the API server 121, determine the backend network object corresponding to a service resource based on whether the service resource processing request carries a specified network annotation, update the backend network object corresponding to the service resource, acquire the configuration information corresponding to the service resource, and issue it to the service interface module 133. The service interface module 133 is configured to judge whether the configuration information issued by the service management module 132 contains the custom forwarding annotation, generate a start-traffic-proxy instruction or an update-static-route instruction accordingly, and issue the instruction to the service agents deployed in the containers (i.e., Pods). At least one container (i.e., Pod) and at least one service resource 15 are deployed within the working node 14; fig. 2 shows containers 1 (Pod 1), 2 (Pod 2), 3 (Pod 3), and 4 (Pod 4) within the working node 14, with the service agent 1411 and the application 1412 deployed within container 1, the service agent 1421 and the application 1422 deployed within container 2, and the application 1432 deployed within container 3; for illustration, containers 1 through 4 form a group of containers for which the service resource 15 abstracts a stable network access address.
Specifically, referring to fig. 1, a container communication method of a multi-container network interface includes the following steps S1 to S4.
Step S1: an admission control module deployed in the cluster performs a listening operation on an API server deployed in a control node contained in the cluster, creates a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, and monitors changes of service resources in the working node to acquire the configuration information corresponding to the service resource.
Illustratively, in step S1, creating a service agent in the Pod corresponding to the namespace carrying the preconfigured label on the working node contained in the cluster is implemented by the admission control module 131 deployed in the cluster (i.e., Kubernetes cluster 10) performing a listening operation on the API server 121 deployed in the control node 12 contained in the cluster, and includes: the admission control module 131 performs a listening operation on the API server 121, intercepts a Pod processing request when the Pod processing request is monitored, acquires the namespace to which the Pod corresponding to the Pod processing request belongs, performs a modification operation on the Pod processing request when the namespace to which the Pod belongs is marked with the preconfigured label, and the API server 121 creates the service agent in the Pod based on the modified Pod processing request. The service agent is deployed in the Pod in sidecar mode, and the modification operation on the Pod processing request includes: adding a service agent container to the container list contained in the specification corresponding to the Pod, so that the modified Pod processing request is formed based on the container list obtained after the service agent container is added. In step S1, monitoring changes of service resources in the working node is implemented by the service management module 132 deployed in the cluster (i.e., Kubernetes cluster 10) performing a listening operation on the API server 121; the service management module 132 acquires the configuration information corresponding to the service resource and issues it to the service interface module 133. Meanwhile, the service management module 132 performs a listening operation on the API server 121 and, when a service resource processing request is monitored, determines the backend network object corresponding to the service resource based on whether the service resource processing request carries a specified network annotation, which includes: judging whether the service resource processing request carries a specified network annotation; if so, updating the backend network object corresponding to the service resource to the specified network object corresponding to the specified network annotation; if not, refusing to update the backend network object corresponding to the service resource, so that the backend network object corresponding to the service resource is determined by the default network object.
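As an illustration of the modification operation described above, the following Go sketch builds the JSON patch that a mutating admission webhook could return to append a service agent sidecar container to the Pod's container list. It is a minimal sketch under stated assumptions: the container name, image name, and helper function name are placeholders introduced here for illustration and are not taken from the application.

```go
package webhook

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
)

// buildSidecarPatch returns a JSON patch (RFC 6902) that appends a service
// agent container to the Pod's spec.containers list. The container name and
// image name are illustrative placeholders.
func buildSidecarPatch() ([]byte, error) {
	sidecar := corev1.Container{
		Name:  "service-agent",                        // hypothetical sidecar name
		Image: "registry.local/service-agent:latest",  // hypothetical image
	}
	patch := []map[string]interface{}{
		{
			"op":    "add",
			"path":  "/spec/containers/-", // append to the Pod's container list
			"value": sidecar,
		},
	}
	return json.Marshal(patch)
}
```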
It should be noted that the "sidecar mode" is a design pattern for distributed architectures in which control logic is separated from application logic by attaching a "sidecar" to the application service. The specification corresponding to a Pod defines the overall attributes of the Pod, such as required resources, volume mounts, and network settings, and the container list is the part of that specification defining the detailed configuration of each container in the Pod. The "configuration information" includes one or any combination of the service resource name, selectors, type, ports, session affinity, environment variables, and the specified network object, and is used to define and manage the access mode and network details of a group of related Pods, providing stable network entry, load balancing, and service discovery. The foregoing selectors include, but are not limited to, label selectors, field selectors, and node selectors.
Step S2: judging whether the configuration information contains the custom forwarding annotation; if yes, executing step S3; if not, executing step S4.
Illustratively, after the service management module 132 issues the configuration information to the service interface module 133, the service interface module 133 judges whether the configuration information contains the custom forwarding annotation, generates a start-traffic-proxy instruction or an update-static-route instruction according to the result, and issues the instruction to the service agent deployed in the Pod. Specifically, if the configuration information contains the custom forwarding annotation, the service interface module 133 issues a start-traffic-proxy instruction to the service agent, and the service agent starts the load balancing service; if the configuration information does not contain the custom forwarding annotation, the service interface module 133 issues an update-static-route instruction to the service agent. The service agent receives the start-traffic-proxy instruction and the update-static-route instruction issued by the service interface module 133 over the remote procedure call connection.
Step S3: selecting a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request.
Step S4: determining a target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface.
Specifically, referring to fig. 2, a user sends a Pod processing request or a service resource processing request to the API server 121 deployed at the control node 12 through the kubectl 11. Pod processing requests include Pod creation requests and Pod update requests, and service resource processing requests include service resource creation requests and service resource update requests. The admission control module 131 performs a listening operation on the API server 121, intercepts a Pod processing request when one is monitored (i.e., when a Pod creation request or a Pod update request is monitored), acquires the namespace to which the Pod corresponding to the Pod processing request belongs, performs a modification operation on the Pod processing request when that namespace is marked with the preconfigured label, and the API server 121 creates the service agent in the Pod based on the modified Pod processing request.
Taking Pod1 and Pod3 shown in fig. 2 as examples, with the Pod processing requests being a Pod1 creation request and a Pod3 creation request respectively: the admission control module 131 listens to the API server 121 in real time, the user sends the Pod1 creation request and the Pod3 creation request to the API server 121 through the kubectl 11, and the admission control module 131 intercepts the two Pod creation requests and checks whether the namespace to which Pod1 belongs and the namespace to which Pod3 belongs are marked with the preconfigured label (i.e., whether the service agent container needs to be injected). Because the namespace to which Pod1 belongs is marked with the preconfigured label, the admission control module 131 modifies the Pod specification corresponding to Pod1 (hereinafter the "Pod1 specification") by adding a service agent container to the container list contained in the Pod1 specification and adjusting the configuration of Pod1 accordingly, so that the modified Pod1 specification forms a modified Pod1 creation request; the modified Pod1 creation request is returned to the API server 121, which creates Pod1 in the working node 14 based on the modified request and creates the service agent 1411 and the application 1412 in Pod1. Because the namespace to which Pod3 belongs is not marked with the preconfigured label, the admission control module 131 does not modify the Pod3 creation request, and the API server 121 creates Pod3 in the working node 14 directly based on the original Pod3 creation request and creates the application 1432 in Pod3.
It should be noted that the admission control module 131 performs the listening operation on the API server 121 and intercepts Pod processing requests based on the native mechanism of the Kubernetes cluster 10. Specifically, the admission control module 131 registers a hook script for the corresponding service and designates a hook URL through which requests from the API server are received; when an event (i.e., a Pod processing request) occurs, a message is sent to the configured hook URL. The hook URL is the service address on which the admission control module 131 listens; it is an API interface that receives the information sent by the API server 121, and the format of that information is the native configuration information of a Pod in the Kubernetes cluster 10, including one or any combination of the Pod name, labels, and container list. Illustratively, when an administrator configures the admission control module 131 in the Kubernetes cluster 10, the administrator designates the hook URL (i.e., URL address) of the admission control module 131. When a user initiates a Pod processing request, the API server 121 encapsulates the Pod processing request into an HTTP POST request and sends it to the hook URL; the admission control module 131, listening on the hook URL, receives the HTTP POST request sent by the API server 121. After receiving it, the admission control module 131 applies predefined logic, rules, or policies to process the request (including verifying, parsing, and modifying it), generates an HTTP response according to the processing result (including whether the request is allowed or rejected, plus any additional or error information), and sends the HTTP response back to the API server 121. After the API server 121 receives the HTTP response, it acts on the result: if the request is allowed, the processing operation corresponding to the Pod processing request continues; if the request is rejected, the corresponding error information is returned.
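A minimal Go sketch of such a hook endpoint is shown below, assuming the AdmissionReview types from k8s.io/api/admission/v1. It reuses the hypothetical buildSidecarPatch helper sketched earlier and leaves the namespace check as a stub, so it only illustrates the request/response flow rather than the application's actual implementation.

```go
package webhook

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

// namespaceNeedsSidecar is a stub; a real implementation would look up the
// namespace's labels through the API server and check for the preconfigured label.
func namespaceNeedsSidecar(namespace string) bool { return false }

// ServeMutate handles the HTTP POST sent by the API server to the hook URL.
func ServeMutate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed admission review", http.StatusBadRequest)
		return
	}

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	if namespaceNeedsSidecar(review.Request.Namespace) {
		if patch, err := buildSidecarPatch(); err == nil {
			pt := admissionv1.PatchTypeJSONPatch
			resp.Patch = patch // JSON patch appending the sidecar container
			resp.PatchType = &pt
		}
	}

	review.Response = resp
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(&review)
}
```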
The service management module 132 listens for service resource changes within the working node 14; that is, the service management module 132 performs a listening operation on the API server 121 and, when a service resource processing request is monitored (i.e., when a service resource creation request or a service resource update request is monitored), determines the backend network object corresponding to the service resource based on whether the service resource processing request carries a specified network annotation. Specifically: it judges whether the service resource processing request carries a specified network annotation; if so, it updates the backend network object corresponding to the service resource to the specified network object corresponding to the specified network annotation; if not, it refuses to update the backend network object corresponding to the service resource, so that the backend network object corresponding to the service resource is determined by the default network object. Meanwhile, when the service management module 132 monitors a service resource processing request, it acquires the configuration information corresponding to the service resource and issues it to the service interface module 133 deployed in the cluster (i.e., Kubernetes cluster 10), which judges whether the configuration information contains the custom forwarding annotation.
Taking the service resource 15 shown in fig. 2 as an example, with the service resource processing request being an update request for the service resource 15: the service management module 132 monitors resource changes in the working node 14 in real time, i.e., performs a listening operation on the API server 121. The user sends the update request for the service resource 15 to the API server 121 through the kubectl 11, and the service management module 132 intercepts the update request and judges whether it carries a specified network annotation. If so, it updates the backend network object corresponding to the service resource 15 to the specified network object corresponding to the specified network annotation, so as to match the IPs of the container network interfaces of the containers (i.e., Pods) according to the specified network annotation of the service resource 15, and calls the API server 121 to update the backend network object (i.e., the endpoint object) of the service resource 15; if not, it refuses to update the backend network object corresponding to the service resource 15, so that the backend network object is determined by the default network object. Meanwhile, when the service management module 132 monitors the update request for the service resource 15, it acquires the configuration information corresponding to the service resource 15 and issues it to the service interface module 133, deployed in the Kubernetes cluster 10, which judges whether the configuration information contains the custom forwarding annotation.
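The sketch below shows one way such a controller could watch Service objects with a client-go informer and branch on a specified-network annotation. It is a sketch under assumptions not drawn from the application: the annotation key and the endpoint-update helper are hypothetical placeholders, and the actual matching of Pod IPs to interfaces is elided.

```go
package controller

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// Hypothetical annotation key used to specify the backend network of a Service.
const specifiedNetworkAnnotation = "example.io/backend-network"

// updateBackendEndpoints is a stub; a real controller would match Pod IPs on
// the specified interface and update the Service's Endpoints object.
func updateBackendEndpoints(svc *corev1.Service, network string) { /* elided */ }

// WatchServices registers handlers for Service create/update events.
func WatchServices(clientset kubernetes.Interface, stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	informer := factory.Core().V1().Services().Informer()

	onChange := func(obj interface{}) {
		svc, ok := obj.(*corev1.Service)
		if !ok {
			return
		}
		if network, found := svc.Annotations[specifiedNetworkAnnotation]; found {
			// Specified network annotation present: retarget the backend.
			updateBackendEndpoints(svc, network)
		} else {
			// No annotation: the backend stays on the default network object.
			log.Printf("service %s/%s keeps default backend network", svc.Namespace, svc.Name)
		}
	}

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    onChange,
		UpdateFunc: func(_, newObj interface{}) { onChange(newObj) },
	})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```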
The service interface module 133 judges whether the configuration information issued by the service management module 132 contains the custom forwarding annotation, generates a start-traffic-proxy instruction or an update-static-route instruction according to the result, and issues the instruction to the service agent deployed in the Pod. Specifically: if the configuration information contains the custom forwarding annotation, the service interface module 133 issues a start-traffic-proxy instruction to the service agent, and the service agent starts the load balancing service to select a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request; if the configuration information does not contain the custom forwarding annotation, the service interface module 133 issues an update-static-route instruction to the service agent, and the service agent determines the target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface. The service interface module 133 establishes a remote procedure call connection with the service agent through a remote procedure call protocol, and the service agent receives the start-traffic-proxy instruction and the update-static-route instruction issued by the service interface module over that connection.
Taking container 2 (Pod 2), the service resource 15, and the service agent 1421 deployed in container 2 as examples: the service interface module 133 issues service information and traffic rules to the data plane by providing a set of standard data APIs, listens on its internal port (for example, port 50051), and provides a bidirectional streaming service using an RPC protocol (i.e., a remote procedure call protocol). The service interface module 133 receives and maintains the configuration information corresponding to the service resource 15 issued by the service management module 132 and the connection pool of the data-plane service agents, such as the service agent 1421, that connect to it. When the service interface module 133 receives the configuration information corresponding to the service resource 15 issued by the service management module 132, it judges whether the configuration information contains the custom forwarding annotation; if yes (i.e., the CNI plug-in itself does not provide the capability for Pods to access service resources), it issues a start-traffic-proxy instruction to the service agent 1421 deployed in container 2; if not (i.e., the CNI plug-in itself provides the capability for Pods to access service resources), it issues an update-static-route instruction to the service agent 1421 deployed in container 2.
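The sketch below is a deliberately simplified stand-in for that control-plane endpoint: instead of the gRPC bidirectional stream described above, it pushes JSON-encoded instructions to connected agents over plain TCP on port 50051. Only the push direction and the two instruction kinds follow the description; the instruction names, field layout, and wire format are assumptions introduced here.

```go
package controlplane

import (
	"encoding/json"
	"log"
	"net"
	"sync"
)

// Instruction is a hypothetical wire format for the two commands described:
// "start-traffic-proxy" and "update-static-route".
type Instruction struct {
	Kind           string   `json:"kind"`                     // "start-traffic-proxy" or "update-static-route"
	BackendNetwork string   `json:"backendNetwork,omitempty"` // e.g. "eth1" for static-route updates
	Backends       []string `json:"backends,omitempty"`       // Pod "ip:port" list for the traffic proxy
}

// Server keeps the pool of connected data-plane agents.
type Server struct {
	mu     sync.Mutex
	agents []net.Conn
}

// Listen accepts agent connections on the given address (e.g. ":50051").
func (s *Server) Listen(addr string) error {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		s.mu.Lock()
		s.agents = append(s.agents, conn) // maintain the client connection pool
		s.mu.Unlock()
	}
}

// Push sends one instruction to every connected agent.
func (s *Server) Push(inst Instruction) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, conn := range s.agents {
		if err := json.NewEncoder(conn).Encode(inst); err != nil {
			log.Printf("push to %s failed: %v", conn.RemoteAddr(), err)
		}
	}
}
```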
The service agent is the traffic proxy service of the data plane, deployed in the Pod in sidecar mode. After the service agent starts, it establishes an RPC connection (i.e., a remote procedure call connection) with the service interface module 133 of the control plane through a port of the management network VIP, and continuously listens for the data issued by the interface of the service interface module 133 (i.e., the start-traffic-proxy instruction and/or the update-static-route instruction in the present application). If a start-traffic-proxy instruction is received, the service agent starts the load balancing service to select a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request. That is, the service agent starts a load balancer and a reverse proxy service and redirects the traffic that accesses the service resource and its corresponding port to the local load balancer and reverse proxy; according to the configuration information issued by the service interface module 133, the load balancer and reverse proxy select a target Pod from the plurality of Pods corresponding to the service resource through a round-robin algorithm and determine the IP address corresponding to the target Pod, so that traffic is forwarded based on that IP address and the target Pod directly responds to the external request without depending on the service resource; in other words, the external request is forwarded directly to the Pod without passing through the service resource. If an update-static-route instruction is received, the service agent determines the target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface. That is, the service agent determines the target container network interface from the plurality of container network interfaces corresponding to the Pod hosting it according to the backend network object in the configuration information, creates a static route in the Pod's network namespace, and forwards the traffic that accesses the service resource through the target container network interface; the Pod hosting the service agent thus relies on the service resource to respond to the external request indirectly. In other words, the service resource provides the access entry for the Pod, traffic is forwarded between the service resource and the Pod through the target container network interface (i.e., the container network interface matching the backend network object of the service resource), the external request is forwarded to the Pod through the service resource, and the Pod indirectly responds to the external request based on the target container network interface. In general, the static route has a higher priority than the default route, which achieves the goal of forwarding traffic through the target container network interface matching the service resource.
Taking the service agent 1421 as an example, if the service agent 1421 receives a start-traffic-proxy instruction, it starts the traffic proxy service, i.e., the load balancer and the reverse proxy service. According to the configuration information issued by the service interface module 133, the load balancer and reverse proxy select a target Pod (here Pod2) from the plurality of Pods corresponding to the service resource 15 (namely Pod1, Pod2, Pod3, and Pod4) through a round-robin algorithm and determine the IP address corresponding to Pod2, so that traffic is forwarded based on that IP address and the external request is answered directly at Pod2's address. In other words, traffic that would otherwise travel from one Pod to the service resource and then on to another Pod is forwarded directly from Pod to Pod, making up for the defect that some CNI plug-ins lack the capability for Pods to access service resources.
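A minimal sketch of such a round-robin reverse proxy in Go is given below. The listening address and the backend list are placeholder parameters; in the described system they would come from the configuration information issued by the service interface module 133.

```go
package agent

import (
	"log"
	"net/http"
	"net/http/httputil"
	"sync"
)

// roundRobin cycles through the backend Pod addresses.
type roundRobin struct {
	mu       sync.Mutex
	backends []string // "ip:port" of each Pod behind the service resource
	next     int
}

func (r *roundRobin) pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.backends) == 0 {
		return ""
	}
	b := r.backends[r.next%len(r.backends)]
	r.next++
	return b
}

// StartTrafficProxy starts a reverse proxy that spreads requests over the
// given Pod addresses in round-robin order.
func StartTrafficProxy(listenAddr string, backends []string) error {
	rr := &roundRobin{backends: backends}
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			req.URL.Scheme = "http"
			req.URL.Host = rr.pick() // forward directly to the chosen Pod
		},
	}
	log.Printf("traffic proxy listening on %s", listenAddr)
	return http.ListenAndServe(listenAddr, proxy)
}
```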
Referring to fig. 3, if the service agent 1421 receives an update-static-route instruction, the service agent 1421 determines the target container network interface (i.e., eth1) from the plurality of container network interfaces corresponding to Pod2 (i.e., eth1 and eth0) according to the backend network object 151 in the configuration information (which may also be understood as the backend network object 151 corresponding to the service resource 15), creates a static route in Pod2's network namespace, and forwards the traffic that accesses the service resource 15 through the target container network interface (i.e., eth1), thereby forwarding traffic through the target container network interface that matches the service resource 15. Similarly, if the service agent 1411 receives an update-static-route instruction, the service agent 1411 determines the target container network interface (i.e., eth1) from the plurality of container network interfaces corresponding to Pod1 (i.e., eth1 and eth0) according to the backend network object 151, creates a static route in Pod1's network namespace, and forwards the traffic that accesses the service resource 15 through the target container network interface (i.e., eth1).
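Because the sidecar shares the Pod's network namespace, installing such a static route can be as simple as invoking the ip tool from within the agent container. The following Go sketch does exactly that; the service address and interface name are placeholder arguments rather than values taken from the application.

```go
package agent

import (
	"fmt"
	"os/exec"
)

// UpdateStaticRoute installs (or replaces) a static route inside the Pod's
// network namespace so that traffic towards the service address leaves
// through the interface matching the service's backend network object,
// e.g. UpdateStaticRoute("10.96.0.10/32", "eth1").
func UpdateStaticRoute(serviceCIDR, iface string) error {
	// "replace" keeps the call idempotent if the route already exists.
	cmd := exec.Command("ip", "route", "replace", serviceCIDR, "dev", iface)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("ip route replace failed: %v: %s", err, out)
	}
	return nil
}
```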
In the application, a service agent is created in each Pod corresponding to a namespace carrying the preconfigured label on a working node contained in the cluster, changes of service resources in the working node are monitored, the configuration information corresponding to the service resource is acquired, and whether the configuration information contains the custom forwarding annotation is judged. If yes, a target Pod is selected from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request, overcoming the defect that some CNI plug-ins in the prior art lack the capability for Pods to access service resources. If not, a target container network interface matching the backend network object contained in the configuration information is determined from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface; this realizes access through the container network interface matching the backend network object of the service resource and finally achieves fine-grained management of the traffic with which Pods access service resources.
Referring to fig. 4, based on the same inventive concept, this embodiment also discloses a container communication system 200 of a multi-container network interface (hereinafter the "system 200"). The system 200 includes an acquisition module 201 and an execution module 202. The acquisition module 201 is used for an admission control module deployed in the cluster to perform a listening operation on an API server deployed in a control node contained in the cluster, create a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitor changes of service resources in the working node, acquire the configuration information corresponding to the service resource, and judge whether the configuration information contains the custom forwarding annotation. If yes, the execution module 202 selects a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to the external request; if not, the execution module 202 determines a target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to the external request based on the target container network interface.
It should be noted that, the logic included in the step S1 and the step S2 in the above-mentioned container communication method of a multi-container network interface is implemented by the acquisition module 201 in the container communication system 200 of a multi-container network interface, and the logic included in the step S3 and the step S4 in the above-mentioned container communication method of a multi-container network interface is implemented by the execution module 202 in the container communication system 200 of a multi-container network interface.
The above list of detailed descriptions is only specific to practical embodiments of the present application, and they are not intended to limit the scope of the present application, and all equivalent embodiments or modifications that do not depart from the spirit of the present application should be included in the scope of the present application.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. Those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may be appropriately combined to form other embodiments that can be understood by those skilled in the art.
Claims (10)
1. A method of container communication for a multi-container network interface, comprising:
an admission control module deployed in a cluster performs a listening operation on an API server deployed in a control node contained in the cluster, creates a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitors changes of service resources in the working node, acquires the configuration information corresponding to a service resource, and judges whether the configuration information contains a custom forwarding annotation;
if yes, selecting a target Pod from a plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request;
if not, determining a target container network interface matching the backend network object contained in the configuration information from a plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface.
2. The container communication method of a multi-container network interface of claim 1, wherein creating a service agent within the Pod comprises:
and the access control module executes monitoring operation on the API server, intercepts the Pod processing request when the Pod processing request is monitored, acquires a name space to which the Pod corresponding to the Pod processing request belongs, executes modification operation on the Pod processing request when the name space to which the Pod belongs is marked as a pre-configured label, and creates a service agent in the Pod based on the modified Pod processing request by the API server.
3. The container communication method of a multi-container network interface of claim 2, wherein the service agent is deployed in a sidecar mode within a Pod;
performing the modification operation on the Pod processing request comprises:
adding a service agent container to the container list contained in the specification corresponding to the Pod to form the modified Pod processing request.
4. The container communication method of claim 2, wherein monitoring changes of service resources in the working node is implemented by a service management module deployed in the cluster performing a listening operation on the API server, and the service management module acquires the configuration information corresponding to the service resource.
5. The method of container communication for a multi-container network interface as recited in claim 4, further comprising:
and the service management module executes monitoring operation on the API server, and determines a back-end network object corresponding to the service resource based on whether the service resource processing request carries the designated network annotation when the service resource processing request is monitored.
6. The container communication method of claim 5, wherein determining the backend network object corresponding to the service resource based on whether the service resource processing request carries a specified network annotation comprises:
judging whether the service resource processing request carries a specified network annotation;
if so, updating the backend network object corresponding to the service resource to the specified network object corresponding to the specified network annotation;
if not, refusing to update the backend network object corresponding to the service resource, so that the backend network object corresponding to the service resource is determined by the default network object.
7. The method for container communication of a multi-container network interface of claim 4, further comprising, after the service management module obtains the configuration information corresponding to the service resource:
the service management module issues the configuration information to a service interface module for judging whether the configuration information contains a custom forwarding annotation and is deployed in a cluster, and the service interface module generates a flow agent starting instruction or a static routing updating instruction according to a judging result and issues the flow agent starting instruction or the static routing updating instruction to a service agent deployed in a Pod.
8. The method for container communication of a multi-container network interface of claim 7 wherein,
if the configuration information contains the custom forwarding annotation, the service interface module issues a start-traffic-proxy instruction to the service agent, and the service agent starts a load balancing service to select a target Pod from the plurality of Pods corresponding to the service resource based on a round-robin algorithm, so that the target Pod directly responds to an external request;
if the configuration information does not contain the custom forwarding annotation, the service interface module issues an update-static-route instruction to the service agent, and the service agent determines a target container network interface matching the backend network object contained in the configuration information from the plurality of container network interfaces corresponding to the Pod hosting the service agent, so that the Pod hosting the service agent indirectly responds to an external request based on the target container network interface.
9. The method of claim 8, wherein a remote procedure call connection is established between the service interface module and the service agent through a remote procedure call protocol, and the service agent receives the start-traffic-proxy instruction and the update-static-route instruction issued by the service interface module over the remote procedure call connection.
10. A container communication system of a multi-container network interface, comprising:
the acquisition module is used for an admission control module deployed in the cluster to perform a listening operation on an API server deployed in a control node contained in the cluster, create a service agent in each Pod corresponding to a namespace carrying a preconfigured label on a working node contained in the cluster, monitor changes of service resources in the working node, acquire the configuration information corresponding to a service resource, and judge whether the configuration information contains a custom forwarding annotation;
the execution module is used for selecting a target Pod from a plurality of pods corresponding to the service resources based on a polling algorithm if the target Pod is in the service resource, so that the target Pod directly responds to an external request; if not, the execution module determines a target container network interface matched with the back-end network object contained in the configuration information from a plurality of container network interfaces corresponding to the Pod of the nano-tube service agent, so that the Pod of the nano-tube service agent indirectly responds to an external request based on the target container network interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310904443.2A CN116633775B (en) | 2023-07-24 | 2023-07-24 | Container communication method and system of multi-container network interface |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116633775A (en) | 2023-08-22 |
CN116633775B CN116633775B (en) | 2023-12-19 |
Family
ID=87603007
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310904443.2A Active CN116633775B (en) | 2023-07-24 | 2023-07-24 | Container communication method and system of multi-container network interface |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116633775B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110198231A (en) * | 2018-05-08 | 2019-09-03 | 腾讯科技(深圳)有限公司 | Capacitor network management method and system and middleware for multi-tenant |
CN114422351A (en) * | 2019-03-29 | 2022-04-29 | 瞻博网络公司 | Configuring a service load balancer with a designated back-end virtual network |
CN115665231A (en) * | 2022-10-25 | 2023-01-31 | 中国电信股份有限公司 | Service creation method, device and computer-readable storage medium |
CN115987872A (en) * | 2022-12-21 | 2023-04-18 | 北京中电普华信息技术有限公司 | Cloud system based on resource routing |
CN116016448A (en) * | 2022-11-30 | 2023-04-25 | 上海浦东发展银行股份有限公司 | Service network access method, device, equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117081959A (en) * | 2023-10-17 | 2023-11-17 | 明阳产业技术研究院(沈阳)有限公司 | Network connectivity monitoring and recovering method, system, medium and equipment |
CN117081959B (en) * | 2023-10-17 | 2023-12-22 | 明阳产业技术研究院(沈阳)有限公司 | Network connectivity monitoring and recovering method, system, medium and equipment |
CN117369946A (en) * | 2023-10-18 | 2024-01-09 | 中科驭数(北京)科技有限公司 | Container deployment method and device based on DPU, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN116633775B (en) | 2023-12-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11706102B2 (en) | Dynamically deployable self configuring distributed network management system | |
US10708376B2 (en) | Message bus service directory | |
CN116633775B (en) | Container communication method and system of multi-container network interface | |
WO2020147466A1 (en) | Method for invoking server and proxy server | |
US7257817B2 (en) | Virtual network with adaptive dispatcher | |
US7899047B2 (en) | Virtual network with adaptive dispatcher | |
CN106850324B (en) | Virtual network interface object | |
US11108653B2 (en) | Network service management method, related apparatus, and system | |
US20090150565A1 (en) | SOA infrastructure for application sensitive routing of web services | |
US10819659B2 (en) | Direct replying actions in SDN switches | |
EP4270204A1 (en) | Multi-cloud interface adaptation method and system based on micro-service, and storage medium | |
CN111258627A (en) | Interface document generation method and device | |
CN115086176B (en) | System for realizing dynamic issuing of service administration strategy based on spring cloud micro-service technology | |
US20110035477A1 (en) | Network clustering technology | |
WO2012119340A1 (en) | Method and apparatus for implementing north interface | |
CN113923122A (en) | Deriving network device and host connections | |
US7805733B2 (en) | Software implementation of hardware platform interface | |
CN115412549A (en) | Information configuration method and device and request processing method and device | |
CN114945023B (en) | Network connection multiplexing method, device, equipment and medium | |
EP4311280A1 (en) | Communication method and device | |
Li et al. | Design of General SDN Controller System Framework for Multi-domain Heterogeneous Networks | |
CN115378993A (en) | Method and system for service registration and discovery supporting namespace awareness | |
CN118524149A (en) | Micro-service management method, device, terminal equipment and storage medium | |
CN116546019A (en) | Traffic management method, device, equipment and medium based on service grid | |
CN116069753A (en) | Deposit calculation separation method, system, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||