CN117459468A - Method and system for processing service traffic in multi-CNI container network


Info

Publication number
CN117459468A
Authority
CN
China
Prior art keywords
network
cni
container
traffic
dpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311435191.XA
Other languages
Chinese (zh)
Inventor
雷冬
黄明亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yusur Technology Co ltd
Original Assignee
Yusur Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yusur Technology Co ltd filed Critical Yusur Technology Co ltd
Priority to CN202311435191.XA priority Critical patent/CN117459468A/en
Publication of CN117459468A publication Critical patent/CN117459468A/en
Pending legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/645 - Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method and a system for processing service traffic in a multi-CNI container network, where the multi-CNI container network comprises a first CNI network for carrying service management traffic and a second CNI network for carrying service data traffic. The method comprises the following steps: the first CNI network transmits the service management traffic generated by each container group, through a first-type network card, to the kernel protocol stack on the Host side of the multi-CNI container network, which forwards it according to a preset forwarding policy; the second CNI network offloads the service data traffic generated by each container group, through a second-type network card, to a DPU device and issues a flow table to the DPU device; the DPU device forwards the service data traffic using a pre-deployed traffic forwarding plane and the flow table. The invention solves the problem of heavy server CPU consumption when processing traffic in a multi-CNI container network.

Description

Method and system for processing service traffic in multi-CNI container network
Technical Field
The present invention relates to the field of multi-CNI network traffic processing technologies, and in particular, to a method and a system for processing traffic in a multi-CNI container network.
Background
Container virtualization technology refers to running multiple containers on one physical host, each container having an independent running environment; a host may expose multiple container network interfaces (Container Network Interface, CNI). A multi-CNI container network based on container virtualization technology deploys multiple CNIs in one container cloud to provide different container network functions and to construct a cloud server cluster, on which cloud service products implemented as container services are deployed through Docker technology. The container cloud is a cloud delivery model at the PaaS layer. A container cloud may be deployed in two ways: one is to deploy containers on virtual machines (as many traditional enterprises do); the other is to deploy containers directly on bare-metal servers. Kubernetes (K8s for short) is a portable and extensible open-source platform for managing containerized workloads and services and for building K8s clusters (one type of cloud server cluster); it facilitates declarative configuration and automation. CNI was originally a container network specification initiated by CoreOS and is the basis of Kubernetes network plug-ins. Its basic idea is as follows: when the container runtime creates a container, it first creates a network namespace, then calls the CNI plug-in to configure the network for this netns, and then starts the processes within the container. CNI has since joined the Cloud Native Computing Foundation (CNCF) and become a CNCF-endorsed network model.
In current container virtualization technology, owing to the service requirements of the containers in a cluster, multiple CNI networks as shown in fig. 2 are generally adopted to provide mutually isolated network planes for each container group (Pod of Containers, abbreviated POD), so that different service flows do not affect each other. Management traffic and service traffic can thus be separated, and the management network is not affected by the network congestion caused when growing service traffic occupies a large amount of bandwidth. For example, it is currently common to use Flannel as the default CNI to provide default management network support for a K8s network, and to use a mainstream CNI such as Calico, Macvlan, or SR-IOV CNI as a second CNI to provide a data network for services. A POD is the minimum deployment and management unit in a Kubernetes cluster; its containers are co-located and co-scheduled.
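In practice, such a second CNI network is commonly declared through the Multus meta-plugin as a NetworkAttachmentDefinition. The following is a minimal sketch assuming a Macvlan data network; the resource name, master interface, and subnet are illustrative assumptions rather than values taken from this disclosure:

```yaml
# Hedged sketch: a Multus NetworkAttachmentDefinition for a second (data)
# network using macvlan. The name, "master" interface, and subnet below
# are assumptions for illustration only.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-data-network
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "macvlan-data-network",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24"
      }
    }
```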
The current multi-CNI scheme can provide network isolation for different services and ensure that services with different requirements work on independent network planes, but each CNI uses the kernel protocol stack for network forwarding. Today, whether on a public cloud server or a private cloud server, network processing occupies a large share of CPU resources and preempts the CPU resources that should be dedicated to core business logic. Providing network isolation for cloud server clusters (including K8s clusters) through multiple CNIs therefore adds further consumption of cloud server CPU resources.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and system for processing traffic in a multi-CNI container network, which obviates or mitigates one or more of the drawbacks of the prior art.
In one aspect, the present invention provides a method for processing service traffic in a multi-CNI container network, where the multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying the service data traffic generated when container groups process services, and each container group included in the multi-CNI container network has a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network. The method includes the following steps:
the first CNI network transmits the service management traffic generated by each container group, through the first-type network card, to a kernel protocol stack on the Host side of the multi-CNI container network, and the kernel protocol stack forwards the service management traffic according to a preset forwarding policy;
the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding; a traffic forwarding plane for forwarding service data traffic is pre-deployed on the DPU device;
the DPU device forwards the service data traffic offloaded from the second CNI network according to the flow table, using the pre-deployed traffic forwarding plane.
In some embodiments of the present invention, the multi-CNI container network is created by a network orchestration tool for cloud-native container virtualization: the tool is installed and deployed on the control node and the cloud server nodes of a cloud server cluster to build a container virtualization cluster, and a multi-CNI support component is pre-deployed in the tool so that it supports a plurality of CNIs.
In some embodiments of the present invention, an interface detection plug-in is pre-deployed on the cloud-native container virtualization network orchestration tool, so that the second CNI network can discover physical function interfaces and/or virtual function interfaces on a server node; virtual function interfaces based on single-root I/O virtualization are provided through the interface detection plug-in.
In some embodiments of the present invention, the cloud-native container virtualization network orchestration tool is Kubernetes, the multi-CNI support component is Multus-CNI, and the method further comprises: deploying Calico CNI and ovn-kubernetes CNI on the installed Kubernetes, where the Calico CNI carries the first CNI network and the ovn-kubernetes CNI plug-in carries the second CNI network; the Calico CNI is set as the default CNI of Kubernetes.
In some embodiments of the invention, the method further comprises: the ovn-kubernetes CNI components pre-deployed on the control nodes of the cloud server cluster comprise ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode); the ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster comprise ovn-k8s-cni-overlay and ovnkube-node.
In some embodiments of the present invention, the first-type network card is a virtual network card interface, and the second-type network card is a virtual function interface supporting single-root I/O virtualization.
In some embodiments of the invention, the method further comprises: pre-deploying, in the system on chip of the DPU device, a traffic forwarding plane for forwarding service data traffic.
In some embodiments of the invention, the method further comprises: deploying in advance, in the system on chip of the DPU device using an application container engine, a second-CNI-network control component for supporting flow table issuing and a second-CNI-network node configuration component for setting interface configuration; the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch.
In some embodiments of the present invention, when a plurality of DPU devices are connected to the second CNI network, a data connection is established between different DPU devices through a switch or an optical switch, and a data connection is established between different container groups within the same DPU device.
In another aspect, the present invention provides a system for processing traffic in a multi-CNI container network, comprising a processor and a memory, the memory having stored therein computer instructions for executing the computer instructions stored in the memory, the system implementing the steps of the method according to any of the above embodiments when the computer instructions are executed by the processor.
According to the method and the system for processing service traffic in a multi-CNI container network provided by the invention, a DPU device is connected to the second CNI network, and service data traffic is forwarded on the DPU device's pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic whose forwarding was originally deployed on the Host side of the multi-CNI container network, i.e., the cloud server node, can thus be forwarded by a DPU device dedicated to data processing, which handles network packets efficiently. This effectively reduces the pressure on the server CPU and solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic: only the service management traffic is left for the server CPU, which can then focus on processing real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU resource utilization efficiency of the cloud server node.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. In the drawings:
Fig. 1 is a flowchart of a method for processing service traffic in a multi-CNI container network according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of network-plane traffic processing in a prior-art multi-CNI container network.
Fig. 3 is a schematic diagram of network-plane traffic processing in a multi-CNI container network according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of network-plane traffic processing in a multi-CNI container network built on K8s according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a multi-CNI container network built based on K8s on the Host side in a specific embodiment of the present invention.
Fig. 6 is a schematic structural diagram of a container group and DPU equipment on the Host side of a multi-CNI container network built based on K8s in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
It is also noted herein that the term "coupled" may refer to not only a direct connection, but also an indirect connection in which an intermediate is present, unless otherwise specified.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
In a network, each host, router, and server has a network layer that can be decomposed into two interacting parts: a data plane and a control plane. The data plane refers to the network-layer function of each host, router, or server that determines how a datagram (network packet) arriving at one of its input links is forwarded to one of its output links. In the present invention, the problem faced on the server side is that the large volume of service data traffic generated by the multi-CNI container network occupies a large share of server CPU resources. The server may be a physical server or a cloud server. The invention aims to provide a method for processing the service data traffic carried by a multi-CNI container network so as to relieve the heavy CPU occupation on the server side.
A data processing unit (DPU) is a large class of special-purpose processors that has emerged recently; it is the third important compute chip in the data center, after the CPU and GPU, providing a compute engine for high-bandwidth, low-latency, data-intensive computing scenarios. It is a new generation of data-centric, I/O-intensive compute chip that adopts a software-defined technology route to support virtualization of the infrastructure resource layer, with the advantages of improving the efficiency of the computing system, reducing the total cost of ownership of the whole system, improving data processing efficiency, and reducing the performance loss of other compute chips.
Kubernetes, also known as K8s, is an open-source system for automatically deploying, scaling, and managing containerized applications. It groups the containers that make up an application into logical units to facilitate management and service discovery. Kubernetes builds on Google's fifteen years of experience running production workloads at scale, combined with the best ideas and practices of the community. K8s is typically used to build a multi-CNI container network. Linux containers provide a lightweight virtualization method that can run multiple virtual environments (containers) simultaneously on a single host. Unlike technologies such as XEN or KVM, where the processor simulates an entire hardware environment and a hypervisor controls the virtual machines, containers provide virtualization at the operating-system level, with the kernel controlling the isolated containers. Kubernetes has a large and rapidly growing ecosystem, and its services, support, and tools are widely available. The name Kubernetes originates from Greek and means "helmsman" or "pilot"; the abbreviation K8s comes from the eight letters between the "k" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes cluster networks have some basic core network requirements, such as: each network interface on a Pod must have its own unique IP; a Pod should be able to communicate with any other Pod in the Kubernetes cluster without NAT; and agents on a node (e.g., system daemons, kubelet) must be able to communicate with all PODs on that node. Kubernetes does not itself satisfy these core network requirements with built-in tools or components; instead, it provides the core network capabilities through network plug-ins compliant with the Container Network Interface (CNI) specification. Well-known CNIs such as Flannel and Calico can provide a Kubernetes cluster network with the capabilities the core network requires.
The Container Network Interface, abbreviated CNI, is a project under the CNCF umbrella, consisting of a set of specifications and libraries for configuring the network interfaces of Linux containers, together with a number of plug-ins. CNI concerns itself only with network allocation when a container is created and with releasing network resources when a container is deleted.
OVS (Open vSwitch) aims to provide a fully functional virtual switch for Linux-based hypervisors. As a virtual switch, it supports port mirroring, VLANs, and a number of other network monitoring protocols.
To address the consumption of Host-side (i.e., server-side) CPU by the network-plane traffic of multi-CNI container networks constructed in the prior art, the invention provides a method for processing service traffic in a multi-CNI container network. It is realized by offloading large-scale service data traffic to a DPU device: the DPU device and the server are connected to the same network plane through a switch, and the proposed traffic offloading strategy is implemented by programs pre-deployed inside the DPU device.
Fig. 1 is a flowchart of a method for processing service traffic in a multi-CNI container network according to an embodiment of the present invention. The multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying the service data traffic generated when container groups process services, and each container group included in the multi-CNI container network has a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network. The method includes the following steps:
step S110: the first CNI network transmits the service management flow generated by each container group to a kernel protocol stack at the Host side of the multi-CNI container network through a first type network card, and the kernel protocol stack forwards the service management flow according to a preset forwarding strategy.
A container group (POD, Pod of Containers) contains one or more containers plus the resources for managing them, and is an abstraction over one service (process) or a set of services (processes). Network and storage may be shared within a POD (which may be understood loosely as a logical virtual machine, though it is not a virtual machine). In some embodiments of the present invention, each CNI network represents one network plane: the first CNI network is a control plane carrying control-plane traffic, the second CNI network is a data plane carrying data-plane traffic, and the volume of data-plane traffic is much larger than that of control-plane traffic.
In a specific implementation, the Host side of the multi-CNI container network refers to the physical host on which the containers run; the CNI plug-ins run on the Host side and are responsible for creating and configuring network interfaces for the containers. In a cloud server cluster environment, the Host side is the server side, and the other side is the DPU computing card side. The DPU computing card assists the server in traffic forwarding and is mainly responsible for handling the high-volume service data traffic; it is connected through a switch or an optical switch to the second CNI network, which is located in the cloud server cluster. In cloud computing, the Host side typically refers to a physical device provided by the cloud service provider.
Step S120: the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding; a traffic forwarding plane for forwarding service data traffic is pre-deployed on the DPU device.
In a specific implementation, the DPU device and the Host-side server are both connected, through a switch, to the second CNI network whose traffic is to be offloaded. Because different CNI networks can handle different services, there may be more than one second CNI network processing service data traffic, with different functions realized by different second CNI networks; the traffic offloading principle is the same for all second CNI networks used for service processing. Flow-table forwarding is usually performed directly by the switch hardware. Based on OpenFlow technology, service data traffic from the second CNI network generates a flow table on the switch when it arrives there; the flow table travels with the service data traffic to the DPU device connected to the second CNI network, and the DPU device forwards the service data traffic based on the flow table and the pre-deployed OVS.
Step S130: the DPU device forwards the service data traffic offloaded from the second CNI network according to the flow table, using the pre-deployed traffic forwarding plane for forwarding service data traffic.
In a specific implementation, the traffic forwarding plane for forwarding service data traffic can be pre-deployed on the DPU device through an OVS plug-in. This plug-in implementation is merely an example, and the present invention is not limited thereto.
According to the method for processing service traffic in a multi-CNI container network provided by this embodiment, a DPU device is connected to the second CNI network, and service data traffic is forwarded on the DPU device's pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic whose forwarding was originally deployed on the Host side of the multi-CNI container network, i.e., the cloud server node, can thus be forwarded by a DPU device dedicated to data processing, which handles network packets efficiently. This effectively reduces the pressure on the server CPU and solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic: only the service management traffic is left for the server CPU, which can then focus on processing real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU resource utilization efficiency of the cloud server node.
As shown in fig. 3, the present invention adds a DPU device to the arrangement of fig. 2, deploys OVS on the DPU SoC (system on chip) to forward the offloaded service data traffic, and adapts the interfaces accordingly into a veth interface and a VF interface.
In a specific implementation, the multi-CNI container network needs to be built in advance. It can be created with the network orchestration tool for cloud-native container virtualization: the tool is installed and deployed on the control node and the cloud server nodes of the cloud server cluster to build the container virtualization cluster, and the multi-CNI support component is deployed in the tool in advance so that it supports a plurality of CNIs. For example, Kubernetes is installed and deployed on the Master node and the cloud server nodes of the cloud server cluster to form the container virtualization cluster; the control node is called the Master node, and the working nodes are called worker nodes.
In some embodiments of the present invention, an interface detection plug-in is further deployed in advance on the cloud-native container virtualization network orchestration tool, so that the second CNI network can discover physical function interfaces and/or virtual function interfaces on a server node; virtual function interfaces based on single-root I/O virtualization are provided through the interface detection plug-in. For example, in the Kubernetes scenario the interface detection plug-in is the SR-IOV Network Device Plugin, which lets ovn-kubernetes CNI discover physical function (PF) and/or virtual function (VF) interfaces on a server node and which additionally provides the capability of SR-IOV VF interfaces. An SR-IOV virtual function (VF) is a lightweight PCI Express (PCIe) function on a network adapter that supports single-root I/O virtualization (SR-IOV).
With this embodiment, the multi-CNI container network can detect the attached network interfaces, identifying the physical function interfaces and virtual function interfaces that are connected, ensuring that the functions proposed by this scheme can be realized smoothly.
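As an illustration of how such interface resources are typically published to the cluster, the following is a minimal sketch of an SR-IOV Network Device Plugin configuration; the resource name and the vendor/driver selectors are illustrative assumptions, not values specified by this disclosure:

```yaml
# Hedged sketch: ConfigMap consumed by the SR-IOV Network Device Plugin.
# It publishes the matching VFs as a schedulable node resource. The
# resource name and selector values below are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "sriov_vf_pool",
          "selectors": {
            "vendors": ["15b3"],
            "drivers": ["mlx5_core"]
          }
        }
      ]
    }
```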
In a specific implementation, the cloud-native container virtualization network orchestration tool is Kubernetes and the multi-CNI support component is Multus-CNI, and the method further comprises: (1) deploying Calico CNI and ovn-kubernetes CNI on the installed Kubernetes, where the Calico CNI carries the first CNI network and the ovn-kubernetes CNI plug-in carries the second CNI network; (2) setting the Calico CNI as the default CNI of Kubernetes. Calico is an open-source container cluster network CNI plug-in that forwards network traffic via the Linux kernel protocol stack; it provides IP allocation for containers in a cloud-native virtualization cluster and basic network support such as connectivity among containers, virtual machines, and hosts, and is applicable to PaaS or IaaS platforms such as Kubernetes, OpenShift, Docker EE, and OpenStack. Noun interpretation: PaaS (Platform-as-a-Service), IaaS (Infrastructure-as-a-Service), SaaS (Software-as-a-Service). The OVN-Kubernetes network plug-in is an open-source, fully functional Kubernetes CNI plug-in that uses Open Virtual Network (OVN) to manage network traffic and provides an overlay-based network implementation. A cluster using the OVN-Kubernetes plug-in runs an Open vSwitch (OVS) on each node; OVN configures the OVS on each node to implement the declared network configuration. A series of OVN-Kubernetes daemons convert the container cluster's virtual network configuration into OpenFlow rules. OpenFlow is a protocol for communicating with network switches and routers; it provides a method of remotely controlling the traffic flows on network devices, enabling network administrators to configure, manage, and monitor network traffic flows.
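To make the dual-CNI setup concrete, the following is a minimal sketch of a Multus NetworkAttachmentDefinition declaring the ovn-kubernetes data network; the names and the resource annotation are illustrative assumptions consistent with the components named above, not values fixed by this disclosure:

```yaml
# Hedged sketch: Multus NetworkAttachmentDefinition for the second (data)
# CNI network backed by ovn-kubernetes. Names and annotations are assumptions.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-network
  namespace: kube-system
  annotations:
    # Binds attachments to the VF resource published by the SR-IOV device plugin.
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_vf_pool
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ovn-network",
      "type": "ovn-k8s-cni-overlay"
    }
```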
In some embodiments of the invention, the method further comprises: the ovn-kubernetes CNI components pre-deployed on the control nodes of the cloud server cluster comprise ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode); the ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster comprise ovn-k8s-cni-overlay and ovnkube-node (in dpu-host mode). With the plug-ins pre-deployed as above, the service data traffic of the second CNI network can be offloaded to the DPU device and forwarded there.
Specifically, Multus-CNI is a plug-in that enables multiple CNIs to be supported simultaneously in a Kubernetes container cluster. For a long time, Kubernetes lacked support for the multiple network interfaces required in virtualized networks. Traditionally, networks use multiple network interfaces to separate the control and management planes from the user/data plane; multiple interfaces are also used to support different protocols and to satisfy different regulatory and configuration requirements. To address this need, Intel implemented the Multus CNI plug-in, which provides the ability to add multiple interfaces to a Pod. This allows a POD to connect to multiple networks through different interfaces, and each interface uses its own CNI plug-in (the CNI plug-ins may be the same or different, depending entirely on the requirements and implementation). The SR-IOV Network Device Plugin extends the functionality of Kubernetes by discovering and publishing SR-IOV physical function interfaces (PF), virtual function interfaces (VF), and auxiliary network devices, in particular sub-function interfaces (SF), thereby addressing the high-performance network I/O problem.
In a specific embodiment of the present invention, the first-type network card is a virtual network card interface, and the second-type network card is a virtual function interface supporting single-root I/O virtualization. The invention is not limited thereto; the specific interface types above are merely examples.
In yet another embodiment of the present invention, the method further comprises: pre-deploying, in the system on chip of the DPU device, a traffic forwarding plane for forwarding service data traffic. Typically, this step is implemented by pre-deploying OVS as the forwarding plane of ovn-kubernetes CNI on the system on chip (SoC) of the DPU device. The invention is not limited thereto; the OVS plug-in above is only an example.
In some embodiments of the invention, the method further comprises: deploying in advance, in the system on chip of the DPU device using an application container engine, a second-CNI-network control component for supporting flow table issuing and a second-CNI-network node configuration component for setting interface configuration; the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch. In a specific implementation, for the ovn-kubernetes CNI supporting the second CNI network, an OVN controller (ovn-controller) is deployed on the system on chip of the DPU device using an application container engine (Docker), together with an ovnkube-node component configured to issue OpenFlow tables and OVS interface configuration for the OVS forwarding plane. The OVS interface configuration refers to configuring the PF and VFs.
In some embodiments of the present invention, when a plurality of DPU devices are connected to the second CNI network, data connection is established between different DPU devices through a switch or an optical switch, and data connection is established between different container groups in the same DPU device. A switch is a network device that connects different devices in a network and forwards packets to the correct destination device based on the destination address in the packet. The switch may transmit data using an electrical signal or an optical signal, wherein the switch that transmits data using an optical signal is an optical switch that employs an optical fiber as a transmission medium.
Specifically, taking K8s as the cloud-native container virtualization network orchestration tool, and taking Calico and ovn-kubernetes as the CNI plug-ins used by the container multi-CNI, the following describes how the CNI that occupies more CPU among the multiple CNIs is offloaded to the DPU.
In the prior art, both the forwarding plane of Calico and the forwarding plane of ovn-kubernetes are deployed on the server node. In this embodiment, we offload the forwarding plane of the second CNI, ovn-kubernetes, which carries the high-volume data traffic, onto the DPU SoC, and let the dedicated DPU data processor handle the data traffic, thereby relieving the server Host side of a large amount of data processing and saving its CPU. It should be noted that the forwarding plane, the service plane, and the control plane are the three main functional components of a router architecture. The forwarding plane determines what processing is performed on packets arriving at an interface, for example selecting a suitable routing path from the routing table according to the destination IP address and other attributes in the packet; the service plane is responsible for computing and processing user service data and is mainly handled by the router CPU; the control plane implements interconnection of the network topology, mainly using various routing protocols to control which routes enter the corresponding protocol routing tables.
In order to implement the flow offloading scheme proposed by the present invention, the required preparation steps include:
(1) Install and deploy Kubernetes on the Master control node and the cloud server nodes of the cloud server cluster to form the container virtualization cluster.
(2) Deploy Calico CNI on the installed Kubernetes as the K8s default CNI.
(3) Deploy Multus-CNI so that the K8s cluster can support multiple CNIs.
(4) Deploy ovn-kubernetes CNI based on Kubernetes as the K8s second CNI. The ovn-kubernetes CNI components to deploy on the master node comprise ovs, ovnkube-master, ovnkube-db, and ovnkube-node (full mode); the ovn-kubernetes CNI components to deploy on the host side of the worker nodes comprise ovn-k8s-cni-overlay and ovnkube-node (dpu-host mode).
(5) Deploy the SR-IOV Network Device Plugin, so that ovn-kubernetes can discover PF/VF interface resources on a server node and provide SR-IOV VF interfaces to Pods.
(6) Deploy OVS on the DPU SoC (system on chip) of each worker node as the forwarding plane of ovn-kubernetes CNI.
(7) Deploy an OVN controller (ovn-controller) on the DPU SoC using Docker, together with an ovnkube-node component that issues OpenFlow tables and OVS interface configuration for the OVS forwarding plane.
(8) Deploy a service Pod in the Kubernetes cluster. In the annotations field of the YAML configuration file defining the Pod, set v1.multus-cni.io/default-network to kube-system/calico-network, designating the default network as the Calico CNI, and set k8s.v1.cni.cncf.io/networks to kube-system/ovn-network@eth1, indicating that a dual-CNI network is used and that the second CNI is the ovn-kubernetes network (see the example manifest after this list).
(9) After the service container groups (Pods) are created, each container group has two network cards, corresponding respectively to the Calico CNI network (i.e., the first CNI network) and the ovn-kubernetes CNI network (i.e., the second CNI network). The Calico CNI network card is a veth interface: after leaving the Pod, the traffic enters the host kernel protocol stack and is forwarded according to Calico's iptables and routing policies. The ovn-kubernetes network card is an SR-IOV VF interface: after leaving the Pod, the traffic goes to the DPU SoC side, enters the OVS forwarding plane from the VF representor, and is forwarded according to the flow table issued by OVN to OVS, so that it does not occupy CPU resources on the Host side (i.e., the server side).
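A minimal sketch of the Pod manifest described in step (8) follows; the image, resource requests, and network names are illustrative assumptions consistent with the annotation keys above, not values fixed by this disclosure:

```yaml
# Hedged sketch: a service Pod attached to both CNI networks via Multus.
# Image, resource name, and network names are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: service-pod
  annotations:
    # First CNI (management network): Calico as the Multus default network.
    v1.multus-cni.io/default-network: kube-system/calico-network
    # Second CNI (data network): ovn-kubernetes attached as eth1.
    k8s.v1.cni.cncf.io/networks: kube-system/ovn-network@eth1
spec:
  containers:
  - name: service
    image: nginx:alpine
    resources:
      requests:
        intel.com/sriov_vf_pool: "1"   # one SR-IOV VF from the published pool
      limits:
        intel.com/sriov_vf_pool: "1"
```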
As shown in fig. 4, fig. 5, and fig. 6, in the K8s container cluster the POD has multiple CNI networks (multiple CNIs provide different networks for the same POD): the Calico CNI serves as the first CNI and carries the service management network, and the ovn-kubernetes CNI serves as the second CNI and carries the high-volume data network generated by the services. The DPU SoC (system on chip) here refers to the operating system running on the DPU network card. Fig. 5 and fig. 6 are enlarged partial detail views of fig. 4.
Correspondingly, the present invention also provides a system for processing service traffic in a multi-CNI container network. The system comprises a computer device including a processor and a memory, the memory having computer instructions stored therein and the processor being configured to execute them; when the computer instructions are executed by the processor, the system implements the steps of the method described above.
Embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above. The computer-readable storage medium may be a tangible storage medium such as random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a floppy disk, a hard disk, a removable memory disk, a CD-ROM, or any other form of storage medium known in the art.
According to the method and the system for processing service traffic in a multi-CNI container network provided by the invention, a DPU device is connected to the second CNI network, and service data traffic is forwarded on the DPU device's pre-deployed traffic forwarding plane based on the received flow table for guiding traffic forwarding. Service data traffic whose forwarding was originally deployed on the Host side of the multi-CNI container network, i.e., the cloud server node, can thus be forwarded by a DPU device dedicated to data processing, which handles network packets efficiently. This effectively reduces the pressure on the server CPU and solves the problem of heavy server CPU consumption when processing multi-CNI container network traffic: only the service management traffic is left for the server CPU, which can then focus on processing real business logic. Meanwhile, the invention preserves the multi-CNI container network's isolation between the management network carrying service management traffic and the data network carrying service data traffic, and markedly improves the CPU resource utilization efficiency of the cloud server node. In addition, the invention enumerates an implementation of the traffic offloading path and the plug-ins that need to be deployed, ensuring that the steps of the proposed traffic processing method can be realized smoothly.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is hardware or software depends on the specific application of the solution and its design constraints. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, and so on. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for processing service traffic in a multi-CNI container network, wherein the multi-CNI container network includes a first CNI network for carrying service management traffic and a second CNI network for carrying the service data traffic generated when container groups process services, and each container group included in the multi-CNI container network is provided with a first-type network card corresponding to the first CNI network and a second-type network card corresponding to the second CNI network, the method comprising the steps of:
the first CNI network transmits the service management traffic generated by each container group, through the first-type network card, to a kernel protocol stack on the Host side of the multi-CNI container network, and the kernel protocol stack forwards the service management traffic according to a preset forwarding policy;
the second CNI network offloads the service data traffic generated by each container group, through the second-type network card, to a DPU device connected to the second CNI network, and issues to the DPU device a flow table for guiding traffic forwarding; a traffic forwarding plane for forwarding service data traffic is pre-deployed on the DPU device;
the DPU device forwards the service data traffic offloaded from the second CNI network according to the flow table, using the pre-deployed traffic forwarding plane.
2. The method of claim 1, wherein the multi-CNI container network is created by a network orchestration tool for cloud-native container virtualization, the network orchestration tool being installed and deployed on the control node and the cloud server nodes of a cloud server cluster to build a container virtualization cluster, and a multi-CNI support component being pre-deployed in the network orchestration tool so that it supports a plurality of CNIs.
3. The method according to claim 1, wherein an interface detection plug-in is pre-deployed on the cloud-native container virtualization network orchestration tool, enabling the second CNI network to discover physical function interfaces and/or virtual function interfaces on server nodes, virtual function interfaces based on single-root I/O virtualization being provided through the interface detection plug-in.
4. The method of claim 1, wherein the cloud-native container virtualization network orchestration tool is Kubernetes and the multi-CNI support component is Multus-CNI, the method further comprising:
deploying Calico CNI and ovn-kubernetes CNI on the installed Kubernetes, the Calico CNI carrying the first CNI network and the ovn-kubernetes CNI plug-in carrying the second CNI network;
wherein the Calico CNI is set as the default CNI of Kubernetes.
5. The method according to claim 1, further comprising:
pre-deploying, on the control nodes of the cloud server cluster, ovn-kubernetes CNI components comprising ovs, ovnkube-master, ovnkube-db, and ovnkube-node (in full mode);
wherein the ovn-kubernetes CNI components on the Host side of the worker nodes of the cloud server cluster comprise ovn-k8s-cni-overlay and ovnkube-node.
6. The method of claim 1, wherein the first type of network card is a virtual network card interface and the second type of network card is a virtual function interface supporting single root I/O virtualization.
7. The method of claim 1, wherein a traffic forwarding plane for forwarding service data traffic is pre-deployed in the system on chip of the DPU device.
8. The method of claim 7, further comprising:
deploying in advance, in the system on chip of the DPU device using an application container engine, a second-CNI-network control component for supporting flow table issuing and a second-CNI-network node configuration component for setting interface configuration;
wherein the server and the DPU device are connected to the multi-CNI container network through a switch and/or an optical switch.
9. The method of claim 8, wherein when a plurality of DPU devices are connected to the second CNI network, a data connection is established between different DPU devices through a switch or an optical switch, and a data connection is established between different container groups within the same DPU device.
10. A system for DPU offloading of data-network-plane traffic in a multi-CNI container network, comprising a processor and a memory, wherein the memory has computer instructions stored therein and the processor is configured to execute the computer instructions stored in the memory, the system implementing the steps of the method of any one of claims 1 to 9 when the computer instructions are executed by the processor.
CN202311435191.XA 2023-10-31 2023-10-31 Method and system for processing service traffic in multi-CNI container network Pending CN117459468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311435191.XA CN117459468A (en) 2023-10-31 2023-10-31 Method and system for processing service traffic in multi-CNI container network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311435191.XA CN117459468A (en) 2023-10-31 2023-10-31 Method and system for processing service traffic in multi-CNI container network

Publications (1)

Publication Number Publication Date
CN117459468A true CN117459468A (en) 2024-01-26

Family

ID=89590650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311435191.XA Pending CN117459468A (en) 2023-10-31 2023-10-31 Method and system for processing service traffic in multi-CNI container network

Country Status (1)

Country Link
CN (1) CN117459468A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination