WO2022128068A1 - Technique for implementing packet processing in a cloud computing environment - Google Patents


Info

Publication number
WO2022128068A1
Authority
WO
WIPO (PCT)
Application number
PCT/EP2020/086141
Other languages
French (fr)
Inventor
Jan Scheurich
Stefan Behrens
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2020/086141
Publication of WO2022128068A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0894: Policy-based network configuration management
    • H04L 41/40: Arrangements for maintenance, administration or management of data switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/302: Route determination based on requested QoS
    • H04L 45/306: Route determination based on the nature of the carried application



Abstract

A technique for implementing packet processing in a cloud computing environment is disclosed. A method implementation of the technique is performed by a virtual switch configurator and comprises receiving (S302), from each of a plurality of cloud native packet processing functions, a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch of the cloud computing environment, and configuring (S304) the virtual switch based on the received sets of packet processing instructions to implement a packet processing pipeline in the virtual switch reflecting the received sets of packet processing instructions.

Description

Technique for implementing packet processing in a cloud computing environment
Technical Field
The present disclosure generally relates to cloud computing. In particular, a technique for implementing packet processing in a cloud computing environment is presented. The technique may be embodied in methods, computer programs, apparatuses and systems.
Background
In recent years, cloud native computing has evolved as a conceptual approach to develop and operate applications in a cloud. Cloud native applications are designed to be specifically operated in a cloud computing environment rather than in a dedicated on-premise execution environment. According to the definition of the Cloud Native Computing Foundation (CNCF), cloud native computing utilizes cloud computing to build and run scalable applications in modern, dynamic environments, such as public, private and hybrid clouds. Common elements of this architectural approach include technologies such as containers, microservices, service meshes, immutable infrastructure and declarative Application Programming Interfaces (APIs).
Cloud native applications are typically built as a set of microservices that run in (e.g., Docker) containers, following the concept of many small, modular units of execution (i.e., the microservices) each designed to carry out small tasks and decoupled from each other, which are then bundled into containers to realize more complex application behaviors. A container runs in a virtualized environment which isolates the contained applications from their environment. Containers may be orchestrated by a container orchestration system, such as Kubernetes, for example.
Cloud native network applications are often based on packet processing functions (e.g., implemented as microservices) connected in the form of a network service mesh. A service mesh is an architecture which provides fast and reliable communication between microservices and which typically comprises networking functions for application identification, load balancing, authentication and encryption, for example. In service meshes, network requests between microservices are routed via proxies (also called "sidecars") which together form a mesh network connecting the individual microservices. Like other software defined networking architectures, a service mesh may be separated into a control plane and a data plane. A so-called Network Service Mesh (NSM) is a specific form of a service mesh which maps the concept of a service mesh to L2/L3 payloads to solve use cases in Kubernetes which are difficult to address with the original Kubernetes network model.
While the modularity in modern cloud native application environments generally simplifies application lifecycle management and provides flexibility at runtime to modify and scale functionality, the distributed modular nature of such applications may have a severe impact on the processing and forwarding efficiency of packets. In the case of network applications in particular, the forwarding performance may be negatively impacted by the number of transitions a packet has to go through between the (operating system) kernel space and the user space, network namespaces, hosts or virtual machines (VMs), for example. Also, the encapsulation/de-encapsulation of packets with overlay networks and the forwarding of the packets between multiple entities may reduce efficiency of a network service in terms of its computational processing and memory resource usage.
Figure 1 exemplarily illustrates an overview of a conventional network service mesh in a cloud native container environment. As shown in the figure, packets originating from an application network function 102 may travel across multiple packet processing functions 104-1 and 104-2 (e.g., implemented as microservices) according to some defined service mesh topology. To do so, the packets may need to transit between multiple network namespaces and, in case of kernel forwarding, transit between the kernel space and the user space inside of each computing unit. Packets may also need to be sent between computing units, implying encapsulation/de-encapsulation with overlay tunnel headers and forwarding the packets from the computing units through a network fabric to other computing units.
All this may incur significant overhead in terms of Central Processing Unit (CPU) and memory usage and may limit the maximum achievable throughput. Attempts have been made to alleviate such negative impacts by employing Data Plane Development Kit (DPDK) based forwarding entities, for example. DPDK provides data plane libraries which allow offloading Transmission Control Protocol (TCP)/Internet Protocol (IP) stack packet processing from the operating system kernel to processes running in the user space. While this may increase throughput, the overhead incurred by encapsulating and transferring packets between the computing units still persists.
Summary
Accordingly, there is a need for a technique which enables increased network throughput for segmented and modular network applications in a cloud computing environment, preferably avoiding one or more of the problems discussed above.
According to a first aspect, a method for implementing packet processing in a cloud computing environment is provided. The method is performed by a virtual switch configurator and comprises receiving, from each of a plurality of cloud native packet processing functions, a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch of the cloud computing environment. The method further comprises configuring the virtual switch based on the received sets of packet processing instructions to implement a packet processing pipeline in the virtual switch reflecting the received sets of packet processing instructions.
The one or more rules may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. The plurality of cloud native packet processing functions may be distributed across multiple hosts in the cloud computing environment. The packet processing pipeline may comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions.
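The notion of a "set of packet processing instructions" indicating flow rules can be illustrated with a minimal sketch. The field names (`table_id`, `priority`, `match`, `actions`) mirror common OpenFlow concepts, and all concrete values and the NAT example are illustrative assumptions, not the patent's concrete encoding:

```python
from dataclasses import dataclass

# Illustrative OpenFlow-style flow rule; the field names and example
# values are assumptions chosen for illustration only.
@dataclass
class FlowRule:
    table_id: int   # pipeline table this rule lives in
    priority: int   # higher priority wins on overlapping matches
    match: dict     # e.g. {"in_port": "pf1-eth0", "ip_dst": "192.0.2.10"}
    actions: list   # e.g. ["set_field:10.0.0.2->ip_dst", "output:pf1-eth1"]

# One cloud native packet processing function's set of packet processing
# instructions is then an ordered collection of such rules; here a
# hypothetical NAT function maps a public address to an internal one.
nat_instructions = [
    FlowRule(table_id=0, priority=100,
             match={"in_port": "pf1-eth0", "ip_dst": "192.0.2.10"},
             actions=["set_field:10.0.0.2->ip_dst", "output:pf1-eth1"]),
    FlowRule(table_id=0, priority=0, match={}, actions=["drop"]),
]
```

The low-priority catch-all rule shows how a function can express a default behavior alongside its specific flow rules.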
The cloud native packet processing functions may be part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. The service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment. The aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. Forming the aggregate pipeline may include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline.
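Linking the individual pipeline portions into an aggregate pipeline according to the mesh topology can be sketched as follows, for the simple case of a linear chain. The function names, per-function table counts and the `goto_table` stitching are illustrative assumptions:

```python
# Sketch of forming an aggregate pipeline: each function's individual
# pipeline portion is assigned a disjoint range of flow tables, and the
# portions are chained in service-mesh order via "goto_table" actions.
def link_pipeline(portions, chain):
    """portions: {function_name: number_of_tables_it_needs};
    chain: function names in (linear) service-mesh order.
    Returns the table-range layout plus the goto action that stitches
    each portion to the first table of the next one."""
    layout, base = {}, 0
    for name in chain:
        layout[name] = (base, base + portions[name] - 1)
        base += portions[name]
    gotos = {name: f"goto_table:{layout[chain[i + 1]][0]}"
             for i, name in enumerate(chain[:-1])}
    return layout, gotos
```

A real topology may branch rather than form a chain; the same table-range assignment would then apply per path through the mesh.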
Each of the plurality of individual pipeline portions may be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be applied by the cloud native packet processing function which provided the set of packet processing instructions.
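The per-sandbox modification restriction can be sketched as an ownership check on each pipeline portion. The class and method names are illustrative assumptions:

```python
# Sketch of per-portion sandboxing: each pipeline portion records the
# cloud native packet processing function that owns it, and only that
# owner may modify the rules the portion contains.
class PipelineSandbox:
    def __init__(self, owner: str):
        self.owner = owner
        self.rules: list = []

    def modify(self, caller: str, rules: list) -> None:
        # Modifications are restricted to the function that provided
        # the set of packet processing instructions.
        if caller != self.owner:
            raise PermissionError(
                f"{caller} may not modify the portion owned by {self.owner}")
        self.rules = list(rules)
```

This segregates execution of the instruction sets: a misbehaving function cannot rewrite another function's portion of the pipeline.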
The method may further comprise translating port names received from the plurality of cloud native packet processing functions into port names available at the virtual switch. In some variants, packet processing instructions reflected in the packet processing pipeline are to be translated in the virtual switch into packet processing operations suitable for offloading into a physical network interface for direct processing of packets by the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries.
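The port-name translation step can be sketched as a rewrite of the names a function uses locally (e.g., interface names inside its pod) into the names the virtual switch actually exposes. The rule encoding and the mapping values are illustrative assumptions:

```python
# Sketch of port-name translation between a packet processing function's
# local view and the virtual switch's port namespace.
def translate_ports(rules, port_map):
    translated = []
    for rule in rules:
        match = {k: (port_map.get(v, v) if k == "in_port" else v)
                 for k, v in rule["match"].items()}
        actions = []
        for act in rule["actions"]:
            if act.startswith("output:"):
                port = act.split(":", 1)[1]
                act = "output:" + port_map.get(port, port)
            actions.append(act)
        translated.append({**rule, "match": match, "actions": actions})
    return translated
```

The originals are left untouched so the configurator can re-translate if ports are renamed at runtime.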
By default, packets communicated through the cloud computing environment in flows may be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions. The received sets of packet processing instructions may relate to selected ones of the flows and configuring the virtual switch based on the received sets of packet processing instructions may overrule the default to process packets of the selected ones of the flows in the virtual switch without forwarding them to one of the plurality of cloud native packet processing functions. Overruling the default may be performed at runtime of the virtual switch.
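This default-and-overrule behavior can be sketched with priority-based rule lookup: a lowest-priority default rule punts packets to the packet processing function, while instructions for selected flows install higher-priority rules handled entirely in the switch. Port names and match fields are illustrative assumptions:

```python
# Highest-priority matching rule wins, as in an OpenFlow table lookup.
def lookup(table, packet):
    best = None
    for rule in table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            if best is None or rule["priority"] > best["priority"]:
                best = rule
    return best["actions"] if best else ["drop"]

# Default behavior: forward every flow to the packet processing function.
table = [{"priority": 0, "match": {}, "actions": ["output:to-ppf"]}]

# Overruling the default at runtime for one selected flow: the switch
# now processes it directly, without the detour through the function.
table.append({"priority": 100,
              "match": {"ip_dst": "10.0.0.2"},
              "actions": ["output:vhost-app"]})
```

Non-selected flows still hit only the default rule and continue to be forwarded to the function, so the overrule is selective per flow.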
According to a second aspect, a method for implementing packet processing in a cloud computing environment is provided. The method is performed by a virtual switch of the cloud computing environment and comprises applying a configuration to implement a packet processing pipeline in the virtual switch reflecting a plurality of sets of packet processing instructions each indicating one or more rules for processing of packets by the virtual switch. Each of the sets of packet processing instructions originates from one of a plurality of cloud native packet processing functions and is provided to the virtual switch from a virtual switch configurator.
The method according to the second aspect defines a method from the perspective of a virtual switch which may be complementary to the method performed by the virtual switch configurator according to the first aspect. The virtual switch and the virtual switch configurator of the second aspect may thus correspond to the virtual switch and the virtual switch configurator described above in relation to the first aspect.
As in the method of the first aspect, the one or more rules may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. The plurality of cloud native packet processing functions may be distributed across multiple hosts in the cloud computing environment. The packet processing pipeline may comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions.
The cloud native packet processing functions may be part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. The service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment. The aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. Forming the aggregate pipeline may include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline.
Each of the plurality of individual pipeline portions may be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be originated by the cloud native packet processing function which provided the set of packet processing instructions.
The method may further comprise translating packet processing instructions reflected in the packet processing pipeline into packet processing operations suitable for offloading into a physical network interface for direct processing of packets by the physical network interface. The method may further comprise offloading the translated packet processing instructions into the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries. By default, packets communicated through the cloud computing environment in flows may be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions. The sets of packet processing instructions may relate to selected ones of the flows and applying the configuration to reflect the sets of packet processing instructions may overrule the default to process packets of the selected ones of the flows in the virtual switch without forwarding them to one of the plurality of cloud native packet processing functions. Overruling the default may be performed at runtime of the virtual switch.
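The translation into offloadable flow cache entries can be sketched by walking the multi-table pipeline once for a concrete packet and collapsing all applied actions into a single match/action entry, a megaflow-style simplification. All names and the rule encoding are illustrative assumptions:

```python
# Highest-priority matching rule wins, as in an OpenFlow table lookup.
def lookup(table, packet):
    best = None
    for rule in table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            if best is None or rule["priority"] > best["priority"]:
                best = rule
    return best["actions"] if best else []

def build_cache_entry(pipeline, packet):
    # Walk the multi-table pipeline once for this packet and collapse the
    # traversal into a single match -> actions entry that a physical NIC
    # could apply without re-running the pipeline.
    actions, table_id, visited = [], 0, set()
    while table_id is not None and table_id not in visited:
        visited.add(table_id)
        next_table = None
        for act in lookup(pipeline[table_id], packet):
            if act.startswith("goto_table:"):
                next_table = int(act.split(":", 1)[1])
            else:
                actions.append(act)
        table_id = next_table
    return {"match": dict(packet), "actions": actions}
```

A practical implementation would widen the match to only the fields the traversal actually inspected, so one cache entry covers a whole flow rather than a single packet.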
According to a third aspect, a method for implementing packet processing in a cloud computing environment is provided. The method is performed by a cloud native packet processing function and comprises sending, to a virtual switch configurator, a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch of the cloud computing environment. The virtual switch configurator is to configure the virtual switch based on the set of packet processing instructions to implement a packet processing pipeline in the virtual switch reflecting the set of packet processing instructions.
The method according to the third aspect defines a method from the perspective of a cloud native packet processing function which may be complementary to the method performed by the virtual switch configurator according to the first aspect. The virtual switch configurator and the cloud native packet processing function of the third aspect may thus correspond to the virtual switch configurator and one of the plurality of cloud native packet processing functions described above in relation to the first aspect.
As in the method of the first aspect, the one or more rules may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. The plurality of cloud native packet processing functions may be distributed across multiple hosts in the cloud computing environment. The packet processing pipeline may comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions.
The cloud native packet processing function may be one of a plurality of cloud native packet processing functions which are part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. The service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment. The aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. Forming the aggregate pipeline may include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline.
In some variants, each of the plurality of individual pipeline portions is to be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be applied by the cloud native packet processing function which provided the set of packet processing instructions.
In some variants, packet processing instructions reflected in the packet processing pipeline are to be translated in the virtual switch into packet processing operations suitable for offloading into a physical network interface for direct processing of packets by the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries.
By default, packets communicated through the cloud computing environment in flows may be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions. The set of packet processing instructions may relate to a selected one of the flows and configuring the virtual switch based on the set of packet processing instructions may overrule the default to process packets of the selected one of the flows in the virtual switch without forwarding them to the cloud native packet processing function. Overruling the default may be performed at runtime of the virtual switch.
According to a fourth aspect, a computer program product is provided. The computer program product comprises program code portions for performing the method of at least one of the first, the second and the third aspect when the computer program product is executed on one or more computing devices (e.g., a processor or a distributed set of processors). The computer program product may be stored on a computer readable recording medium, such as a semiconductor memory, DVD, CD-ROM, and so on.
According to a fifth aspect, a computing unit configured to execute a virtual switch configurator for implementing packet processing in a cloud computing environment is provided. The computing unit comprises at least one processor and at least one memory, wherein the at least one memory contains instructions executable by the at least one processor such that the virtual switch configurator is operable to perform any of the method steps presented herein with respect to the first aspect.
According to a sixth aspect, a computing unit configured to execute a virtual switch for implementing packet processing in a cloud computing environment is provided. The computing unit comprises at least one processor and at least one memory, wherein the at least one memory contains instructions executable by the at least one processor such that the virtual switch is operable to perform any of the method steps presented herein with respect to the second aspect.
According to a seventh aspect, a computing unit configured to execute a cloud native packet processing function for implementing packet processing in a cloud computing environment is provided. The computing unit comprises at least one processor and at least one memory, wherein the at least one memory contains instructions executable by the at least one processor such that the cloud native packet processing function is operable to perform any of the method steps presented herein with respect to the third aspect.
According to an eighth aspect, there is provided a system comprising a computing unit according to the fifth aspect and at least one of a computing unit according to the sixth aspect and a computing unit according to the seventh aspect.
Brief Description of the Drawings
Implementations of the technique presented herein are described herein below with reference to the accompanying drawings, in which:
Fig. 1 illustrates an overview of an exemplary conventional network service mesh comprising cloud native packet processing functions in a cloud native container environment;
Figs. 2a to 2c illustrate exemplary compositions of a computing unit configured to execute a virtual switch configurator, a computing unit configured to execute a virtual switch, and a computing unit configured to execute a cloud native packet processing function according to the present disclosure;
Fig. 3 illustrates a method which may be performed by the virtual switch configurator according to the present disclosure;
Fig. 4 illustrates an overview of an exemplary network service mesh comprising a plurality of cloud native packet processing functions according to the present disclosure;
Fig. 5 illustrates an architecture according to the present disclosure in more general form;
Fig. 6 illustrates an exemplary network service mesh according to the present disclosure in which "selective flow offloading" is used;
Fig. 7 illustrates a method which may be performed by the virtual switch according to the present disclosure; and
Fig. 8 illustrates a method which may be performed by a cloud native packet processing function according to the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details.
Those skilled in the art will further appreciate that the steps, services and functions explained herein below may be implemented using individual hardware circuitry, using software functioning in conjunction with a programmed micro-processor or general purpose computer, using one or more Application Specific Integrated Circuits (ASICs) and/or using one or more Digital Signal Processors (DSPs) or Data Processing Units (DPUs). It will also be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories are encoded with one or more programs that perform the steps, services and functions disclosed herein when executed by the one or more processors.
Figure 1 illustrates an overview of an exemplary conventional network service mesh comprising a plurality of cloud native packet processing functions. In the shown example, packets originating from an application network function 102 may travel across multiple packet processing functions, exemplified by packet processing functions 104-1 and 104-2 in the figure (arranged in sequence, thus forming a very simple network service mesh topology), before they may be conveyed to an external network through which they may reach a destination, such as an external gateway 106, for example. In the shown example, kernel forwarding may be applied so that packets may transit between the kernel space and the user space at each of the application network function 102 and the packet processing functions 104-1 and 104-2. Between the functions 102, 104-1 and 104-2, the packets may be conveyed through a cloud-internal network 108 (e.g., orchestrated by a container orchestration system, such as Kubernetes) once they have passed respective interfaces (denoted "int" in the figure) of the computing units on which the functions 102, 104-1 and 104-2 are executed. As exemplarily shown in the figure, an interface of each container may be implemented by a standard primary container network solution, such as Calico, for example.
In such a conventional scenario, packets communicated through the cloud-internal network 108 may be forwarded to each of the cloud native packet processing functions to be processed therein, which may cause packets to transit between multiple network namespaces and between the kernel and user spaces inside each of the computing units. Also, encapsulation/de-encapsulation with overlay tunnel headers enabling the transmission of the packets between the computing units and the external network may be required. This may incur significant overhead in terms of processor and memory usage, thereby limiting the maximum achievable network throughput. These issues may be addressed by the principles of the technique presented herein, as will be described in detail below.
Figure 2a schematically illustrates an exemplary composition of a computing unit 200 configured to execute a virtual switch configurator for implementing packet processing in a cloud computing environment. The computing unit 200 comprises at least one processor 202 and at least one memory 204, wherein the at least one memory 204 contains instructions executable by the at least one processor 202 such that the virtual switch configurator is operable to carry out the method steps described herein below with reference to the virtual switch configurator.
Figure 2b schematically illustrates an exemplary composition of a computing unit 210 configured to execute a virtual switch for implementing packet processing in a cloud computing environment. The computing unit 210 comprises at least one processor 212 and at least one memory 214, wherein the at least one memory 214 contains instructions executable by the at least one processor 212 such that the virtual switch is operable to carry out the method steps described herein below with reference to the virtual switch.
Figure 2c schematically illustrates an exemplary composition of a computing unit 220 configured to execute a cloud native packet processing function for implementing packet processing in a cloud computing environment. The computing unit 220 comprises at least one processor 222 and at least one memory 224, wherein the at least one memory 224 contains instructions executable by the at least one processor 222 such that the cloud native packet processing function is operable to carry out the method steps described herein below with reference to a cloud native packet processing function.
It will be understood that each of the computing units 200, 210 and 220 may be implemented in the form of a component executed on one or more distributed computing units in the cloud computing environment. Each of the computing units 200, 210 and 220 may be implemented on a physical or virtualized computing unit, such as a virtual machine, for example.
Figure 3 illustrates a method which may be performed by the virtual switch configurator executed on the computing unit 200 according to the present disclosure. The method is dedicated to implementing packet processing in a cloud computing environment. In step S302, the virtual switch configurator may receive, from each of a plurality of cloud native packet processing functions, a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch of the cloud computing environment. In step S304, the virtual switch configurator may configure the virtual switch based on the received sets of packet processing instructions to implement a packet processing pipeline in the virtual switch reflecting the received sets of packet processing instructions. Thus, according to the technique presented herein, a packet processing pipeline may be implemented in a virtual switch, and packets communicated through the cloud computing environment (e.g., in the cloud-internal network) may be processed in the virtual switch in compliance with packet processing instructions prescribed by the cloud native packet processing functions. Rather than being forwarded to the cloud native packet processing functions to be processed therein, as in the conventional systems described above, packets may be processed directly in the data plane (i.e., in the virtual switch), without having to go through the multiple kernel and user space transitions and encapsulation/de-encapsulation processes that generally limit the throughput of conventional systems. Increased network throughput may therefore be achieved.
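Steps S302 and S304 can be sketched as a minimal configurator that collects one instruction set per cloud native packet processing function and then pushes the union into the switch. The switch interface (`install_rules`) and all names here are illustrative assumptions, not the patent's concrete API:

```python
# Stand-in for the virtual switch's configuration interface (assumed).
class FakeVirtualSwitch:
    def __init__(self):
        self.pipeline = []

    def install_rules(self, rules):
        self.pipeline = list(rules)

class VirtualSwitchConfigurator:
    def __init__(self, switch):
        self.switch = switch
        self.instruction_sets = {}

    def receive(self, function_id, rules):
        # Step S302: one set of packet processing instructions per function.
        self.instruction_sets[function_id] = list(rules)

    def configure(self):
        # Step S304: implement a pipeline reflecting all received sets.
        pipeline = [rule for rules in self.instruction_sets.values()
                    for rule in rules]
        self.switch.install_rules(pipeline)
        return pipeline

switch = FakeVirtualSwitch()
configurator = VirtualSwitchConfigurator(switch)
configurator.receive("fw", [{"match": {}, "actions": ["drop"]}])
configurator.receive("nat", [{"match": {"ip_dst": "192.0.2.10"},
                              "actions": ["output:1"]}])
configurator.configure()
```

In a full implementation, `configure()` would also perform the linking, consolidation and port-name translation described elsewhere in this disclosure rather than a plain concatenation.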
Packet processing logic may in other words be shifted to the data plane, whereas the cloud native packet processing functions (while still being modular) may be reduced to (e.g., mere) control plane entities. The cloud computing environment may as such be separated into a control plane and a data plane, wherein control plane entities (represented by the cloud native packet processing functions) may make decisions on how packets are to be processed (e.g., where network traffic is to be sent) and instruct (or "configure") the underlying network components, i.e., the data plane entities which actually process the packets (represented by the virtual switch, e.g., which forwards the network traffic to selected destinations), to process the packets accordingly. To increase network throughput, the technique presented herein may also be said to "offload" packet processing operations from the cloud native packet processing functions to the performance-optimized data plane.
The virtual switch configurator may be an intermediate component (or "entity") between the cloud native processing functions and the virtual switch of the cloud computing environment. As said, the virtual switch configurator may receive (or "obtain"), from each of the plurality of cloud native packet processing functions, a set of packet processing instructions and use such set of processing instructions to configure the virtual switch accordingly, i.e., to implement the packet processing pipeline in the virtual switch. Each set of packet processing instructions may indicate one or more rules for the processing of packets, such as forwarding rules between the network interfaces that a cloud native packet processing function owns, for example. To this end, the virtual switch configurator may provide an API through which each of the cloud native packet processing functions may program, via the virtual switch configurator, its packet processing logic into the virtual switch by way of corresponding packet processing rules (e.g., forwarding rules). The virtual switch configurator may in other words be capable of processing packet processing instructions (e.g., forwarding intents) from a multitude of packet processing functions, and may then write corresponding instructions into the virtual switch.
Packets may be communicated through the cloud computing environment in flows and, therefore, in some variants, the one or more rules indicated by a set of packet processing instructions may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. OpenFlow protocols may generally be used by control plane entities to program the behavior of data plane entities, such as virtual switches, by installing flow rules (also called "flow entries") in the switches. A flow rule may comprise a set of match fields which are applied to packets arriving at the switch. If a packet matches the match fields, a set of associated actions may be executed on the packet. Flow rules may be organized in flow tables and a sequence of flow tables may form an OpenFlow pipeline, wherein matching starts at the first flow table and may continue at subsequent tables of the pipeline. Each cloud native packet processing function may thus program, via the virtual switch configurator, its part of the packet processing logic through an OpenFlow-based API into the virtual switch. The OpenFlow protocol is defined, for example, in the OpenFlow Switch Specification, version 1.5.1 (protocol version 0x06) of the Open Networking Foundation (ONF).
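The table-based matching described above may be illustrated by the following sketch. Match field names and the rule layout are simplified stand-ins, not the OpenFlow wire format; a "goto_table" entry models the OpenFlow Goto-Table instruction that continues matching at a subsequent flow table.

```python
def match(rule, packet):
    """A rule matches if all of its match fields equal the packet's fields."""
    return all(packet.get(k) == v for k, v in rule["match"].items())


def process(pipeline, packet):
    """Run a packet through a sequence of flow tables; collect actions."""
    actions, table_id = [], 0
    while table_id < len(pipeline):
        for rule in pipeline[table_id]:        # rules within one flow table
            if match(rule, packet):
                actions += rule["actions"]
                if rule.get("goto_table") is not None:
                    table_id = rule["goto_table"]   # continue at later table
                    break
                return actions                 # no goto: pipeline ends here
        else:
            return actions                     # table miss: stop processing
    return actions


pipeline = [
    [{"match": {"in_port": 1}, "actions": ["set_vlan:10"], "goto_table": 1}],
    [{"match": {"ip_dst": "10.0.0.1"}, "actions": ["output:2"]}],
]
pkt = {"in_port": 1, "ip_dst": "10.0.0.1"}
print(process(pipeline, pkt))  # ['set_vlan:10', 'output:2']
```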
The cloud computing environment may be organized in the form of a service mesh (e.g., a Kubernetes-orchestrated NSM). Like in the example of Figure 1, packets to be processed in the packet processing pipeline may originate from one or more cloud native application network functions and may then undergo processing in the packet processing pipeline of the virtual switch before the packets are forwarded to a destination, such as to an external gateway via an external network, for example. The service mesh may as such not only comprise the packet processing functions, but also one or more application network functions that implement application logic. The cloud computing environment may be a cloud native computing environment which may be implemented under the provisions of (e.g., using technologies provided by) the CNCF. At least some of the cloud native packet processing functions may either be co-located on the same host, or they may be distributed across a cluster of hosts. In one variant, the plurality of cloud native processing functions may thus be distributed across multiple hosts in the cloud computing environment. The cloud native processing functions may be implemented as containers in a cloud native environment or, more specifically, they may be implemented as microservices that may, optionally, be run in (e.g., Docker) containers, for example.
Each set of packet processing instructions from the plurality of cloud native packet processing functions may be transformed into an individual pipeline portion and the packet processing pipeline implemented in the virtual switch may be assembled (or "combined") from the individual pipeline portions. The packet processing pipeline may thus comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions. Based on the service mesh topology, additional logic may link together the individual pipeline portions of the various packet processing functions to form an aggregate pipeline comprising the entire service mesh processing logic. The cloud native packet processing functions may thus be part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. As said, the service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment.
In one variant, the aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. When aggregating the pipeline, the pipeline may be optimized (or "simplified") by consolidating potentially redundant or contradictory packet processing instructions received from the various packet processing functions. The virtual switch configurator may in other words collect packet processing instructions (e.g., forwarding requests) from a multitude of control plane entities, each of which may represent a different packet processing function. Consolidating the packet processing instructions may include removing redundant or contradictory packet processing instructions from the entirety of packet processing instructions received from the cloud native packet processing functions as well as combining the remaining packet processing instructions into one common set of packet processing instructions. Forming the aggregate pipeline may in other words include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline. The final set of packet processing instructions after consolidation (or "simplification") may then be programmed into the virtual switch (e.g., as OpenFlow rules). In order to separate application flows from infrastructure flows, the virtual switch configurator may implement sandboxes to contain and restrict packet processing instructions originating from individual control plane functions, i.e., individual cloud native packet processing functions.
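A simple consolidation step may look as follows. This is an illustrative sketch only: the conflict-resolution policy (keep the rule of the portion that appears first in service-mesh order, drop later duplicates or contradictions on the same match) is one possible choice, not mandated by the disclosure.

```python
def consolidate(pipeline_portions):
    """Merge per-function rule lists into one common, de-duplicated rule set."""
    common, seen_matches = [], set()
    for portion in pipeline_portions:          # portions in service-mesh order
        for rule in portion:
            key = tuple(sorted(rule["match"].items()))
            if key in seen_matches:
                continue                       # redundant or contradictory: drop
            seen_matches.add(key)
            common.append(rule)
    return common


portion_a = [{"match": {"ip_dst": "10.0.0.1"}, "actions": ["output:2"]}]
portion_b = [
    {"match": {"ip_dst": "10.0.0.1"}, "actions": ["output:2"]},  # redundant
    {"match": {"in_port": 3}, "actions": ["drop"]},
]
# Two rules remain: the shared match appears once, plus the drop rule.
print(consolidate([portion_a, portion_b]))
```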
When the packet processing pipeline comprises a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions, as described above, each of the plurality of individual pipeline portions may thus be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. As known to one of skill in the art, a sandbox may be a security mechanism for separating running programs, usually to prevent system failures and/or software vulnerabilities from spreading. A sandbox may be implemented by a restricted operating system environment with a tightly controlled set of resources for programs to run on it, to thereby avoid risking harm to the host machine or operating system. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be applied by the cloud native packet processing function which (e.g., initially) provided the set of packet processing instructions. By thereby allowing packet processing functions to manipulate flows only in their respective sandbox, the infrastructure control plane may remain in control of the overall integrity and security of the data plane.
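The sandbox restriction described above, i.e., that only the providing packet processing function may modify its own pipeline portion, may be sketched as follows (all names are hypothetical):

```python
class PipelineSandbox:
    """One sandboxed pipeline portion, owned by a single control plane function."""
    def __init__(self, owner):
        self.owner = owner
        self.rules = []

    def modify(self, requester, rules):
        # Only the cloud native packet processing function that provided this
        # portion may modify the instructions handled in its sandbox.
        if requester != self.owner:
            raise PermissionError(
                f"{requester} may not modify the portion owned by {self.owner}")
        self.rules = list(rules)


sandbox = PipelineSandbox(owner="vpn-gw")
sandbox.modify("vpn-gw", [{"match": {"in_port": 1}, "actions": ["output:2"]}])
try:
    sandbox.modify("fe-xc", [])        # a different function: rejected
except PermissionError as err:
    print(err)
```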
As said, the virtual switch configurator may be an intermediate component between the plurality of cloud native packet processing functions (residing in the control plane) and the virtual switch (residing in the data plane). The virtual switch configurator may as such need to translate between input and output port names as seen by the control plane and translate them into port designations and directions in the data plane. The method performed by the virtual switch configurator may thus further comprise translating port names received from the plurality of cloud native packet processing functions into port names available at the virtual switch.
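The port-name translation may amount to a simple mapping step, sketched below. The mapping values and the rule layout are invented for illustration; control-plane names on the left are rewritten into the port designations actually available at the virtual switch.

```python
PORT_MAP = {                    # control-plane name -> data-plane designation
    "vpn-gw/uplink": "vs-port-7",
    "fe-xc/east":    "vs-port-12",
}


def translate_ports(rule):
    """Return a copy of the rule with control-plane port names rewritten."""
    translated = dict(rule)
    if rule.get("in_port") in PORT_MAP:
        translated["in_port"] = PORT_MAP[rule["in_port"]]
    translated["actions"] = [
        f"output:{PORT_MAP[a.split(':', 1)[1]]}" if a.startswith("output:") else a
        for a in rule.get("actions", [])
    ]
    return translated


rule = {"in_port": "vpn-gw/uplink", "actions": ["output:fe-xc/east"]}
print(translate_ports(rule))
# {'in_port': 'vs-port-7', 'actions': ['output:vs-port-12']}
```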
In the virtual switch, further optimizations may take place. During packet processing, the virtual switch may translate the packet processing (e.g., OpenFlow) pipeline into a set of operations suitable for offloading into packet processing hardware, such as into simple flow cache entries for efficient processing and forwarding of packets in a given datapath. Such flow cache entries may then be offloaded to suitable physical network interfaces, such as Network Interface Cards (NICs) that allow direct processing of packets by the NIC hardware, like SmartNICs, for example. A SmartNIC may be a NIC which is capable of handling offloaded processing tasks that a system CPU would normally handle. Typically, a SmartNIC may comprise its own on-board processor by which it may be able to perform encryption/decryption, firewall, TCP/IP and/or Hypertext Transfer Protocol (HTTP) processing, or the like. In some variants, packet processing instructions reflected in the packet processing pipeline are thus to be translated in the virtual switch into packet processing operations suitable for offloading into at least one physical network interface for direct processing of packets by the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries. As a result of such processing, packets may be directly exchanged between the application network function and the physical network interfaces, while the physical network interfaces may execute the joint set of packet processing instructions prescribed by the plurality of cloud native packet processing functions.
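The flow cache idea may be sketched as follows: instead of re-walking the multi-table pipeline for every packet, the result of one traversal is flattened into a single exact-match entry whose lookup is cheap and offloadable. The entry layout is a simplified illustration (loosely in the spirit of Open vSwitch's flow cache), not the format of any particular NIC.

```python
def cache_entry(packet, final_actions):
    """Flatten a pipeline traversal result into one exact-match cache entry."""
    return {"match": dict(packet), "actions": list(final_actions)}


flow_cache = {}


def lookup_or_process(packet, run_pipeline):
    key = tuple(sorted(packet.items()))
    if key not in flow_cache:
        # Slow path: run the full (e.g., OpenFlow) pipeline once, then cache.
        flow_cache[key] = cache_entry(packet, run_pipeline(packet))
    # Fast path: a single lookup; entries of this shape could be offloaded
    # to a physical network interface such as a SmartNIC.
    return flow_cache[key]["actions"]


actions = lookup_or_process({"in_port": 1, "ip_dst": "10.0.0.1"},
                            lambda p: ["set_vlan:10", "output:2"])
print(actions)  # ['set_vlan:10', 'output:2']
```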
Figure 4 illustrates an overview of an exemplary network service mesh comprising a plurality of cloud native packet processing functions according to the present disclosure. Like the network service mesh of Figure 1, the service mesh of Figure 4 comprises an application network function 402 from which packets may originate and travel across multiple packet processing functions, exemplified by packet processing functions 404-1 and 404-2 in the figure (again arranged sequentially, thus forming a very simple network service mesh topology), before they may be conveyed to an external network through which they may reach a destination, such as an external gateway 406, for example. Contrary to the conventional system of Figure 1, however, packets communicated through the cloud computing environment may not be forwarded to the cloud native packet processing functions 404-1 and 404-2 to be processed therein, but may rather be processed on data plane level at the virtual switch 408, as described above.
A virtual switch configurator 410 may thus be provided as an intermediate entity between the virtual switch 408 and the cloud native packet processing functions 404-1 and 404-2 which, in the shown example, provides an OpenFlow API towards the cloud native packet processing functions 404-1 and 404-2, allowing the cloud native packet processing functions 404-1 and 404-2 to program, via the switch configurator 410, the sets of packet processing instructions into the virtual switch 408, as described above. As a mere example, the cloud native packet processing function 404-1 may be given by a Virtual Private Network (VPN) gateway/load-balancing ("VPN Gw/LB") controller and the cloud native packet processing function 404-2 may be given by a Fast Ethernet cross connect ("FE XC") controller. For each of these functions, an individual OpenFlow pipeline 412-1 and 412-2 may be created and programmed into the virtual switch 408 by the virtual switch configurator 410, wherein the OpenFlow pipelines 412-1 and 412-2 may correspond to individual pipeline portions of the overall packet processing pipeline 414 eventually installed in the virtual switch 408. As a mere example, the virtual switch 408 may be an Open vSwitch based switch.
To reduce the complexity of the OpenFlow pipeline and offload corresponding packet processing operations, the virtual switch 408 may translate the complex OpenFlow pipeline into simple flow cache entries and offload these flow cache entries to a SmartNIC 416, where, in the shown example, a switchdev driver model is exemplarily indicated as mechanism that controls the physical functions ("PF") and virtual functions ("VF") of the SmartNIC 416 accordingly.
Figure 5 illustrates the suggested architecture in more general form. As shown, the virtual switch configurator 410 may receive, from each of a plurality of packet processing functions 404, a set of packet processing instructions reflecting its packet processing intent on how to treat packets. The virtual switch configurator 410 may further receive, from a mesh control function 502 of the cloud computing environment, processing instructions (or "rules") reflecting the service mesh topology. As an example, the service mesh topology may be given as a forwarding graph connecting multiple application network functions and packet processing functions. In the virtual switch configurator 410, these packet processing instructions and mesh topology rules may be combined/consolidated into one common set of rules. Optionally, the virtual switch configurator 410 may further receive application specific packet processing instructions from one or more application network functions 402 and consider such instructions in the generation of the common set of rules as well. The resulting flow rules may then be written into the virtual switch 408 to generate an aggregate flow pipeline 414. As indicated in the figure, sandboxes may be used for the individual pipeline portions 412, i.e., one sandbox per packet processing function 404 in the control plane. For direct processing of packets in forwarding hardware, corresponding flow cache entries (e.g., in the form of a forwarding table) may be offloaded to a hardware accelerated forwarding engine, such as the SmartNIC 416.
In accordance with the above description, packets communicated through the cloud computing environment may not be forwarded to the plurality of cloud native packet processing functions, i.e., contrary to conventional systems in which packets are by default forwarded to the packet processing functions to be processed therein, as described above. In one variant of the technique presented herein, a combination of these two approaches may be envisaged. In such variant, the cloud native packet processing functions may "offload" only a subset of packet processing operations to the performance-optimized data plane. In other words, by default, all packets may be sent to the packet processing functions to be processed therein, wherein each packet processing function may selectively install rules via the virtual switch configurator to shift at least some selected flows to the performance-optimized data plane. The virtual switch configurator may thus provide an interface through which a packet processing function may request that selected packets are not sent to it, but rather processed in the data plane. By default, packets communicated through the cloud computing environment in flows may thus be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions, wherein the received sets of packet processing instructions may relate to (e.g., only) selected ones of the flows, and wherein configuring the virtual switch based on the received sets of packet processing instructions may overrule the default to process packets of the selected ones of the flows in the virtual switch without forwarding them to one of the plurality of cloud native packet processing functions.
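The "selective flow offloading" behavior described above may be sketched as follows. The flow key (a source/destination address pair) and the rule layout are invented for illustration; the point is only that an installed offload rule overrides the default of forwarding the flow to its packet processing function.

```python
offload_rules = {}   # flow key -> actions installed via the configurator


def install_offload(flow_key, actions):
    """Called at runtime by a packet processing function to shift one flow."""
    offload_rules[flow_key] = actions


def handle(packet):
    key = (packet["ip_src"], packet["ip_dst"])
    if key in offload_rules:
        # Overruled default: process the selected flow in the virtual switch.
        return ("data-plane", offload_rules[key])
    # Default behavior: forward to the cloud native packet processing function.
    return ("control-plane", "forward-to-ppf")


pkt = {"ip_src": "10.0.0.2", "ip_dst": "10.0.0.1"}
print(handle(pkt))                                        # default path
install_offload(("10.0.0.2", "10.0.0.1"), ["output:2"])   # e.g. an elephant flow
print(handle(pkt))                                        # now offloaded
```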
Such type of "selective flow offloading" may be advantageous in that (1) existing packet processing functions that implement the data plane in software may be reused, (2) complex but low bandwidth processing may remain to be executed by software running in containers, (3) costly high-bandwidth ("elephant") flows with high throughput requirements but low processing complexity may be handled in the performance-optimized data plane, optionally accelerated in hardware, (4) functionality may be shifted between the two approaches gradually as needed, and (5) flows may be shifted "on demand", e.g., based on dynamic decisions taken at runtime, such as based on actual network load, for example. Overruling the default may thus be performed at runtime of the virtual switch.
Figure 6 illustrates an exemplary network service mesh in which "selective flow offloading" is used. The shown example corresponds to the example of Figure 4, focusing only on the cloud native packet processing function 404-1 for ease of illustration, wherein the only difference is that not all, but only some (selected) flows are "offloaded" to the performance-optimized data plane, so that the virtual switch 408 comprises only parts 412a of the VPN Gw/LB OpenFlow pipeline 412-1, whereas other flows (denoted by reference 602 in the figure) may be forwarded to the cloud native packet processing function 404-1 to be processed therein, i.e., in accordance with the default behavior of conventional systems.

Figure 7 illustrates a method which may be performed by the virtual switch executed on the computing unit 210 according to the present disclosure. The method is dedicated to implementing packet processing in a cloud computing environment. The operation of the virtual switch may be complementary to the operation of the virtual switch configurator described above and, as such, aspects described above with regard to the operation of the virtual switch may be applicable to the operation of the virtual switch described in the following as well. Unnecessary repetitions are thus omitted in the following.
In step S702, the virtual switch may apply a configuration to implement a packet processing pipeline in the virtual switch reflecting a plurality of sets of packet processing instructions each indicating one or more rules for processing of packets by the virtual switch, wherein each of the sets of packet processing instructions originates from one of a plurality of cloud native packet processing functions and is provided to the virtual switch from a virtual switch configurator (e.g., the virtual switch configurator executed on the computing unit 200).
As described above in relation to the previous figures, the one or more rules may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. The plurality of cloud native packet processing functions may be distributed across multiple hosts in the cloud computing environment. The packet processing pipeline may comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions.
The cloud native packet processing functions may be part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. The service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment. The aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. Forming the aggregate pipeline may include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline. Each of the plurality of individual pipeline portions may be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be originated by the cloud native packet processing function which provided the set of packet processing instructions.
The method performed by the virtual switch may further comprise translating packet processing instructions reflected in the packet processing pipeline into packet processing operations suitable for offloading into a physical network interface for direct processing of packets by the physical network interface. The method may further comprise offloading the translated packet processing instructions into the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries.
By default, packets communicated through the cloud computing environment in flows may be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions. The sets of packet processing instructions may relate to selected ones of the flows and applying the configuration to reflect the sets of packet processing instructions may overrule the default to process packets of the selected ones of the flows in the virtual switch without forwarding them to one of the plurality of cloud native packet processing functions. Overruling the default may be performed at runtime of the virtual switch.
Figure 8 illustrates a method which may be performed by the cloud native packet processing function executed on the computing unit 220 according to the present disclosure. The method is dedicated to implementing packet processing in a cloud computing environment. The operation of the cloud native packet processing function may be complementary to the operation of the virtual switch configurator described above and, as such, aspects described above with regard to the operation of the cloud native packet processing function may be applicable to the operation of the cloud native packet processing function described in the following as well. Unnecessary repetitions are thus omitted.
In step S802, the cloud native packet processing function may send, to a virtual switch configurator (e.g., the virtual switch configurator executed on the computing unit 200), a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch (e.g., the virtual switch executed on the computing unit 210) of the cloud computing environment, wherein the virtual switch configurator is to configure the virtual switch based on the set of packet processing instructions to implement a packet processing pipeline in the virtual switch reflecting the set of packet processing instructions.
As described above in relation to the previous figures, the one or more rules may correspond to flow rules and the packet processing pipeline may correspond to a flow pipeline, optionally complying with an OpenFlow protocol. The plurality of cloud native packet processing functions may be distributed across multiple hosts in the cloud computing environment. The packet processing pipeline may comprise a plurality of individual pipeline portions each representative of one of the sets of packet processing instructions.
The cloud native packet processing function may be one of a plurality of cloud native packet processing functions which are part of a service mesh of the cloud computing environment, wherein the packet processing pipeline may be implemented as an aggregate pipeline reflecting a topology of the service mesh. The service mesh may comprise the plurality of cloud native packet processing functions and, optionally, one or more cloud native application functions available in the cloud computing environment. The aggregate pipeline may be formed by linking the plurality of individual pipeline portions in accordance with the topology of the service mesh. Forming the aggregate pipeline may include consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions into common packet processing instructions of the packet processing pipeline.
In some variants, each of the plurality of individual pipeline portions is to be implemented in a sandbox in the virtual switch to segregate execution of each of the sets of packet processing instructions. For each sandbox, modifications to the set of packet processing instructions handled in the sandbox may be restricted to be applied by the cloud native packet processing function which provided the set of packet processing instructions.
In some variants, packet processing instructions reflected in the packet processing pipeline are to be translated in the virtual switch into packet processing operations suitable for offloading into a physical network interface for direct processing of packets by the physical network interface. The packet processing operations suitable for offloading into the physical network interface may comprise one or more flow cache entries. By default, packets communicated through the cloud computing environment in flows may be forwarded to the plurality of cloud native packet processing functions to be processed by the plurality of cloud native packet processing functions. The set of packet processing instructions may relate to a selected one of the flows and configuring the virtual switch based on the set of packet processing instructions may overrule the default to process packets of the selected one of the flows in the virtual switch without forwarding them to the cloud native packet processing function. Overruling the default may be performed at runtime of the virtual switch.
As has become apparent from the above, the present disclosure provides a technique for implementing packet processing in a cloud computing environment. In order to achieve increased network throughput, such as for network services provided in a telecommunication data center (e.g., implementing services of a 5G core network), and to overcome inefficiencies of segmented and modular network applications in cloud native computing environments, the presented technique may attempt to minimize the number of context switches a packet being communicated has to go through. This may be achieved by logically combining the sequence of network processing operations performed by a multitude of distributed packet processing functions into a single, optimized set of operations in the form of a consolidated flow pipeline. A forwarding intent received from multiple control plane entities may thus be combined into a common pipeline of flow entries, enabling enforcement of security and consistency policies, for example.
The common pipeline of flow entries may be centralized, regardless of the physical distribution of control plane entities, wherein the centralized forwarding plane may be prepared for offloading into hardware acceleration units, either in its entirety and for all received packets, or selectively for individual packet streams selectable by the control plane functions. The technique presented herein may avoid sending packets across a multitude of network hops and computing units in a processing cluster by centralizing the implementation of the optimized packet processing pipeline on a smaller number of computing units (or even a single one). Also, the packet processing pipeline may be transformed such that it may become suitable for offloading the packet processing operations into specialized forwarding hardware, such as SmartNICs. This may allow processing packets in a more efficient way compared to doing the same in general purpose processing hardware. The technique presented herein may enable a modular control plane split into a multitude of execution units that may be distributed over different computing nodes in a data center. A centralized data plane, on the other hand, may receive and consolidate forwarding intent articulated by a multitude of either application-specific or infrastructure-specific control plane entities. The presented technique may as such maintain modularity, scalability and ease of life-cycle management of cloud native applications, but at the same time enable high throughput network applications with efficient resource usage in cloud native environments. The technique may be compatible with mainstream container orchestration systems, like Kubernetes, as well as with emerging trends in container networking, such as NSM.
It is believed that the advantages of the technique presented herein will be fully understood from the foregoing description, and it will be apparent that various changes may be made in the form, constructions and arrangement of the exemplary aspects thereof without departing from the scope of the invention or without sacrificing all of its advantageous effects. Because the technique presented herein can be varied in many ways, it will be recognized that the invention should be limited only by the scope of the claims that follow.

Claims
1. A method for implementing packet processing in a cloud computing environment, the method being performed by a virtual switch configurator (410) and comprising: receiving (S302), from each of a plurality of cloud native packet processing functions (404), a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch (408) of the cloud computing environment; and configuring (S304) the virtual switch (408) based on the received sets of packet processing instructions to implement a packet processing pipeline (414) in the virtual switch (408) reflecting the received sets of packet processing instructions.
2. The method of claim 1, wherein the one or more rules correspond to flow rules and the packet processing pipeline (414) corresponds to a flow pipeline, optionally complying with an OpenFlow protocol.
3. The method of claim 1 or 2, wherein the plurality of cloud native packet processing functions (404) is distributed across multiple hosts in the cloud computing environment.
4. The method of any one of claims 1 to 3, wherein the packet processing pipeline comprises a plurality of individual pipeline portions (412) each representative of one of the sets of packet processing instructions.
5. The method of any one of claims 1 to 4, wherein the cloud native packet processing functions (404) are part of a service mesh of the cloud computing environment, wherein the packet processing pipeline (414) is implemented as an aggregate pipeline (414) reflecting a topology of the service mesh.
6. The method of claim 5, wherein the service mesh comprises the plurality of cloud native packet processing functions (404) and, optionally, one or more cloud native application functions (402) available in the cloud computing environment.
7. The method of claim 5 or 6 when dependent on claim 4, wherein the aggregate pipeline (414) is formed by linking the plurality of individual pipeline portions (412) in accordance with the topology of the service mesh.
8. The method of claim 7, wherein forming the aggregate pipeline (414) includes consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions (412) into common packet processing instructions of the packet processing pipeline (414).
9. The method of any one of claims 3 to 8, wherein each of the plurality of individual pipeline portions (412) is implemented in a sandbox in the virtual switch (408) to segregate execution of each of the sets of packet processing instructions.
10. The method of claim 9, wherein, for each sandbox, modifications to the set of packet processing instructions handled in the sandbox are restricted to be applied by the cloud native packet processing function (404) which provided the set of packet processing instructions.
11. The method of any one of claims 1 to 10, further comprising: translating port names received from the plurality of cloud native packet processing functions (404) into port names available at the virtual switch (408).
12. The method of any one of claims 1 to 11, wherein packet processing instructions reflected in the packet processing pipeline (414) are to be translated in the virtual switch (408) into packet processing operations suitable for offloading into a physical network interface (416) for direct processing of packets by the physical network interface (416).
13. The method of claim 12, wherein the packet processing operations suitable for offloading into the physical network interface (416) comprise one or more flow cache entries.
14. The method of any one of claims 1 to 13, wherein, by default, packets communicated through the cloud computing environment in flows are forwarded to the plurality of cloud native packet processing functions (404) to be processed by the plurality of cloud native packet processing functions (404), wherein the received sets of packet processing instructions relate to selected ones of the flows, and wherein configuring the virtual switch (408) based on the received sets of packet processing instructions overrules the default to process packets of the selected ones of the flows in the virtual switch (408) without forwarding them to one of the plurality of cloud native packet processing functions (404).
15. The method of claim 14, wherein overruling the default is performed at runtime of the virtual switch (408).
16. A method for implementing packet processing in a cloud computing environment, the method being performed by a virtual switch (408) of the cloud computing environment and comprising: applying (S702) a configuration to implement a packet processing pipeline (414) in the virtual switch (408) reflecting a plurality of sets of packet processing instructions each indicating one or more rules for processing of packets by the virtual switch (408), wherein each of the sets of packet processing instructions originates from one of a plurality of cloud native packet processing functions (404) and is provided to the virtual switch (408) from a virtual switch configurator (410).
17. The method of claim 16, wherein the one or more rules correspond to flow rules and the packet processing pipeline (414) corresponds to a flow pipeline, optionally complying with an OpenFlow protocol.
18. The method of claim 16 or 17, wherein the plurality of cloud native packet processing functions (404) is distributed across multiple hosts in the cloud computing environment.
19. The method of any one of claims 16 to 18, wherein the packet processing pipeline (414) comprises a plurality of individual pipeline portions (412) each representative of one of the sets of packet processing instructions.
20. The method of any one of claims 16 to 19, wherein the cloud native packet processing functions (404) are part of a service mesh of the cloud computing environment, wherein the packet processing pipeline (414) is implemented as an aggregate pipeline (414) reflecting a topology of the service mesh.
21. The method of claim 20, wherein the service mesh comprises the plurality of cloud native packet processing functions (404) and, optionally, one or more cloud native application functions (402) available in the cloud computing environment.
22. The method of claim 20 or 21 when dependent on claim 19, wherein the aggregate pipeline (414) is formed by linking the plurality of individual pipeline portions (412) in accordance with the topology of the service mesh.
23. The method of claim 22, wherein forming the aggregate pipeline (414) includes consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions (412) into common packet processing instructions of the packet processing pipeline (414).
24. The method of any one of claims 18 to 23, wherein each of the plurality of individual pipeline portions (412) is implemented in a sandbox in the virtual switch (408) to segregate execution of each of the sets of packet processing instructions.
25. The method of claim 24, wherein, for each sandbox, modifications to the set of packet processing instructions handled in the sandbox are restricted to be originated by the cloud native packet processing function (404) which provided the set of packet processing instructions.
26. The method of any one of claims 16 to 25, further comprising: translating packet processing instructions reflected in the packet processing pipeline (414) into packet processing operations suitable for offloading into a physical network interface (416) for direct processing of packets by the physical network interface (416); and offloading the translated packet processing instructions into the physical network interface (416).
27. The method of claim 26, wherein the packet processing operations suitable for offloading into the physical network interface (416) comprise one or more flow cache entries.
28. The method of any one of claims 16 to 27, wherein, by default, packets communicated through the cloud computing environment in flows are forwarded to the plurality of cloud native packet processing functions (404) to be processed by the plurality of cloud native packet processing functions (404), wherein the sets of packet processing instructions relate to selected ones of the flows, and wherein applying the configuration to reflect the sets of packet processing instructions overrules the default to process packets of the selected ones of the flows in the virtual switch (408) without forwarding them to one of the plurality of cloud native packet processing functions (404).
29. The method of claim 28, wherein overruling the default is performed at runtime of the virtual switch (408).
30. A method for implementing packet processing in a cloud computing environment, the method being performed by a cloud native packet processing function (404) and comprising: sending, to a virtual switch configurator (410), a set of packet processing instructions indicating one or more rules for processing of packets by a virtual switch (408) of the cloud computing environment, the virtual switch configurator (410) to configure the virtual switch (408) based on the set of packet processing instructions to implement a packet processing pipeline (414) in the virtual switch (408) reflecting the set of packet processing instructions.
31. The method of claim 30, wherein the one or more rules correspond to flow rules and the packet processing pipeline (414) corresponds to a flow pipeline, optionally complying with an OpenFlow protocol.
32. The method of claim 30 or 31, wherein the plurality of cloud native packet processing functions (404) is distributed across multiple hosts in the cloud computing environment.
33. The method of any one of claims 30 to 32, wherein the packet processing pipeline (414) comprises a plurality of individual pipeline portions (412) each representative of one of the sets of packet processing instructions.
34. The method of any one of claims 30 to 33, wherein the cloud native packet processing function (404) is one of a plurality of cloud native packet processing functions (404) which are part of a service mesh of the cloud computing environment, wherein the packet processing pipeline (414) is implemented as an aggregate pipeline (414) reflecting a topology of the service mesh.
35. The method of claim 34, wherein the service mesh comprises the plurality of cloud native packet processing functions (404) and, optionally, one or more cloud native application functions (402) available in the cloud computing environment.
36. The method of claim 34 or 35 when dependent on claim 33, wherein the aggregate pipeline (414) is formed by linking the plurality of individual pipeline portions (412) in accordance with the topology of the service mesh.
37. The method of claim 36, wherein forming the aggregate pipeline (414) includes consolidating at least some packet processing instructions of different ones of the plurality of individual pipeline portions (412) into common packet processing instructions of the packet processing pipeline (414).
38. The method of any one of claims 32 to 37, wherein each of the plurality of individual pipeline portions (412) is to be implemented in a sandbox in the virtual switch (408) to segregate execution of each of the sets of packet processing instructions.
39. The method of claim 38, wherein, for each sandbox, modifications to the set of packet processing instructions handled in the sandbox are restricted to be applied by the cloud native packet processing function (404) which provided the set of packet processing instructions.
40. The method of any one of claims 30 to 39, wherein packet processing instructions reflected in the packet processing pipeline (414) are to be translated in the virtual switch (408) into packet processing operations suitable for offloading into a physical network interface (416) for direct processing of packets by the physical network interface (416).
41. The method of claim 40, wherein the packet processing operations suitable for offloading into the physical network interface (416) comprise one or more flow cache entries.
42. The method of any one of claims 30 to 41, wherein, by default, packets communicated through the cloud computing environment in flows are forwarded to the plurality of cloud native packet processing functions (404) to be processed by the plurality of cloud native packet processing functions (404), wherein the set of packet processing instructions relates to a selected one of the flows, and wherein configuring the virtual switch (408) based on the set of packet processing instructions overrules the default to process packets of the selected one of the flows in the virtual switch (408) without forwarding them to the cloud native packet processing function (404).
43. The method of claim 42, wherein overruling the default is performed at runtime of the virtual switch (408).
44. A computer program product comprising program code portions for performing the method of any one of claims 1 to 43 when the computer program product is executed on one or more computing devices.
45. The computer program product of claim 44, stored on a computer readable recording medium.
46. A computing unit (200) configured to execute a virtual switch configurator (410) for implementing packet processing in a cloud computing environment, the computing unit (200) comprising at least one processor (202) and at least one memory (204), the at least one memory (204) containing instructions executable by the at least one processor (202) such that the virtual switch configurator (410) is operable to perform the method of any one of claims 1 to 15.
47. A computing unit (210) configured to execute a virtual switch (408) for implementing packet processing in a cloud computing environment, the computing unit (210) comprising at least one processor (212) and at least one memory (214), the at least one memory (214) containing instructions executable by the at least one processor (212) such that the virtual switch (408) is operable to perform the method of any one of claims 16 to 29.
48. A computing unit (220) configured to execute a cloud native packet processing function (404) for implementing packet processing in a cloud computing environment, the computing unit (220) comprising at least one processor (222) and at least one memory (224), the at least one memory (224) containing instructions executable by the at least one processor (222) such that the cloud native packet processing function (404) is operable to perform the method of any one of claims 30 to 43.
49. A system comprising a computing unit (200) according to claim 46 and at least one of a computing unit (210) according to claim 47 and a computing unit (220) according to claim 48.
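As a non-normative illustration of the configurator role claimed above (receiving per-function rule sets, sandboxing each set as an individual pipeline portion, linking the portions into an aggregate pipeline in mesh order, and translating port names, cf. claims 1, 7, 9, 10 and 11), here is a minimal Python sketch. All names used below (`VirtualSwitchConfigurator`, the `goto_next` placeholder action, the token-based ownership check, the port map) are invented for illustration only and are not defined by the application:

```python
class SandboxViolation(Exception):
    """Raised when a function other than the sandbox owner
    attempts to modify that sandbox's rule set (cf. claim 10)."""


class VirtualSwitchConfigurator:
    # Hypothetical layout choice: each individual pipeline portion
    # (sandbox) owns a fixed-size range of flow-table IDs.
    TABLES_PER_PORTION = 10

    def __init__(self, port_map):
        self.port_map = dict(port_map)  # logical name -> switch port name
        self.owners = {}                # function id -> owner token
        self.portions = {}              # function id -> list of (match, actions)
        self.order = []                 # service-mesh chain order

    def receive_rules(self, function_id, token, rules):
        """Accept a set of packet processing instructions from one
        cloud native packet processing function (claim 1, S302)."""
        owner = self.owners.setdefault(function_id, token)
        if owner != token:
            # Only the providing function may modify its sandbox.
            raise SandboxViolation(function_id)
        if function_id not in self.portions:
            self.order.append(function_id)
        self.portions[function_id] = list(rules)

    def _translate(self, name):
        # Port-name translation (cf. claim 11); unknown names pass through.
        return self.port_map.get(name, name)

    def build_pipeline(self):
        """Link the individual portions into one aggregate pipeline
        (claims 7, 8) by resolving the 'goto_next' placeholder to the
        next portion's base table and translating port names."""
        pipeline = []
        for i, fid in enumerate(self.order):
            base = i * self.TABLES_PER_PORTION
            next_base = (i + 1) * self.TABLES_PER_PORTION
            for match, actions in self.portions[fid]:
                match = {k: (self._translate(v) if k == "in_port" else v)
                         for k, v in match.items()}
                resolved = []
                for a in actions:
                    if a == "goto_next":
                        resolved.append(f"goto_table:{next_base}")
                    elif a.startswith("output:"):
                        resolved.append(
                            "output:" + self._translate(a.split(":", 1)[1]))
                    else:
                        resolved.append(a)
                pipeline.append((base, match, resolved))
        return pipeline
```

In this sketch, two functions each submit one rule; the configurator places them in separate table ranges and chains them, which mirrors how the claimed aggregate pipeline (414) could reflect a two-hop service-mesh topology. A real implementation would instead emit OpenFlow messages toward the virtual switch (408).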

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/086141 WO2022128068A1 (en) 2020-12-15 2020-12-15 Technique for implementing packet processing in a cloud computing environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2020/086141 WO2022128068A1 (en) 2020-12-15 2020-12-15 Technique for implementing packet processing in a cloud computing environment

Publications (1)

Publication Number Publication Date
WO2022128068A1 true WO2022128068A1 (en) 2022-06-23

Family

ID=73839041

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/086141 WO2022128068A1 (en) 2020-12-15 2020-12-15 Technique for implementing packet processing in a cloud computing environment

Country Status (1)

Country Link
WO (1) WO2022128068A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3140964A1 (en) * 2014-05-05 2017-03-15 Telefonaktiebolaget LM Ericsson (publ) Implementing a 3g packet core in a cloud computer with openflow data and control planes
US20170180273A1 (en) * 2015-12-22 2017-06-22 Daniel Daly Accelerated network packet processing
WO2020019159A1 (en) * 2018-07-24 2020-01-30 Nokia Shanghai Bell Co., Ltd. Method, device and computer readable medium for delivering data-plane packets by using separate transport service vnfc

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "5G and the Cloud - A 5G Americas White Paper", 31 December 2019 (2019-12-31), pages 1 - 53, XP055844938, Retrieved from the Internet <URL:https://www.5gamericas.org/wp-content/uploads/2019/12/5G-Americas_5G-and-the-Cloud..pdf> [retrieved on 20210927] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11558254B1 (en) * 2022-06-23 2023-01-17 Kong Inc. Configuration hash comparison
US11792077B1 (en) 2022-06-23 2023-10-17 Kong Inc. Configuration hash comparison
US20230421442A1 (en) * 2022-06-23 2023-12-28 Kong Inc. Configuration hash comparison
US11996982B2 (en) 2022-06-23 2024-05-28 Kong Inc. Configuration hash comparison

Similar Documents

Publication Publication Date Title
CN113454971B (en) Service acceleration based on remote intelligent NIC
US10680948B2 (en) Hybrid packet processing
JP7281531B2 (en) Multi-cloud connectivity using SRv6 and BGP
US10862732B2 (en) Enhanced network virtualization using metadata in encapsulation header
CN110313163B (en) Load balancing in distributed computing systems
KR101969194B1 (en) Offloading packet processing for networking device virtualization
US10630710B2 (en) Systems and methods of stateless processing in a fault-tolerant microservice environment
US7804785B2 (en) Network system having an instructional sequence for performing packet processing and optimizing the packet processing
CN110838992B (en) System and method for transferring packets between kernel modules in different network stacks
CN111865806B (en) Prefix-based fat flows
Van Tu et al. Accelerating virtual network functions with fast-slow path architecture using express data path
JP2023543831A (en) Microservices-based service mesh system and service-oriented architecture management method
US20220166715A1 (en) Communication system and communication method
Katsikas et al. Metron: High-performance NFV service chaining even in the presence of blackboxes
Shiomoto Research challenges for network function virtualization-re-architecting middlebox for high performance and efficient, elastic and resilient platform to create new services
Moro et al. A framework for network function decomposition and deployment
WO2022128068A1 (en) Technique for implementing packet processing in a cloud computing environment
Nandugudi et al. Network function virtualization: through the looking-glass
Perino et al. A programmable data plane for heterogeneous NFV platforms
Ma et al. P4SFC: Service function chain offloading with programmable switches
KR101729945B1 (en) Method for supporting multi tunant by network system based on sdn
Keller et al. Reconfigurable nodes for future networks
Ruf et al. A scalable high-performance router platform supporting dynamic service extensibility on network and host processors
JP7381196B2 (en) Computer equipment and its operating method, computer program, recording medium, and cloud network system
Parola et al. Creating disaggregated network services with eBPF: The kubernetes network provider use case

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20824924

Country of ref document: EP

Kind code of ref document: A1