WO2018220426A1 - Method and system for packet processing of a distributed virtual network function (VNF) - Google Patents

Method and system for packet processing of a distributed virtual network function (VNF)

Info

Publication number: WO2018220426A1
Authority: WO, WIPO (PCT)
Prior art keywords: processing unit, packet, processing, VNF, network
Application number: PCT/IB2017/053217
Other languages: French (fr)
Inventors: Mark Hlady, Christopher LEDUC, Hani ELMALKY
Original Assignee: Telefonaktiebolaget Lm Ericsson (Publ)
Priority date: (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by: Telefonaktiebolaget Lm Ericsson (Publ)
Priority to: PCT/IB2017/053217
Publication of: WO2018220426A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/302: Route determination based on requested QoS
    • H04L45/306: Route determination based on the nature of the carried application
    • H04L45/12: Shortest path evaluation
    • H04L45/645: Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • H04L45/76: Routing in software-defined topologies, e.g. routing between virtual machines

Definitions

  • Embodiments of the invention relate to the field of networking and, more specifically, to a method and system for packet processing of a virtual network function.
  • a VNF may be implemented at various parts of a network, such as at a serving gateway (S-GW), a packet data network gateway (P-GW), a serving GPRS (general packet radio service) support node (SGSN), a gateway GPRS support node (GGSN), a broadband remote access server (BRAS), and a provider edge (PE) router.
  • VNFs can also be implemented to support various services (also called appliances, middleboxes, or service functions), such as content filter, deep packet inspection (DPI), logging/metering/charging/advanced charging, firewall (FW), virus scanning (VS), intrusion detection and prevention (IDP), network address translation (NAT), and so forth.
  • a VNF may be implemented over a plurality of processing units, and such VNF may be referred to as a distributed VNF.
  • efficient load balancing among the plurality of processing units is advantageous.
  • Embodiments of the invention include methods of packet processing in a distributed virtual network function (VNF).
  • a packet processing rule is installed for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit.
  • the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow.
  • a packet in the traffic flow is processed by the first processing unit based on the packet processing rule, and the packet is sent out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
  • Embodiments of the invention include electronic devices for packet processing in a distributed virtual network function (VNF).
  • An electronic device includes a non-transitory machine-readable storage medium to store instructions and a processor coupled to the non-transitory machine-readable storage medium to process the stored instructions to perform operations.
  • the operations include installing a packet processing rule for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit.
  • the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow.
  • the operations include processing a packet in the traffic flow by the first processing unit based on the packet processing rule, and sending the packet out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
  • Embodiments of the invention include non-transitory machine-readable storage media for packet processing in a distributed virtual network function (VNF).
  • A machine-readable storage medium provides instructions which, when executed by a processor of an electronic device, cause the processor to perform operations.
  • the operations include installing a packet processing rule for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit.
  • the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow.
  • the operations include processing a packet in the traffic flow by the first processing unit based on the packet processing rule, and sending the packet out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
  • Embodiments of the invention offer efficient ways to process packets among a plurality of processing units of a virtual network function (VNF).
  • FIG. 1A illustrates a network using virtual network function (VNF) per one embodiment of the invention.
  • Figure 1B illustrates a distributed VNF in a network per one embodiment of the invention.
  • Figure 2A illustrates load-balancing between two processing units within a distributed VNF per one embodiment of the invention.
  • Figure 2B illustrates a distributed VNF favoring local packet processing per one embodiment of the invention.
  • Figure 3 illustrates routing weights/costs for a distributed VNF per one embodiment of the invention.
  • Figure 4 illustrates LAG link weights/costs for a distributed VNF per one embodiment of the invention.
  • Figure 5 is a flow diagram illustrating weighted load-balancing among a plurality of processing units per one embodiment of the invention.
  • Figure 6A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, per some embodiments of the invention.
  • Figure 6B illustrates an exemplary way to implement a special-purpose network device per some embodiments of the invention.
  • FIG. 6C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled per some embodiments of the invention.
  • Figure 6D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), per some embodiments of the invention.
  • Figure 6E illustrates the simple case where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), per some embodiments of the invention.
  • Figure 6F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, per some embodiments of the invention.
  • Figure 7 illustrates a general-purpose control plane device with centralized control plane (CCP) software, per some embodiments of the invention.
  • Bracketed text and blocks with dashed borders may be used to illustrate optional operations that add additional features to the embodiments of the invention. Such notation, however, should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in some embodiments of the invention.
  • References in the specification to "one embodiment," "an embodiment," "an example embodiment," and so forth, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • Coupled is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other.
  • Connected is used to indicate the establishment of communication between two or more elements that are coupled with each other.
  • An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals).
  • An electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (of which a processor may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field-programmable gate array, other electronic circuitry, or a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data.
  • an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed).
  • Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices.
  • the set of physical NIs may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection.
  • a physical NI may comprise radio circuitry capable of (1) receiving data from other electronic devices over a wireless connection and/or (2) sending data out to other devices through a wireless connection.
  • This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication.
  • the radio circuitry may convert digital data into a radio signal having the proper parameters (e.g., frequency, timing, channel, bandwidth, and so forth).
  • the radio signal may then be transmitted through antennas to the appropriate recipient(s).
  • the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter.
  • The NIC(s) may facilitate connecting the electronic device to other electronic devices, allowing them to communicate over a wired connection by plugging a cable into a physical port connected to a NIC.
  • One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • a network device is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices).
  • Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).
  • a network device may implement a set of network elements in some embodiments; and in alternative embodiments, a single network element may be implemented by a set of network devices.
  • packets may be forwarded within traffic flows (or simply referred to as flows), and a network element may forward the flows based on the network element's forwarding tables (e.g., routing tables or switch tables), which may be managed by one or more controllers.
  • the controllers may be referred to as network controllers or SDN controllers (the two terms are used interchangeably in the specification).
  • a flow may be defined as a set of packets whose headers match a given pattern of bits.
  • A flow may be identified by a set of attributes embedded in one or more packets of the flow.
  • An exemplary set of attributes includes one or more values in a 5-tuple (source and destination IP addresses, a protocol type, source and destination TCP/UDP ports).
  • Another exemplary set of attributes, alternatively or additionally, includes Open Systems Interconnection (OSI) Layer 2 frame header information such as source/destination media access control (MAC) addresses and a virtual local area network (VLAN) tag (e.g., an IEEE 802.1Q tag).
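  • As an editorial illustration only (not part of the disclosure), the following Python sketch shows how a flow might be identified by a 5-tuple key extracted from a parsed packet header; the FlowKey class, the extract_flow_key helper, and the assumed dict layout are hypothetical.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class FlowKey:
          """Hypothetical 5-tuple flow identifier (illustrative only)."""
          src_ip: str
          dst_ip: str
          protocol: int   # e.g., 6 = TCP, 17 = UDP
          src_port: int
          dst_port: int

      def extract_flow_key(packet: dict) -> FlowKey:
          """Build a FlowKey from a parsed packet header (assumed dict layout)."""
          return FlowKey(
              src_ip=packet["src_ip"],
              dst_ip=packet["dst_ip"],
              protocol=packet["protocol"],
              src_port=packet.get("src_port", 0),
              dst_port=packet.get("dst_port", 0),
          )

      # Two packets with the same 5-tuple belong to the same flow.
      p1 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "protocol": 6,
            "src_port": 12345, "dst_port": 80}
      assert extract_flow_key(p1) == extract_flow_key(dict(p1))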
  • FIG. 1A illustrates a network using virtual network functions (VNFs) per one embodiment of the invention.
  • a network 100 includes a controller 120 managing a plurality of network elements, including network elements 140 and 146.
  • The network 100 may implement an SDN architecture, in which case the controller 120 may be an SDN controller, and the network elements 140 and 146 may be implemented as OpenFlow switches when they comply with the OpenFlow standards proposed by the Open Networking Foundation.
  • Each of the controller 120, and the network elements 140 and 146 is implemented in one or more network devices in one embodiment.
  • the network elements 140 and 146 may communicate through a network cloud 190, which may contain traditional network elements such as routers/switches or other SDN network elements.
  • the network elements 140 and 146, and the traditional network elements such as routers/switches or other SDN network elements in the network cloud 190, may host virtual network functions (VNFs) such as VNF1 142 and VNF2 144.
  • VNFs process subscribers' traffic by providing services such as content filter, deep packet inspection (DPI), and so forth.
  • Each of these VNFs may be associated with one or more network elements (e.g., residing in or being coupled to the network elements) in the network cloud 190. Also, network elements 140 and 146 may host one or more of these or other VNFs.
  • incoming packets may be forwarded based on the header fields of packets or the header fields of frames that encapsulate the packets.
  • the packets may be sent to one or more VNFs associated with a first network element. After the one or more VNFs associated with the first network element process the packets, the first network element forwards the packets to a second network element for later VNFs in the service chain until the packets are processed by all VNFs of the service chain.
  • Processing packets through such a chain of VNFs may be referred to as service function chaining (SFC).
  • a network element may instantiate a VNF with the goal of optimizing resource allocation in the network 100 for the service chain.
  • a network such as the network 100 may allocate VNFs to various network elements such as SDN network elements and traditional network elements.
  • Packet forwarding along a service chain in a network may be performed at OSI Layer 2 or Layer 3.
  • Packet forwarding in Layer 3 may be based on equal-cost multi-path (ECMP) routing.
  • In ECMP routing, a packet may be forwarded from a source VNF to a destination VNF over one of multiple equal-cost paths. The ECMP routing may be determined based on the network elements that are associated with the source VNF and the destination VNF.
  • the ECMP routing implemented per one embodiment of the invention may comply with one or more of Internet Engineering Task Force (IETF) Request for Comments (RFC) 2991, entitled “Multipath Issues in Unicast and Multicast Next-Hop Selection," RFC 2992, entitled “Analysis of an Equal-Cost Multi-Path Algorithm,” and other industrial standards.
  • Each hop in the ECMP routing may be a network element that is associated with a VNF. Routing of a service chain of VNF1 - VNF2 - VNF3, for example, becomes ECMP routing between VNF1 and VNF2, and between VNF2 and VNF3.
  • The network elements hosting VNF1, VNF2, and VNF3 are three hops in the ECMP routing, and each VNF represents one hop.
  • Packet forwarding in Layer 2 may be based on link aggregation group (LAG).
  • a LAG aggregates multiple links to increase throughput beyond what a single link could support, and a LAG may be set up between a source VNF and a destination VNF where the LAG is aggregated from multiple links between the network element hosting the source VNF and the network element hosting the destination VNF.
  • Packets are encapsulated in Layer 2 frames (e.g., Ethernet frames) for forwarding; thus, packet forwarding in Layer 2 may also be referred to as frame forwarding.
  • a LAG may also apply at OSI Layer 3, and packet forwarding in a Layer 3 LAG may be based on Internet Protocol (IP) addresses of links.
  • The LAG forwarding implemented per one embodiment of the invention may comply with one or more of the Institute of Electrical and Electronics Engineers (IEEE) 802.1AX, entitled "IEEE Standard for Local and Metropolitan Area Networks - Link Aggregation," IEEE 802.1aq, entitled "IEEE Standard for Local and Metropolitan Area Networks - Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks," and other industrial standards.
  • Each hop in the LAG forwarding may be a network element that is associated with a VNF. Packet forwarding of a service chain of VNF1 - VNF2 - VNF3 becomes LAG forwarding between VNF1 and VNF2, and between VNF2 and VNF3.
  • The network elements hosting VNF1, VNF2, and VNF3 are three hops in the LAG forwarding, and each VNF represents one hop.
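  • Purely as a hedged sketch (not taken from the disclosure), the snippet below illustrates how a LAG might spread frames across its aggregated member links by hashing the frame header, so that frames of one flow stay on one link while different flows use different links; the select_lag_link helper and link names are assumptions.

      import hashlib

      def select_lag_link(frame_header: bytes, member_links: list) -> str:
          """Pick one member link of a LAG by hashing the frame header.

          Frames with the same header hash to the same link (preserving
          per-flow ordering), while different flows spread across links."""
          digest = hashlib.sha256(frame_header).digest()
          index = int.from_bytes(digest[:4], "big") % len(member_links)
          return member_links[index]

      links = ["link-1", "link-2", "link-3"]
      print(select_lag_link(b"\x00\x11\x22\x33\x44\x55" * 2, links))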
  • a VNF may be implemented over a plurality of processing units, and such VNF may be referred to as a distributed VNF.
  • Each VNF representing one hop in packet forwarding has its limitations.
  • Figure 1B illustrates a zoom-in view of the VNF1 of Figure 1A per one embodiment of the invention.
  • The VNF1 142 is implemented over the processing units 160 and 170.
  • Each of the processing units 160 and 170 may be a processor, a processor core, a virtual machine (VM), a software container (or simply container), or a unikernel.
  • Virtual machines (VMs), software containers, and unikernels are explained in more detail in relation to the general-purpose network device 604 in Figure 6A.
  • Each processing unit is implemented in a network device, and the VNF1 142 is implemented through multiple network devices that are included in a virtual network element such as the VNE 670T discussed in Figure 6F. While the VNF1 142 contains only two processing units, a distributed VNF may contain more than two processing units, and embodiments of the invention are not limited to a particular number of processing units implementing a distributed VNF.
  • Each processing unit receives a packet through one or more ingress points, and after completion of the processing at the processing unit, the packet is forwarded to one or more egress points.
  • the packet is forwarded through one or more switch points.
  • the processing units 160 and 170 include ingress points 164 and 174, egress points 162 and 172, and switch points 166 and 176, respectively. While these ingress points, egress points, and switch points are shown within the processing units 160 and 170, one or more of these ingress points, egress points, and switch points may be outside the processing units 160 and 170.
  • the ingress points, egress points, and switch points have different embodiments in different scenarios.
  • the ingress points, egress points, and switch points may be referred to as the ingress ports, egress ports, and fabric ports, respectively; and in a Layer 3 network, the ingress points, egress points, and switch points may be referred to as the ingress interfaces (IFs), egress interfaces, and fabric interfaces, respectively.
  • The ingress points, egress points, and switch points may be abstractions of physical components of a network device; thus, these points may be virtual and referred to as, for example, virtual ports and virtual interfaces.
  • each of the ingress ports, egress ports, and fabric ports may be implemented using a physical port of a network interface card (NIC) of a network device implementing the network element hosting the VNF1 142.
  • One or more links connect the switch points 166 and 176.
  • the one or more links between the switch points 166 and 176 consume interconnect bandwidth between the processing units 160 and 170.
  • the switch points 166 and 176 may be implemented using physical ports of a network device as discussed above.
  • The switch points 166 and 176, while allowing the processing units 160 and 170 to work together to process packets of a traffic flow, consume resources of one or more network devices implementing the network element hosting the VNF1 142.
  • When the processing units 160 and 170 finish processing packets, the packets are forwarded to a gateway 180 in one embodiment.
  • the gateway may be a separate entity such as a processor, a processor core, a VM, a container, a unikernel, or a network device.
  • the gateway is to forward the packets to the next destination, which may be the next VNF in a service chain.
  • the gateway is referred to as a default gateway (e.g., a router/switch).
  • the default gateway may accept packets from one VNF and distribute the packets to the next VNF.
  • the VNF1 does not send packets to the gateway, and the packets are forwarded directly from the egress points 162 and 172 to the next destination.
  • The controller 120 may coordinate packet forwarding of the VNF1 142, and the controller 120 may be implemented in a network device having one or more control processing units, where each control processing unit may be one of a processor, a processor core, a VM, a container, a unikernel, and a network device.
  • A VNF path/link computation element (PLCE) 124 may coordinate the packet forwarding of the VNF1 142.
  • the PLCE 124 may be implemented within or coupled to the controller 120.
  • The controller 120 communicates with the VNF1 142 through one or more communication links, and the communication is through a control plane between the controller 120 and the VNF1 142, in contrast to a forwarding plane (also referred to as a data plane) including the VNF1 142 and one or more other VNFs, where packets are forwarded between the VNFs to be processed.
  • Figure 6D provides more details regarding the control plane and the forwarding/data plane.
  • Representing a distributed VNF such as the VNF1 142 as one hop in packet forwarding has its limitations.
  • the single hop representation of a distributed VNF masks the distributed packet processing in multiple processing units.
  • the difference between packets being forwarded through the egress points 162 and 172 is masked, since either egress point may be used to exit the distributed VNFl 142.
  • Such abstraction may not sufficiently reflect the impact of load-balancing within a distributed VNF.
  • Figure 2A illustrates load-balancing between two processing units within a distributed VNF per one embodiment of the invention.
  • Figure 2A is similar to Figure IB, and the same references indicate elements or components having the same or similar functionalities.
  • Figure 2A illustrates execution units 168 and 178 in the processing units 160 and 170 respectively.
  • the execution units 168 and 178 may process packets (e.g., modifying packet headers/bodies, encapsulating the packets, duplicating the packets, etc.) before forwarding the packets to the egress points.
  • Each of the execution units may be implemented in one or more circuits.
  • packet processing may be load-balanced among the multiple processing units, so that packet processing workload may be distributed more evenly across the multiple processing units.
  • the load-balancing may be based on the packet header (or frame header).
  • a cryptographic hash function may be applied to the packet headers of the incoming packets, so that the incoming packets may be distributed among the multiple processing units.
  • the highlighted path in Figure 2A illustrates a traffic flow that is sourced from the processing unit 160 and load balanced through the processing unit 170.
  • roughly half of the traffic flow is processed through the execution unit 168 of the processing unit 160, and the other half through the execution unit 178 of the processing unit 170.
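  • To make the roughly even split concrete, here is a minimal, hypothetical sketch of header-hash load-balancing across the processing units of a distributed VNF (hashlib is used only for illustration; the pick_processing_unit name is an assumption, and real implementations may hash different header fields).

      import hashlib

      def pick_processing_unit(header_bytes: bytes, num_units: int) -> int:
          """Evenly distribute flows by hashing packet/frame header bytes."""
          digest = hashlib.sha256(header_bytes).digest()
          return int.from_bytes(digest[:8], "big") % num_units

      # With 2 processing units, roughly half of the flows map to each unit.
      counts = [0, 0]
      for i in range(10000):
          counts[pick_processing_unit(f"flow-{i}".encode(), 2)] += 1
      print(counts)  # e.g., [5008, 4992] -- approximately a 50/50 split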
  • the load balancing based execution within the distributed VNF1 142 appears to be efficient when the distributed VNF1 142 is considered as a single hop in packet forwarding of a traffic flow. From outside of the VNF1 142, processing of the packets through the execution units 168 or 178 appears to be the same, regardless of packets being received at the processing units 160 or 170 for the VNF1 142. Yet for packets received at the processing unit 160, the load-balancing of packets may cause the packets to be forwarded to another processing unit (such as the processing unit 170) different from the one at which the packets are received.
  • the local processing unit of a distributed VNF for a traffic flow or packets of the traffic flow is the processing unit through which the traffic flow or the packets gain access to the distributed VNF.
  • The local processing unit may be the very first processing unit that the traffic flow or its packets encounter among the multiple processing units of the distributed VNF.
  • The distributed packet processing, which uses the remote processing unit, the switch points, and interconnect bandwidth of the connecting link, consumes more resources compared to local packet processing, which uses only the local processing unit.
  • Local packet processing (packets being processed by the local processing unit) causes packets to be forwarded through the ingress point 164, the execution unit 168, the egress point 162, and the gateway 180; and distributed packet processing (packets being processed by a remote processing unit), as highlighted, through the ingress point 164, the switch points 166 and 176 through a connecting link, the execution unit 178, the egress point 172, and the gateway 180. While both the local and distributed packet processing use ingress points, execution units, and egress points, the latter also uses switch points and interconnect bandwidth of the connecting link.
  • a subscriber (also referred to as a tenant or a customer) of the network 100 may use a distributed VNF such as the VNF1 142 in the subscriber's service chain.
  • the subscriber understands that packets are transmitted through ingress points such as the ingress points 164 and 174, thus the subscriber pays for the bandwidth through the ingress points.
  • The load-balancing means that the distributed VNF also uses bandwidth through switch points such as the switch points 166 and 176.
  • Each of the switch points consumes physical resources such as a physical network interface (NI) of a network device discussed above.
  • A distributed VNF with N processing units may implement even distribution of workload through load-balancing, which causes only 1/N of the workload to be processed by the local processing unit, while (N-1)/N of the workload is processed by the (N-1) remote processing units. Each switch point to a remote processing unit occupies physical resources, including not only physical network interfaces (NIs) but also the bandwidth associated with those physical network interfaces. The value proposition of using the distributed VNF therefore deteriorates due to resource consumption related to switch points.
  • the load-balancing through switch points to remote processing units may involve a longer path as the highlighted path in Figure 2A shows, and the longer path may impact the quality of service (QoS) of the packets being processed.
  • the longer path may cause additional packet delay, packet jitter, and/or more chances of packet drop.
  • load-balancing among processing units of the distributed VNF may incur additional costs in physical resource consumption and/or impact on QoS of the packets being processed, and such additional costs may make a distributed VNF less efficient.
  • a packet forwarding policy may favor, prefer, or prioritize (e.g., weight in favor) a local processing unit over a remote processing unit when processing packets of a traffic flow.
  • the packet forwarding policy may favor a particular remote processing unit over both the local processing unit and other remote processing unit(s) (e.g., the particular remote processing unit has execution resources that the local and other remote processing units do not).
  • Figure 2B illustrates a distributed VNF favoring local packet processing per one embodiment of the invention.
  • Figure 2B is similar to Figure 2A, and the same references indicate elements or components having the same or similar functionalities.
  • the packet forwarding policy favors a local processing unit, the processing unit 160 in this case.
  • the weighted execution favors the local processing unit, and packet processing more likely takes the highlighted path in Figure 2B.
  • the weighted execution is in contrast to evenly distributed load balancing between the multiple processing units of a distributed VNF, and the local processing unit may carry more weight (or less cost), so that between a local processing unit and a remote processing unit, packet processing of a traffic flow in the distributed VNF1 142 is more likely to take the highlighted path in Figure 2B than the highlighted path in Figure 2A.
  • the weighted execution of a distributed VNF may consider a remote processing unit as an internal hop, thus the distributed VNF is no longer viewed as a single hop in packet forwarding. The difference between packets being forwarded through the egress points 162 and 172 may be expressed in different routing weights/costs.
  • the weighted execution of the distributed VNF better reflects the costs of distributed packet processing.
  • A distributed VNF with N processing units may have N-1 internal hops (when one traffic flow is to be received at the first processing unit and processed by the last processing unit in the VNF, and each processing unit has only one or two switch points), and the weighted execution may indicate the cost of load-balancing across all N processing units; such weighted execution reflects the cost of the 2*(N-1) switch points passing the workload.
  • Since the weighted execution of the distributed VNF reflects the additional costs of remote packet processing in physical resource consumption and/or impact on the QoS of the packets being processed, the weighted execution may utilize the network resources consumed by a distributed VNF more efficiently than evenly distributed load-balancing among the multiple processing units of the distributed VNF.
  • the weighted execution of the distributed VNF may be advantageous when a particular remote processing unit is preferable over the local processing unit and other remote processing unit(s). For example, when the particular remote processing unit has execution resources that the local and other remote processing units do not (or more of the execution resources).
  • The execution resources may be available bandwidth/time slots or a particular execution capability (e.g., supporting out-of-order execution). In that case, a weighted execution will more likely forward the packets to the particular execution unit. The additional cost of such forwarding will be only one internal hop to the particular execution unit (instead of the evenly distributed execution case, where packets are also forwarded to other processing units, which then forward them to the particular execution unit).
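  • The following is a small illustrative sketch (assumed names and weights, not the claimed method) of weighted selection among processing units: a higher weight makes a unit more likely to complete a flow, so the same mechanism can favor the local unit or a particular remote unit that has extra execution resources.

      import random

      def choose_unit(weights: dict) -> str:
          """Weighted choice among processing units; higher weight => more flows."""
          units = list(weights)
          return random.choices(units, weights=[weights[u] for u in units], k=1)[0]

      # Hypothetical weights favoring the local processing unit ...
      local_favored = {"local": 8, "remote-170": 1, "remote-190": 1}
      # ... or favoring a particular remote unit with extra execution resources.
      remote_favored = {"local": 1, "remote-170": 8, "remote-190": 1}

      picks = [choose_unit(local_favored) for _ in range(10000)]
      print(picks.count("local") / len(picks))  # roughly 0.8: most flows stay local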
  • the weighted execution of a distributed VNF may be implemented in a Layer 3 network using ECMP routing.
  • A controller such as the controller 120 may run an interior gateway protocol (IGP) and/or an exterior gateway protocol (EGP) to compute ECMP routing for the processing units within a distributed VNF, now that the distributed VNF may be viewed as containing multiple hops.
  • the computation of the ECMP routing may use routing tables.
  • the routing tables may be computed by the controller and stored in the controller; the routing tables may be computed by the controller and installed in processing units of a distributed VNF such as the processing units 160 and 170; or the routing tables may be computed by the processing units themselves and installed in the processing units.
  • the embodiments of the invention cover all different ways the routing tables are computed and installed.
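  • As one hedged possibility (class names, table layout, and weights below are all assumptions), a controller could compute a weighted next-hop table per processing unit and install it on that unit, with the local egress carrying a higher weight than the fabric interfaces toward remote units:

      class ProcessingUnit:
          def __init__(self, name: str):
              self.name = name
              self.routing_table = {}   # prefix -> list of (next_hop, weight)

          def install_routes(self, table: dict) -> None:
              self.routing_table = table

      class Controller:
          def compute_weighted_routes(self, unit: ProcessingUnit,
                                      fabric_neighbors: list) -> dict:
              # Favor the unit's local egress; give fabric (remote) paths low weight.
              paths = [("local-egress", 10)]
              paths += [(f"fabric-to-{n}", 1) for n in fabric_neighbors]
              return {"0.0.0.0/0": paths}

      pu160 = ProcessingUnit("pu-160")
      ctrl = Controller()
      pu160.install_routes(ctrl.compute_weighted_routes(pu160, ["pu-170", "pu-190"]))
      print(pu160.routing_table)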
  • FIG. 3 illustrates routing weights/costs for a distributed VNF per one embodiment of the invention.
  • the routing weights/costs per processing unit 350 is for routing of the processing unit 160.
  • the routing weights/costs may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; or it may be maintained at the network element hosting the VNF1 142.
  • the weights/costs may be utilized when a routing protocol is to generate a routing table for the processing unit 160.
  • Each processing unit may have a set of routing weights/costs corresponding to its Layer 3 routing.
  • the processing unit 160 may process packets of the traffic flow through one route, e.g., packets of the traffic flow 1 being processed through the ingress point 164, the execution unit 168, the egress point 162, and the gateway 180, as illustrated as the highlighted path in Figure 2B.
  • A weight/cost of x1 is assigned for the local route (indicated as a local interface).
  • the remote routes for the traffic flows may go through different fabric interfaces (switch points) to remote processing units such as fabric interface 1 to the processing unit 170 at weight/cost x2 (the route is illustrated as the highlighted path in Figure 2A), and fabric interface 2 to a processing unit 190 (not shown) at weight/cost x3.
  • When a remote processing unit (e.g., the processing unit 190) is preferred, the weight/cost x3 will reflect the preference (e.g., having the highest weight or lowest cost among x1, x2, x3, and others).
  • Each fabric interface consumes physical resources such as physical network interfaces (NIs) and bandwidth associated with the physical network interfaces as discussed above. Yet a subscriber may not understand the additional cost of paying for the fabric interfaces; thus, in some embodiments, only one or a few remote fabric interfaces are provided for one processing unit.
  • The weights/costs x1, x2, and x3 are configured differently depending on the routing protocol being used.
  • The weights/costs may be configured to favor local processing units over remote processing units, or to favor one remote processing unit over all other remote processing units and the local processing unit.
  • The ones consuming fewer physical resources and/or having less impact on packet QoS may be favored over the ones consuming more physical resources and/or having a greater impact on packet QoS.
  • When the routing is computed for a traffic flow, the weights/costs of different paths through different interfaces are applied; thus, the resulting routing table may cause packets to favor a local processing unit over a remote processing unit, so that load-balancing in a distributed VNF is no longer evenly distributed among the multiple processing units.
  • Each of the weights/costs in the routing weights/costs per processing unit 350 may be a value of a rational or irrational number.
  • the weight corresponding to a local processing unit may be higher than the weight corresponding to a remote processing unit (e.g., the columns for the fabric interfaces indicating weights lower than that of the local route).
  • the cost corresponding to a local processing unit may be lower than that of a remote processing unit.
  • When a particular remote processing unit is favored over all other remote processing units and the local processing unit, the weights/costs reflect such preference as well.
  • the weight corresponding to a remote processing unit may be set to zero in one embodiment. This indicates that the corresponding remote processing unit is not to be used in normal operation. When the local processing unit becomes unavailable due to failure/congestion, however, packets of the traffic flow will be forwarded to the corresponding remote processing unit.
  • The advantage of indicating a zero-weighted remote processing unit is that the existing routing table may be used as it is when the local processing unit becomes unavailable, instead of waiting for the routing table to be updated, which may take time and resources to complete.
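  • The zero-weight behavior can be pictured with the following hypothetical sketch (interface names and the entry layout are assumptions): zero-weight fabric entries are skipped during normal operation but become usable, without reinstalling the routing table, when the local interface is unavailable.

      def next_hops_for_flow(entries: list, local_available: bool = True) -> list:
          """Return usable next hops from a weighted routing entry.

          'entries' is a list of (interface, weight) pairs. Zero-weight fabric
          interfaces are standby only: skipped in normal operation, used as-is
          when the local interface fails, with no routing-table update."""
          if local_available:
              return [(ifc, w) for ifc, w in entries if w > 0]
          return [(ifc, w) for ifc, w in entries if ifc != "local"]

      entry = [("local", 10), ("fabric-1", 0), ("fabric-2", 0)]
      print(next_hops_for_flow(entry, local_available=True))   # [('local', 10)]
      print(next_hops_for_flow(entry, local_available=False))  # standby fabric hops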
  • modified ECMP routes may be built for each processing unit of a distributed VNF based on the routing weights/costs for that processing unit and/or other processing units of the distributed VNF.
  • the modified ECMP routes are different from original ECMP routes for the distributed VNF in that the original ECMP routes are computed without adding the routing weights/costs.
  • The ECMP routes may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and the network element hosting the VNF1 142.
  • the controller managing a processing unit of a distributed VNF provides both the original ECMP routes and the modified ECMP routes to the network element hosting the processing unit.
  • the network element installs separate routing tables for the original ECMP routes and the modified ECMP routes, and the network element may determine which routing table to use for a given traffic flow at the processing unit.
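  • A minimal sketch of that arrangement, under assumed names and table formats, might keep both tables side by side and select one per traffic flow:

      class PerUnitForwarding:
          """Holds the original (unweighted) and modified (weighted) ECMP tables."""

          def __init__(self, original_ecmp: dict, modified_ecmp: dict):
              self.original_ecmp = original_ecmp
              self.modified_ecmp = modified_ecmp

          def table_for_flow(self, prefer_local_completion: bool) -> dict:
              # Flows marked to complete locally use the weighted routes;
              # other flows fall back to the original ECMP routes.
              return self.modified_ecmp if prefer_local_completion else self.original_ecmp

      fwd = PerUnitForwarding(
          original_ecmp={"0.0.0.0/0": [("local-egress", 1), ("fabric-1", 1)]},
          modified_ecmp={"0.0.0.0/0": [("local-egress", 10), ("fabric-1", 1)]},
      )
      print(fwd.table_for_flow(prefer_local_completion=True))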
  • the weighted execution of a distributed VNF may be implemented using LAG forwarding, in either Layer 2 or Layer 3.
  • the link aggregation of links is based on the configuration of available links, including the links connecting ingress points, egress points, and the switch points.
  • a controller such as the controller 120 or the one or more network elements hosting a distributed VNF may utilize LAG protocols such as Link Aggregation Control Protocol (LACP) or Distributed Relay Control Protocol (DRCP) to aggregate links to forward packets.
  • embodiments of the invention differentiate the local processing unit (which processes packets entering a distributed VNF at the processing unit) and the remote processing unit (which processes packets entering a distributed VNF at a different processing unit).
  • the links local to a processing unit are referred to as local links; in contrast, the links connecting to a remote processing unit are referred to as remote links.
  • links implementing the ingress point 164 and egress point 162 are local links and links implementing the switch point 166 are remote links.
  • the network element hosting a distributed VNF may install different LAG tables for the different processing units of the distributed VNF.
  • For example, the LAG table for the processing unit 160 includes only local links of the processing unit 160, and the LAG table for the processing unit 170 includes only local links of the processing unit 170.
  • When a processing unit of the distributed VNF contains no local links, a LAG table including all remote links of the distributed VNF is installed for the processing unit.
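  • As a hedged sketch of that per-unit LAG table construction (the mapping and link names are assumptions), a unit with local links gets only those links, while a unit without local links receives all remote links of the distributed VNF:

      def build_lag_table(unit_links: dict, unit: str) -> list:
          """Build the LAG table installed for one processing unit.

          'unit_links' maps each processing unit to its local links. A unit
          with local links gets only those; a unit with no local links falls
          back to all remote links of the distributed VNF."""
          local = unit_links.get(unit, [])
          if local:
              return local
          remote = []
          for other, links in unit_links.items():
              if other != unit:
                  remote.extend(links)
          return remote

      links = {"pu-160": ["eth0", "eth1"], "pu-170": ["eth2"], "pu-180": []}
      print(build_lag_table(links, "pu-160"))  # ['eth0', 'eth1']
      print(build_lag_table(links, "pu-180"))  # all remote links of the VNF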
  • FIG. 4 illustrates LAG link weights/costs for a distributed VNF per one embodiment of the invention.
  • the LAG link weights/costs per processing unit 450 is for LAG forwarding of the processing unit 160.
  • The LAG link weights/costs may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and the network element hosting the VNF1 142.
  • the weights/costs may be utilized to generate a LAG table for the processing unit 160.
  • Each processing unit may have a set of LAG link weights/costs corresponding to its LAG forwarding.
  • A weight/cost y1 is assigned to the processing unit 160's local links.
  • a weight/cost y2 is assigned to processing unit 160's remote links to connect to the remote processing unit 170; and a weight/cost y3 is assigned to processing unit 160's remote links to connect to the remote processing unit 190.
  • The weights/costs y1, y2, and y3 are configured differently depending on the LAG implementation.
  • The weights/costs may be configured to favor local links over remote links; the weights/costs may also be configured to favor particular remote links over other remote links and local links.
  • The ones consuming fewer physical resources and/or having less impact on packet QoS may be favored over the ones consuming more physical resources and/or having a greater impact on packet QoS in one embodiment.
  • Each of the weights/costs in the LAG link weights/costs per processing unit 450 may be a value of a rational or irrational number. When weight is indicated, the weight corresponding to the local LAG links may be higher than the weights corresponding to remote LAG links. In contrast, when cost is indicated, the opposite occurs, and the cost corresponding to local LAG links may be lower than that of remote LAG links. When particular remote links are favored over all other remote links/local links, the weights/costs reflect such preference also.
  • the weight corresponding to remote LAG links may be set to zero in one embodiment. This indicates that the corresponding remote LAG links are not to be used in normal operation. When the local links become unavailable due to failure/congestion, however, packets of the traffic flow will be forwarded to the corresponding remote LAG links.
  • A modified LAG table may be built for each processing unit of a distributed VNF based on the LAG link weights/costs for that processing unit and/or other processing units of the distributed VNF.
  • The modified LAG tables are different from the original LAG tables for the distributed VNF in that the original LAG tables are built without adding the LAG link weights/costs.
  • The LAG tables may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and the network element hosting the VNF1 142.
  • the controller managing a processing unit of a distributed VNF provides both the original LAG table and the modified LAG table to the network element hosting the processing unit.
  • The network element installs separate LAG tables for the original LAG forwarding and the modified LAG forwarding, and the network element may determine which LAG table to use for a given traffic flow at the processing unit.
  • Each set of remote links consumes physical resources such as physical network interfaces (NIs) and bandwidth associated with the physical network interfaces as discussed above. Yet the subscriber may not understand the additional cost of paying for the remote links; thus, in some embodiments, only one or a few sets of remote links are provided for one processing unit.
  • In one embodiment, the weights/costs in Figures 3-4 are Boolean values, where the local interface and local LAG links have a value of true, and other interfaces and remote LAG links have a value of false. Based on the Boolean values, a routing table and a LAG table may be generated for weighted load-balancing among multiple processing units of a distributed VNF, similar to the weighted load-balancing performed when the weights/costs are rational or irrational values as discussed above.
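  • A tiny illustrative helper (assumed convention, not part of the disclosure) shows one way Boolean flags could be turned into weights for generating such tables: true (local) becomes a non-zero weight and false (remote) becomes zero, i.e. standby only.

      def weights_from_booleans(flags: dict) -> dict:
          """Map Boolean local/remote flags to forwarding weights.

          True (local interface or local LAG links) -> weight 1;
          False (remote interfaces or remote LAG links) -> weight 0 (standby)."""
          return {name: (1 if is_local else 0) for name, is_local in flags.items()}

      print(weights_from_booleans({"local": True, "fabric-1": False, "fabric-2": False}))
      # {'local': 1, 'fabric-1': 0, 'fabric-2': 0}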
  • FIG. 5 is a flow diagram illustrating weighted load-balancing among a plurality of processing units per one embodiment of the invention.
  • Method 500 may be performed in a controller such as the controller 120, or in the controller and a network element such as the one implementing the distributed VNF1 142. Both controller and network element may be implemented in one or more network devices as discussed above.
  • When the method 500 is performed in a network element hosting a distributed VNF, the network element optionally receives a message from a controller (referred to as a controlling network device when the controller is implemented in a network device) at reference 502.
  • the message indicates a request to install a packet processing rule for a VNF.
  • the VNF is a distributed VNF that is implemented over a plurality of processing units including a first processing unit and a second processing unit.
  • the packet processing rule is installed for the first processing unit based on a determination that the VNF is implemented over the plurality of processing units.
  • the packet processing rule for the VNF prefers, favors, or prioritizes completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow.
  • the first processing unit is the processing unit by which the traffic flow enters the VNF, thus the first processing unit is the local processing unit while the second processing unit is the remote processing unit.
  • the packet processing rule favors the local processing unit over the remote processing unit in one embodiment. In that embodiment, the local processing unit is the designated processing unit by default.
  • a remote processing unit is the designated processing unit for the traffic flow.
  • the remote processing unit may include processing resources that the local processing unit and the other processing units do not include (or may have more processing resources than the local processing unit and the other processing units), thus the remote processing unit is preferable.
  • the preference may be indicated through a weight/cost associated with the local/remote processing unit.
  • the packet processing rule indicates a hop between the first processing unit and the second processing unit, wherein the hop represents a cost in processing packets.
  • the packet processing rule indicates weights/costs of completing packet processing of the traffic flow. For example, one weight/cost may indicate the weight/cost of completing packet processing of the traffic flow at the processing unit by which the packets of the traffic flow are received for the VNF (the local processing unit), and another weight/cost may indicate the weight/cost of completing packet processing of the traffic flow at a different processing unit (the remote processing unit) when the packets of the traffic flow are received from another processing unit of the VNF.
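  • One hypothetical way to encode such a rule as a data structure (the PacketProcessingRule name and its fields are assumptions made for illustration) is a per-VNF record holding the designated unit and a completion weight per processing unit:

      from dataclasses import dataclass, field

      @dataclass
      class PacketProcessingRule:
          """Illustrative encoding of a per-VNF packet processing rule.

          'completion_weights' maps each processing unit to the weight of
          completing the traffic flow there; the designated unit (normally
          the local unit by which the flow enters the VNF) gets the highest
          weight."""
          vnf: str
          designated_unit: str
          completion_weights: dict = field(default_factory=dict)

      rule = PacketProcessingRule(
          vnf="VNF1",
          designated_unit="pu-160",   # the local unit for this traffic flow
          completion_weights={"pu-160": 10, "pu-170": 1},
      )
      print(rule.completion_weights[rule.designated_unit])  # 10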
  • the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 3 equal-cost multi-path (ECMP) routing is less costly than using the second processing unit in the ISO Layer 3 ECMP routing.
  • the packet processing rule may be an entry in a routing table for ECMP routing, and the entry in the routing table is generated based on the weights/costs associated with the local and remote processing units.
  • a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 3 ECMP routing, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 3 ECMP routing.
  • the numerical non-zero value indicates the first processing unit carrying more weight (or incurring less cost) in packet forwarding/routing than the second processing unit with the zero weight.
  • the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 2 link aggregation group (LAG) is less costly than using the second processing unit in the ISO Layer 2 LAG.
  • The packet processing rule may be an entry in a LAG table for LAG forwarding, and the entry in the LAG table is generated based on the weights/costs associated with the local and remote LAG links.
  • a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 2 LAG, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 2 LAG.
  • the numerical non-zero value indicates the first processing unit carrying more weight (or incurring less cost) in packet forwarding/routing than the second processing unit with the zero weight.
  • the packet processing rule is installed at the network element such as the one implementing the distributed VNF1 142.
  • the packet processing rule is installed at the controller such as the controller 120.
  • the first processing unit processes a packet in the traffic flow based on the packet processing rule.
  • the first processing unit identifies the packet as in the traffic flow based on the packet/frame header of the packet. Since the packet processing rule prefers the first processing unit to the second processing unit, the first processing unit does not forward the packet to the second processing unit for processing, but completes packet processing of the packet in the first processing unit instead.
  • the weight corresponding to the second processing unit may be configured to be zero.
  • a high percentage (but not all) of packets of the traffic flow are processed by the first processing unit, and the rest is forwarded to the second processing unit to be processed.
  • the weight corresponding to the second processing unit may be configured to a value smaller than the weight corresponding to the first processing unit.
  • a cost corresponding to the second processing unit may be configured to a value greater than the cost corresponding to the first processing unit.
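  • If costs rather than weights are configured, one simple (assumed, inverse-cost) conversion illustrates how a greater cost for the second processing unit translates into a smaller share of the traffic:

      def weights_from_costs(costs: dict) -> dict:
          """Convert per-unit costs into selection weights (lower cost => higher weight)."""
          return {unit: 1.0 / cost for unit, cost in costs.items()}

      # Hypothetical costs: the local unit pu-160 is ten times cheaper than pu-170,
      # so it receives ten times the selection weight.
      print(weights_from_costs({"pu-160": 1, "pu-170": 10}))
      # {'pu-160': 1.0, 'pu-170': 0.1}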
  • the packet is sent out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
  • The egress point is the egress point 162 when the first processing unit is the processing unit 160.
  • the egress point may be coupled to a gateway of the VNF such as the gateway 180. From the gateway 180, the packet is forwarded out of the VNF.
  • Each of the first and second processing units is one of a processor, a processor core, a VM, a container, a unikernel, and a network device in one embodiment.
  • When the second processing unit is the designated processing unit for the traffic flow, the packet in the traffic flow will be forwarded to the second processing unit based on the packet processing rule. Once the processing of the packet is completed in the second processing unit, the packet will be sent out of the egress point of the second processing unit.
  • a distributed VNF is no longer viewed as a single hop in packet forwarding in one embodiment.
  • the resource consumption and impact on QoS to forward packets to a remote processing unit may be reflected using the packet processing rule.
  • the packet processing rule may apply weights/costs to generate packet forwarding tables such as the routing tables and LAG tables discussed above.
  • load-balancing in a distributed VNF is more efficient in embodiments of the invention.
  • zero weight is applied to values corresponding to a remote processing unit.
  • Such application causes a distributed VNF to process packets using a local processing unit during normal operations, but process packets using remote processing unit(s) when the local processing unit is unavailable.
  • zero-weighting allows a distributed VNF to mitigate failure/congestion cases without updating forwarding tables.
  • Figure 6A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, per some embodiments of the invention.
  • Figure 6A shows NDs 600A-H, and their connectivity by way of lines between 600A- 600B, 600B-600C, 600C-600D, 600D-600E, 600E-600F, 600F-600G, and 600A-600G, as well as between 600H and each of 600A, 600C, 600D, and 600G.
  • These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link).
  • An additional line extending from NDs 600A, 600E, and 600F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
  • Two of the exemplary ND implementations in Figure 6A are: 1) a special-purpose network device 602 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general-purpose network device 604 that uses common off-the-shelf (COTS) processors and a standard OS.
  • the special-purpose network device 602 includes networking hardware 610 comprising a set of one or more processor(s) or processor core(s) 612, forwarding resource(s) 614 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 616 (through which network connections are made, such as those shown by the connectivity between NDs 600A-H), as well as non-transitory machine-readable storage media 618 having stored therein networking software 620.
  • the networking software 620 may be executed by the networking hardware 610 to instantiate a set of one or more networking software instance(s) 622.
  • Each of the networking software instance(s) 622, and that part of the networking hardware 610 that executes that network software instance form a separate virtual network element 630A-R.
  • Each of the virtual network element(s) (VNEs) 630A-R includes a control communication and configuration module 632A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 634A-R, such that a given virtual network element (e.g., 630A) includes the control communication and configuration module (e.g., 632A), a set of one or more forwarding table(s) (e.g., 634A), and that portion of the networking hardware 610 that executes the virtual network element (e.g., 630A).
  • the networking software 620 includes one or more VNFs such as VNF1 142, and each network element may include a VNF instance such as VNF Instance (VI) 621A and VI 621R.
  • the special-purpose network device 602 is often physically and/or logically considered to include: 1) a ND control plane 624 (sometimes referred to as a control plane) comprising the processor(s) or processor core(s) 612 that execute the control communication and configuration module(s) 632A-R; and 2) a ND forwarding plane 626 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 614 that utilize the forwarding table(s) 634A-R and the physical NIs 616.
  • the ND control plane 624 (the processor(s) or processor core(s) 612 executing the control communication and configuration module(s) 632A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 634A-R, and the ND forwarding plane 626 is responsible for receiving that data on the physical NIs 616 and forwarding that data out the appropriate ones of the physical NIs 616 based on the forwarding table(s) 634A-R.
  • Figure 6B illustrates an exemplary way to implement the special-purpose network device 602 per some embodiments of the invention.
  • Figure 6B shows a special-purpose network device including cards 638 (typically hot pluggable). While in some embodiments the cards 638 are of two types (one or more that operate as the ND forwarding plane 626 (sometimes called line cards), and one or more that operate to implement the ND control plane 624 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card).
  • a service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway)).
  • the general-purpose network device 604 includes hardware 640 comprising a set of one or more processor(s) 642 (which are often COTS processors or processor cores) and physical NIs 646, as well as non-transitory machine-readable storage media 648 having stored therein software 650.
  • the processor(s) 642 execute the software 650 to instantiate one or more sets of one or more applications 664A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization.
  • the virtualization layer 654 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 662A-R called software containers that may each be used to execute one (or more) of the sets of applications 664A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes.
  • the virtualization layer 654 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 664A-R is run on top of a guest operating system within an instance 662A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes.
  • one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application.
  • a unikernel can be implemented to run directly on hardware 640, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container.
  • embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 654, unikernels running within software containers represented by instances 662A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
  • the instantiation of the one or more sets of one or more applications 664A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 652.
  • the software 650 includes one or more VNFs such as VNF1 142, and each of the network elements 660A-R may include a VNF instance.
  • the virtual network element(s) 660A-R perform similar functionality to the virtual network element(s) 630A-R - e.g., similar to the control communication and configuration module(s) 632A and forwarding table(s) 634A (this virtualization of the hardware 640 is sometimes referred to as network function virtualization (NFV)).
  • While embodiments are illustrated with each instance 662A-R corresponding to one VNE 660A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 662A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
  • the virtualization layer 654 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 662A-R and the physical NI(s) 646, as well as optionally between the instances 662A-R; in addition, this virtual switch may enforce network isolation between the VNEs 660A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
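  • As a rough illustration only (the VNE names and the membership table are assumptions of this sketch, not part of the disclosure), the isolation behavior of such a virtual switch can be reduced to a VLAN-membership check applied before forwarding between instances:

# VLAN membership per VNE; by policy, VNEs that share no VLAN must not communicate.
vlan_membership = {
    "VNE-660A": {10, 20},
    "VNE-660B": {20},
    "VNE-660C": {30},
}

def virtual_switch_permits(src_vne: str, dst_vne: str, frame_vlan: int) -> bool:
    # Forward only when both VNEs are members of the frame's VLAN.
    return (frame_vlan in vlan_membership.get(src_vne, set())
            and frame_vlan in vlan_membership.get(dst_vne, set()))

assert virtual_switch_permits("VNE-660A", "VNE-660B", 20)       # shared VLAN 20
assert not virtual_switch_permits("VNE-660A", "VNE-660C", 30)   # isolated by policy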
  • the third exemplary ND implementation in Figure 6A is a hybrid network device 606, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND.
  • a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 602) could provide for para-virtualization to the networking hardware present in the hybrid network device 606.
  • each of the VNEs receives data on the physical NIs (e.g., 616, 646) and forwards that data out to the appropriate ones of the physical NIs (e.g., 616, 646).
  • a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
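  • As a minimal sketch of the header fields listed above being collected into a single lookup key (the field names and the dictionary-based packet representation are assumptions of the sketch, not the patent's terminology):

from typing import NamedTuple

class FlowKey(NamedTuple):
    src_ip: str
    dst_ip: str
    protocol: int    # e.g., 6 for TCP, 17 for UDP
    src_port: int    # protocol (L4) port, not a physical port of the ND
    dst_port: int
    dscp: int

def flow_key_from_header(header: dict) -> FlowKey:
    # Build the per-flow key an IP-router VNE could use for forwarding decisions.
    return FlowKey(
        src_ip=header["src_ip"],
        dst_ip=header["dst_ip"],
        protocol=header["protocol"],
        src_port=header.get("src_port", 0),
        dst_port=header.get("dst_port", 0),
        dscp=header.get("dscp", 0),
    )

key = flow_key_from_header({"src_ip": "192.0.2.1", "dst_ip": "198.51.100.2",
                            "protocol": 6, "src_port": 49152, "dst_port": 443})
assert key.dscp == 0   # DSCP defaults to best effort in this sketch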
  • Figure 6C illustrates various exemplary ways in which VNEs may be coupled per some embodiments of the invention.
  • Figure 6C shows VNEs 670A.1-670A.P (and optionally VNEs 670A.Q-670A.R) implemented in ND 600A and VNE 670H.1 in ND 600H.
  • VNEs 670A.1-P are separate from each other in the sense that they can receive packets from outside ND 600A and forward packets outside of ND 600A; VNE 670A.1 is coupled with VNE 670H.1, and thus they communicate packets between their respective NDs; VNE 670A.2-670A.3 may optionally forward packets between themselves without forwarding them outside of the ND 600A; and VNE 670A.P may optionally be the first in a chain of VNEs that includes VNE 670A.Q followed by VNE 670A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 6C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
  • the NDs of Figure 6A may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services.
  • Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs.
  • end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers.
  • one or more of the electronic devices operating as the NDs in Figure 6A may also host one or more such servers (e.g., in the case of the general-purpose network device 604, one or more of the software instances 662A-R may operate as servers; the same would be true for the hybrid network device 606; in the case of the special-purpose network device 602, one or more such servers could also be run on a virtualization layer executed by the processor(s) or processor core(s) 612); in which case the servers are said to be co-located with the VNEs of that ND.
  • a virtual network is a logical abstraction of a physical network (such as that in Figure 6A) that provides network services (e.g., L2 and/or L3 services).
  • a virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
  • a network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network.
  • a virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND).
  • a virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
  • Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network)).
  • Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network - originated attacks, to avoid malformed route announcements), and management capabilities (e.g., full detection and processing).
  • Figure 6D illustrates a network with a single network element on each of the NDs of Figure 6A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), per some embodiments of the invention.
  • Figure 6D illustrates network elements (NEs) 670A-H with the same connectivity as the NDs 600A-H of Figure 6A.
  • Figure 6D illustrates that the distributed approach 672 distributes responsibility for generating the reachability and forwarding information across the NEs 670A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
  • the control communication and configuration module(s) 632A-R of the ND control plane 624 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE))).
  • the NEs 670A-H (e.g., the processor(s) 612 executing the control communication and configuration module(s) 632A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information.
  • Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 624.
  • the ND control plane 624 programs the ND forwarding plane 626 with information (e.g., adjacency and route information) based on the routing structure(s).
  • the ND control plane 624 programs the adjacency and route information into one or more forwarding table(s) 634A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 626.
  • the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 602, the same distributed approach 672 can be implemented on the general-purpose network device 604 and the hybrid network device 606.
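  • Purely as an illustrative sketch of the split described above (the class names, metrics, and next-hop labels are assumptions, not the disclosure's terminology), the control plane can be pictured as selecting best routes from a RIB and programming them into a FIB that performs longest-prefix matching:

import ipaddress

class RoutingInformationBase:
    # Control-plane view: may hold several candidate routes per prefix.
    def __init__(self):
        self.routes = {}   # prefix string -> list of (metric, next_hop)

    def add_route(self, prefix, metric, next_hop):
        self.routes.setdefault(prefix, []).append((metric, next_hop))

    def best_routes(self):
        # Select the lowest-metric candidate for each prefix.
        return {prefix: min(candidates)[1] for prefix, candidates in self.routes.items()}

class ForwardingInformationBase:
    # Forwarding-plane view: longest-prefix match over the programmed routes.
    def __init__(self):
        self.entries = {}  # ipaddress network -> next hop

    def program(self, best_routes):
        self.entries = {ipaddress.ip_network(p): nh for p, nh in best_routes.items()}

    def lookup(self, dst_ip):
        address = ipaddress.ip_address(dst_ip)
        matches = [net for net in self.entries if address in net]
        return self.entries[max(matches, key=lambda net: net.prefixlen)] if matches else None

rib = RoutingInformationBase()
rib.add_route("10.0.0.0/8", metric=20, next_hop="ND-600B")
rib.add_route("10.0.0.0/8", metric=10, next_hop="ND-600G")
fib = ForwardingInformationBase()
fib.program(rib.best_routes())        # the ND control plane programs the ND forwarding plane
assert fib.lookup("10.1.2.3") == "ND-600G"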
  • Figure 6D illustrates a centralized approach 674 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination.
  • the illustrated centralized approach 674 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 676 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized.
  • the centralized control plane 676 has a south bound interface 682 with a data plane 680 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 670A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes).
  • the centralized control plane 676 includes a network controller 678, which includes a centralized reachability and forwarding information module 679 that determines the reachability within the network and distributes the forwarding information to the NEs 670A-H of the data plane 680 over the south bound interface 682 (which may use the OpenFlow protocol).
  • the network intelligence is centralized in the centralized control plane 676 executing on electronic devices that are typically separate from the NDs.
  • each of the control communication and configuration module(s) 632A-R of the ND control plane 624 typically includes a control agent that provides the VNE side of the south bound interface 682.
  • the ND control plane 624 (the processor(s) 612 executing the control communication and configuration module(s) 632A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 676 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 679 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 632A-R, in addition to communicating with the centralized control plane 676, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 674, but may also be considered a hybrid approach).
  • the same centralized approach 674 can be implemented with the general-purpose network device 604 (e.g., each of the VNE 660A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 676 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 679; it should be understood that in some embodiments of the invention, the VNEs 660A-R, in addition to communicating with the centralized control plane 676, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 606.
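  • The division of labor in the centralized approach can be sketched as follows (plain Python only, not an actual OpenFlow implementation; the topology, names, and message format are assumptions of the sketch): a controller computes next hops from its global view and pushes per-NE forwarding entries southbound, while each NE merely installs what it receives.

from collections import deque

class NetworkElement:
    # Data-plane node: installs whatever the controller sends over the southbound interface.
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}   # destination NE -> next-hop NE

    def install(self, entries):
        self.forwarding_table.update(entries)

class CentralizedControlPlane:
    # Controller with a global topology view; computes reachability on behalf of every NE.
    def __init__(self, topology):
        self.topology = topology     # NE name -> set of neighbor NE names

    def _next_hops(self, source):
        # Breadth-first search from the source; remember the first hop on a shortest path.
        next_hop, visited, frontier = {}, {source}, deque([(source, None)])
        while frontier:
            node, first_hop = frontier.popleft()
            for neighbor in self.topology[node]:
                if neighbor in visited:
                    continue
                visited.add(neighbor)
                hop = neighbor if first_hop is None else first_hop
                next_hop[neighbor] = hop
                frontier.append((neighbor, hop))
        return next_hop

    def push_forwarding_information(self, network_elements):
        for ne in network_elements:              # one southbound "message" per NE
            ne.install(self._next_hops(ne.name))

topology = {"600A": {"600B", "600G"}, "600B": {"600A", "600C"},
            "600C": {"600B"}, "600G": {"600A"}}
network_elements = [NetworkElement(name) for name in topology]
CentralizedControlPlane(topology).push_forwarding_information(network_elements)
assert network_elements[0].forwarding_table["600C"] == "600B"   # 600A reaches 600C via 600B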
  • NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run.
  • NFV and SDN both aim to make use of commodity server hardware and physical switches.
  • Figure 6D also shows that the centralized control plane 676 has a north bound interface 684 to an application layer 686, in which resides application(s) 688.
  • the centralized control plane 676 has the ability to form virtual networks 692 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 670A-H of the data plane 680 being the underlay network)) for the application(s) 688.
  • the centralized control plane 676 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
  • While Figure 6D shows the distributed approach 672 separate from the centralized approach 674, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention.
  • For example: 1) embodiments may generally use the centralized approach (SDN) 674, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree.
  • Such embodiments are generally considered to fall under the centralized approach 674, but may also be considered a hybrid approach.
  • the VNF path/link computation element (PLCE) 124 discussed above may be a part of the network controller 678, which may perform functions similar to those of the controller 120 in one embodiment.
  • While Figure 6D illustrates the simple case where each of the NDs 600A-H implements a single NE 670A-H, the network control approaches described with reference to Figure 6D also work for networks where one or more of the NDs 600A-H implement multiple VNEs (e.g., VNEs 630A-R, VNEs 660A-R, those in the hybrid network device 606).
  • the network controller 678 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 678 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 692 (all in the same one of the virtual network(s) 692, each in different ones of the virtual network(s) 692, or some combination).
  • the network controller 678 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 676 to present different VNEs in the virtual network(s) 692 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
  • Figures 6E and 6F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 678 may present as part of different ones of the virtual networks 692.
  • Figure 6E illustrates the simple case of where each of the NDs 600A-H implements a single NE 670A-H (see Figure 6D), but the centralized control plane 676 has abstracted multiple of the NEs in different NDs (the NEs 670A-C and G-H) into (to represent) a single NE 670I in one of the virtual network(s) 692 of Figure 6D, per some embodiments of the invention.
  • Figure 6E shows that in this virtual network, the NE 670I is coupled to NE 670D and 670F, which are both still coupled to NE 670E.
  • Figure 6F illustrates a case where multiple VNEs (VNE 670A.1 and VNE 670H.1) are implemented on different NDs (ND 600A and ND 600H) and are coupled to each other, and where the centralized control plane 676 has abstracted these multiple VNEs such that they appear as a single VNE 670T within one of the virtual networks 692 of Figure 6D, per some embodiments of the invention.
  • the abstraction of a NE or VNE can span multiple NDs.
  • the electronic device(s) running the centralized control plane 676 may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device).
  • Figure 7 illustrates a general-purpose control plane device 704 including hardware 740 comprising a set of one or more processor(s) 742 (which are often COTS processors) and physical NIs 746, as well as non-transitory machine-readable storage media 748 having stored therein centralized control plane (CCP) software 750.
  • the CCP software 750 includes the VNF PLCE 124 discussed above.
  • the processor(s) 742 typically execute software to instantiate a virtualization layer 754 (e.g., in one embodiment the virtualization layer 754 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 762A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 754 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 762A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application).
  • an instance of the CCP software 750 (illustrated as CCP instance 776A) is executed (e.g., within the instance 762A) on the virtualization layer 754.
  • in embodiments where compute virtualization is not used, the CCP instance 776A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general-purpose control plane device 704.
  • the instantiation of the CCP instance 776A, as well as the virtualization layer 754 and instances 762A-R if implemented, are collectively referred to as software instance(s) 752.
  • the CCP instance 776A includes a network controller instance 778.
  • the network controller instance 778 includes a centralized reachability and forwarding information module instance 779 (which is a middleware layer providing the context of the network controller 678 to the operating system and communicating with the various NEs), and a CCP application layer 780 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces).
  • this CCP application layer 780 within the centralized control plane 676 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.
  • the CCP application layer 780 includes a VNF PLCE instance 782.
  • the centralized control plane 676 transmits relevant messages to the data plane 680 based on CCP application layer 780 calculations and middleware layer mapping for each flow.
  • a flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers.
  • Different NDs/NEs/VNEs of the data plane 680 may receive different messages, and thus different forwarding information.
  • the data plane 680 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
  • Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets.
  • the model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
  • Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many per a defined scheme (e.g., selecting a first forwarding table entry that is matched).
  • Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet.
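  • To make the match/action behavior concrete, here is a deliberately simplified Python sketch (the match fields, the wildcard convention, the table contents, and the table-miss behavior are assumptions of the sketch, not a normative reading of OpenFlow):

WILDCARD = object()   # stands for "match anything" in a flow entry field

flow_table = [
    # (match criteria, action) - the first matching entry wins in this sketch
    ({"eth_dst": "00:11:22:33:44:55", "tcp_dst": 443}, ("forward", "port-2")),
    ({"eth_dst": WILDCARD, "tcp_dst": 23}, ("drop", None)),
    ({"eth_dst": WILDCARD, "tcp_dst": WILDCARD}, ("flood", None)),
]

def classify(packet_fields):
    # Return the action of the first flow entry whose criteria all match the packet.
    for criteria, action in flow_table:
        if all(value is WILDCARD or packet_fields.get(field) == value
               for field, value in criteria.items()):
            return action
    return ("drop", None)   # table-miss behavior chosen for this sketch

assert classify({"eth_dst": "00:11:22:33:44:55", "tcp_dst": 443}) == ("forward", "port-2")
assert classify({"eth_dst": "aa:bb:cc:dd:ee:ff", "tcp_dst": 23}) == ("drop", None)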
  • a network interface may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI.
  • a virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface).
  • a loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address.
  • The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
  • Next hop selection by the routing system for a given destination may resolve to one path (that is, a routing protocol may generate one next hop on a shortest path); but if the routing system determines there are multiple viable next hops (that is, the routing protocol generated forwarding solution offers more than one next hop on a shortest path - multiple equal cost next hops), some additional criteria are used - for instance, in a connectionless network, Equal Cost Multi Path (ECMP) (also known as Equal Cost Multi Pathing, multipath forwarding and IP multipath) may be used (e.g., typical implementations use as the criteria particular header fields to ensure that the packets of a particular packet flow are always forwarded on the same next hop to preserve packet flow ordering).
  • a packet flow is defined as a set of packets that share an ordering constraint.
  • the set of packets in a particular TCP transfer sequence needs to arrive in order, or else the TCP logic will interpret the out of order delivery as congestion and slow the TCP transfer rate down.
  • a Layer 3 (L3) Link Aggregation (LAG) link is a link directly connecting two NDs with multiple IP-addressed link paths (each link path is assigned a different IP address), and a load distribution decision across these different link paths is performed at the ND forwarding plane; in which case, a load distribution decision is made between the link paths.
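  • A minimal sketch of the hash-based selection typically used across equal-cost next hops or LAG link paths follows (using Python's zlib.crc32 over the flow's header fields is an assumption for illustration; deployed implementations use their own hash functions and field sets):

import zlib

def select_path(flow_key, paths):
    # Hashing the flow's header fields keeps every packet of the flow on the same
    # path, preserving packet ordering within the flow.
    digest = zlib.crc32("|".join(str(field) for field in flow_key).encode())
    return paths[digest % len(paths)]

link_paths = ["link-path-1", "link-path-2", "link-path-3"]
flow = ("192.0.2.1", "198.51.100.2", 6, 49152, 443)    # src, dst, protocol, ports
first_choice = select_path(flow, link_paths)
# Every packet of the same flow resolves to the same link path:
assert all(select_path(flow, link_paths) == first_choice for _ in range(10))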
  • Certain NDs internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits.
  • a subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session.
  • a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects.
  • Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM.
  • a subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, a dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking).
  • When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided.
  • a virtual circuit, synonymous with virtual connection and virtual channel, is a connection oriented communication service that is delivered by means of packet mode communication.
  • Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase.
  • Virtual circuits may exist at different layers. For example, at layer 4, a connection oriented transport layer protocol such as Transmission Control Protocol (TCP) may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order.
  • the virtual circuit is identified by the source and destination network socket address pair, i.e. the sender and receiver IP address and port number.
  • TCP includes segment numbering and reordering on the receiver side to prevent out-of-order delivery.
  • Virtual circuits are also possible at Layer 3 (network layer) and Layer 2 (datalink layer); such virtual circuit protocols are based on connection oriented packet switching, meaning that data is always delivered along the same network path, i.e. through the same NEs/VNEs.
  • the packets are not routed individually and complete addressing information is not provided in the header of each data packet; only a small virtual channel identifier (VCI) is required in each packet; and routing information is transferred to the NEs/VNEs during the connection establishment phase.
  • Certain NDs use a hierarchy of circuits.
  • the leaf nodes of the hierarchy of circuits are subscriber circuits.
  • the subscriber circuits have parent circuits in the hierarchy that typically represent aggregations of multiple subscriber circuits, and thus the network segments and elements used to provide access network connectivity of those end user devices to the ND.
  • These parent circuits may represent physical or logical aggregations of subscriber circuits (e.g., a virtual local area network (VLAN), a permanent virtual circuit (PVC) (e.g., for Asynchronous Transfer Mode (ATM)), a circuit-group, a channel, a pseudo-wire, a physical NI of the ND, and a link aggregation group).
  • a circuit-group is a virtual construct that allows various sets of circuits to be grouped together for configuration purposes, for example aggregate rate control.
  • a pseudo-wire is an emulation of a layer 2 point-to-point connection-oriented service.
  • a link aggregation group is a virtual construct that merges multiple physical NIs for purposes of bandwidth aggregation and redundancy.
  • the parent circuits physically or logically encapsulate the subscriber circuits.
  • Each VNE (e.g., a virtual router, a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable.
  • each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s).
  • Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
  • interfaces that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing).
  • the subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND.
  • a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the invention include methods of packet processing in a distributed virtual network function (VNF). In one embodiment, a packet processing rule is installed for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit. The packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow. A packet in the traffic flow is processed by the first processing unit based on the packet processing rule, and the packet is sent out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.

Description

METHOD AND SYSTEM FOR PACKET PROCESSING OF A DISTRIBUTED VIRTUAL NETWORK FUNCTION (VNF)
TECHNICAL FIELD
[0001] Embodiments of the invention relate to the field of networking; and more specifically, relate to a method and system for packet processing of a virtual network function.
BACKGROUND
[0002] The recent advances in software engineering and high-performance commodity servers facilitate virtualization of network functions (NFs). NFs traditionally delivered on proprietary and application-specific equipment now can be realized in software running on generic server hardware (e.g., commercial off-the-shelf (COTS) servers). The technology, using one or more virtual network functions (VNFs) and referred to as network function virtualization (NFV), is gaining popularity with network operators.
[0003] A VNF may be implemented at various parts of a network, such as at a serving gateway (S-GW), a packet data network gateway (P-GW), a serving GPRS (general packet radio service) support node (SGSN), a gateway GPRS support node (GGSN), a broadband remote access server (BRAS), and a provider edge (PE) router. VNFs can also be implemented to support various services (also called appliances, middleboxes, or service functions), such as content filter, deep packet inspection (DPI), logging/metering/charging/advanced charging, firewall (FW), virus scanning (VS), intrusion detection and prevention (IDP), and network address translation (NAT), and so forth. The flexibility offered by VNFs allows more dynamic deployments of traditional network functions, in various locations such as the operator's cloud or even central offices and point of presences (POPs) where a smaller scale data center may reside.
[0004] A VNF may be implemented over a plurality of processing units, and such VNF may be referred to as a distributed VNF. With a distributed VNF processing packets using a plurality of processing units, efficient load balancing among the plurality of processing units is advantageous.
SUMMARY
[0005] Embodiments of the invention include methods of packet processing in a distributed virtual network function (VNF). In one embodiment, a packet processing rule is installed for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit. The packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow. A packet in the traffic flow is processed by the first processing unit based on the packet processing rule, and the packet is sent out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
[0006] Embodiments of the invention include electronic devices for packet processing in a distributed virtual network function (VNF). In one embodiment, an electronic device includes non-transitory machine-readable storage medium to store instructions and a processor coupled to the non-transitory machine-readable storage medium to process the stored instructions to perform operations. The operations include installing a packet processing rule for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit. The packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow. The operations include processing a packet in the traffic flow by the first processing unit based on the packet processing rule, and sending the packet out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
[0007] Embodiments of the invention include non-transitory machine-readable storage media for packet processing in a distributed virtual network function (VNF). In one embodiment, a machine-readable storage medium provides instructions, which when executed by a processor of an electronic device, cause the processor to perform operations. The operations include installing a packet processing rule for a first processing unit based on a determination that a VNF is implemented over a plurality of processing units including the first processing unit and a second processing unit. The packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow. The operations include processing a packet in the traffic flow by the first processing unit based on the packet processing rule, and sending the packet out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
[0008] Embodiments of the invention offer efficient ways to process packets among a plurality of processing units of a virtual network function (VNF).
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The invention may best be understood by referring to the following description and accompanying drawings that illustrate embodiments of the invention. In the drawings:
[0010] Figure 1A illustrates a network using virtual network function (VNF) per one embodiment of the invention.
[0011] Figure IB illustrates a distributed VNF in a network per one embodiment of the invention.
[0012] Figure 2A illustrates load-balancing between two processing units within a distributed VNF per one embodiment of the invention.
[0013] Figure 2B illustrates a distributed VNF favoring local packet processing per one embodiment of the invention.
[0014] Figure 3 illustrates routing weights/costs for a distributed VNF per one embodiment of the invention.
[0015] Figure 4 illustrates LAG link weights/costs for a distributed VNF per one embodiment of the invention.
[0016] Figure 5 is a flow diagram illustrating weighted load-balancing among a plurality of processing units per one embodiment of the invention.
[0017] Figure 6A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, per some embodiments of the invention.
[0018] Figure 6B illustrates an exemplary way to implement a special-purpose network device per some embodiments of the invention.
[0019] Figure 6C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled per some embodiments of the invention.
[0020] Figure 6D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), per some embodiments of the invention.
[0021] Figure 6E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), per some embodiments of the invention.
[0022] Figure 6F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, per some embodiments of the invention.
[0023] Figure 7 illustrates a general-purpose control plane device with centralized control plane (CCP) software, per some embodiments of the invention.
DETAILED DESCRIPTION
[0024] The following description describes methods and apparatus for packet processing of a distributed virtual network function (VNF). In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth to provide a more thorough understanding of the present invention. One skilled in the art will appreciate, however, that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement proper functionality without undue experimentation.
[0025] Bracketed text and blocks with dashed borders (such as large dashes, small dashes, dot-dash, and dots) may be used to illustrate optional operations that add additional features to the embodiments of the invention. Such notation, however, should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in some embodiments of the invention.
Terms
[0026] References in the specification to "one embodiment," "an embodiment," "an example embodiment," and so forth, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
[0027] The following description and claims may use the terms "coupled" and "connected," along with their derivatives. These terms are not intended as synonyms for each other.
"Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
[0028] An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., a processor may be a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, or a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed). When the electronic device is turned on, that part of the code that is to be executed by the processor(s) of the electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of the electronic device. Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. For example, the set of physical NIs (or the set of physical NI(s) in combination with the set of processors executing code) may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, a physical NI may comprise radio circuitry capable of (1) receiving data from other electronic devices over a wireless connection and/or (2) sending data out to other devices through a wireless connection. This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the proper parameters (e.g., frequency, timing, channel, bandwidth, and so forth). The radio signal may then be transmitted through antennas to the appropriate recipient(s). In some embodiments, the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter. The NIC(s) may facilitate connecting the electronic device to other electronic devices, allowing them to communicate over a wired connection by plugging a cable into a physical port connected to a NIC. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
[0029] A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video). As explained in more detail below in relation to Figures 6-7, a network device may implement a set of network elements in some embodiments; and in alternative embodiments, a single network element may be implemented by a set of network devices.
Network Implementing Distributed VNFs
[0030] In a packet network, packets may be forwarded within traffic flows (or simply referred to as flows), and a network element may forward the flows based on the network element's forwarding tables (e.g., routing tables or switch tables), which may be managed by one or more controllers. In a software-defined networking (SDN) system, the controllers may be referred to as network controllers or SDN controllers (the two terms are used interchangeably in the specification). A flow may be defined as a set of packets whose headers match a given pattern of bits. A flow may be identified by a set of attributes embedded in one or more packets of the flow. An exemplary set of attributes includes one or more values in a 5-tuple (source and destination IP addresses, a protocol type, source and destination TCP/UDP ports). Another exemplary set of attributes, alternatively or additionally, includes Open Systems Interconnection (OSI) Layer 2 frame header information such as source/destination media access control (MAC) addresses and virtual local area network (VLAN) tag (e.g., IEEE 802.1Q tag).
[0031] Service chaining in a packet network is a way to stitch multiple customer specific services, and to lead a traffic flow through the right path (a service chain) in a packet network. Figure 1A illustrates a network using virtual network functions (VNFs) per one embodiment of the invention. A network 100 includes a controller 120 managing a plurality of network elements, including network elements 140 and 146. The network 100 may implement an SDN architecture, in which case the controller 120 may be an SDN controller, and these network elements 140 and 146 may be implemented as OpenFlow switches when they comply with OpenFlow standards proposed by the Open Networking Foundation. Each of the controller 120 and the network elements 140 and 146 is implemented in one or more network devices in one embodiment.
[0032] The network elements 140 and 146 may communicate through a network cloud 190, which may contain traditional network elements such as routers/switches or other SDN network elements. The network elements 140 and 146, and the traditional network elements such as routers/switches or other SDN network elements in the network cloud 190, may host virtual network functions (VNFs) such as VNF1 142 and VNF2 144. The VNFs process subscribers' traffic by providing services such as content filter, deep packet inspection (DPI),
logging/metering/charging/advanced charging, firewall (FW), virus scanning (VS), intrusion detection and prevention (IDP), network address translation (NAT), and so forth. Each of these VNFs may be associated with one or more network elements (e.g., residing in or being coupled to the network elements) in the network cloud 190. Also, network elements 140 and 146 may host one or more of these or other VNFs.
[0033] In service chaining (also referred to as service function chaining (SFC)), incoming packets may be forwarded based on the header fields of packets or the header fields of frames that encapsulate the packets. The packets may be sent to one or more VNFs associated with a first network element. After the one or more VNFs associated with the first network element process the packets, the first network element forwards the packets to a second network element for later VNFs in the service chain until the packets are processed by all VNFs of the service chain.
[0034] A network element may instantiate a VNF with the goal of optimizing resource allocation in the network 100 for the service chain. Through the coordination of a controller such as the controller 120, a network such as the network 100 may allocate VNFs to various network elements such as SDN network elements and traditional network elements.
[0035] Packet forwarding along a service chain in a network may be performed in OSI Layer 2 or Layer 3. Packet forwarding in Layer 3 may be based on equal-cost multi-path (ECMP) routing. In ECMP routing, a packet may be forwarded from a source VNF to a destination VNF over one of multiple equal cost paths. The ECMP routing may be determined based on the network elements that are associated with the source VNF and the destination VNF. The ECMP routing implemented per one embodiment of the invention may comply with one or more of Internet Engineering Task Force (IETF) Request for Comments (RFC) 2991, entitled "Multipath Issues in Unicast and Multicast Next-Hop Selection," RFC 2992, entitled "Analysis of an Equal-Cost Multi-Path Algorithm," and other industrial standards. Each hop in the ECMP routing may be a network element that is associated with a VNF. Routing of a service chain of VNF1 - VNF2 - VNF3, for example, becomes ECMP routing between VNF1 and VNF2, and between VNF2 and VNF3. In this scenario, the network elements hosting VNF1, VNF2, and VNF3 are three hops in the ECMP routing, and each VNF represents one hop.
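As a minimal illustrative sketch (not part of the disclosed embodiments; the flow attributes and path names are hypothetical), ECMP implementations commonly select one of the equal-cost paths per flow by hashing the flow's 5-tuple, so that all packets of a flow follow the same path:

```python
# Hypothetical sketch of per-flow ECMP next-hop selection; the paths and the
# 5-tuple below are illustrative only.
import hashlib

def select_ecmp_next_hop(five_tuple, next_hops):
    """Pick one equal-cost next hop per flow (per-flow, not per-packet)."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

# A service chain hop from VNF1 to VNF2 reachable over two equal-cost paths.
flow = ("10.0.0.1", "10.0.1.1", "TCP", 12345, 80)
print(select_ecmp_next_hop(flow, ["path_via_NE_140", "path_via_NE_146"]))
```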
[0036] Packet forwarding in Layer 2 may be based on link aggregation group (LAG). A LAG aggregates multiple links to increase throughput beyond what a single link could support, and a LAG may be set up between a source VNF and a destination VNF where the LAG is aggregated from multiple links between the network element hosting the source VNF and the network element hosting the destination VNF. In Layer 2, packets are encapsulated in Layer 2 frames (e.g., Ethernet frames) in forwarding, thus packet forwarding in Layer 2 may be also referred to as frame forwarding. A LAG may also apply at OSI Layer 3, and packet forwarding in a Layer 3 LAG may be based on Internet Protocol (IP) addresses of links. Embodiments of the invention may apply to both Layer 2 and Layer 3 LAG packet forwarding.
[0037] The LAG forwarding implemented per one embodiment of the invention may comply with one or more of the Institute of Electrical and Electronics Engineers (IEEE) 802.1AX, entitled "IEEE Standard for Local and Metropolitan Area Networks - Link Aggregation," IEEE 802.1aq, entitled "IEEE Standard for Local and Metropolitan Area Networks - Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks," and other industrial standards. Each hop in the LAG forwarding may be a network element that is associated with a VNF. Packet forwarding of a service chain of VNF1 - VNF2 - VNF3 becomes LAG forwarding between VNF1 and VNF2, and between VNF2 and VNF3. In this scenario, the network elements hosting VNF1, VNF2, and VNF3 are three hops in the LAG forwarding, and each VNF represents one hop.
[0038] A VNF may be implemented over a plurality of processing units, and such a VNF may be referred to as a distributed VNF. For a distributed VNF, representing the VNF as one hop in packet forwarding has its limitations. Figure 1B illustrates a zoom-in view of the VNF1 of Figure 1A per one embodiment of the invention. In this embodiment, the VNF1 142 is implemented over processing units 160 and 170.
[0039] Each of the processing units 160 and 170 may be a processor, a processor core, a virtual machine (VM), a software container (or simply container), or a unikernel. Processor and processor core are well-known in the art; VM, software container, and unikernel are explained in more detail in relation to the general-purpose network device 604 in Figure 6A. In one embodiment, each processing unit is implemented in a network device, and the VNF1 142 is implemented through multiple network devices that are included in a virtual network element such as the VNE 670T discussed in Figure 6F. While the VNF1 142 contains only two processing units, a distributed VNF may contain more than two processing units, and embodiments of the invention are not limited to a particular number of processing units implementing a distributed VNF.
[0040] Each processing unit receives a packet through one or more ingress points, and after completion of the processing at the processing unit, the packet is forwarded to one or more egress points. When a packet needs further processing by another processing unit, the packet is forwarded through one or more switch points. In Figure 1B, the processing units 160 and 170 include ingress points 164 and 174, egress points 162 and 172, and switch points 166 and 176, respectively. While these ingress points, egress points, and switch points are shown within the processing units 160 and 170, one or more of these ingress points, egress points, and switch points may be outside the processing units 160 and 170.
[0041] The ingress points, egress points, and switch points have different embodiments in different scenarios. For example, in a Layer 2 network, the ingress points, egress points, and switch points may be referred to as the ingress ports, egress ports, and fabric ports, respectively; and in a Layer 3 network, the ingress points, egress points, and switch points may be referred to as the ingress interfaces (IFs), egress interfaces, and fabric interfaces, respectively. The ingress points, egress points, and switch points may be abstractions of physical components of a network device; thus, these points may be virtual and may be referred to as virtual ports and virtual interfaces. For example, each of the ingress ports, egress ports, and fabric ports may be implemented using a physical port of a network interface card (NIC) of a network device implementing the network element hosting the VNF1 142.
[0042] One or more links connect the switch points 166 and 176. The one or more links between the switch points 166 and 176 consume interconnect bandwidth between the processing units 160 and 170. Additionally, the switch points 166 and 176 may be implemented using physical ports of a network device as discussed above. Thus, the switch points 166 and 176, while allowing the processing units 160 and 170 to work together to process packets of a traffic flow, consume resources of one or more network devices implementing the network element hosting the VNF1 142.
[0043] When the processing units 160 and 170 finish processing packets, the packets are forwarded to a gateway 180 in one embodiment. The gateway may be a separate entity such as a processor, a processor core, a VM, a container, a unikernel, or a network device. The gateway is to forward the packets to the next destination, which may be the next VNF in a service chain. In one embodiment when the VNF1 142 is implemented in a datacenter, the gateway is referred to as a default gateway (e.g., a router/switch). The default gateway may accept packets from one VNF and distribute the packets to the next VNF. In an alternative embodiment, the VNF1 does not send packets to the gateway, and the packets are forwarded directly from the egress points 162 and 172 to the next destination.
[0044] The controller 120 may coordinate packet forwarding of the VNF1 142, and the controller 120 may be implemented in a network device having one or more control processing units, where each control processing unit may be one of a processor, a processor core, a VM, a container, a unikernel, and a network device. A VNF path/link computation element (PLCE) 124 may coordinate the packet forwarding of the VNF1 142. The PLCE 124 may be implemented within or coupled to the controller 120. The controller 120 communicates with the VNF1 142 through one or more communication links, and the communication is through a control plane between the controller 120 and the VNF1 142, in contrast to a forwarding plane (also referred to as data plane) including the VNF1 142 and one or more other VNFs, where packets are forwarded between the VNFs for processing. Figure 6D provides more details regarding the control plane and the forwarding/data plane.
[0045] Representing a distributed VNF such as the VNF1 142 as one hop in packet forwarding has limitations. For example, the single hop representation of a distributed VNF masks the distributed packet processing in multiple processing units. When the VNF1 142 is considered as a single hop, the difference between packets being forwarded through the egress points 162 and 172 is masked, since either egress point may be used to exit the distributed VNF1 142. Such abstraction may not sufficiently reflect the impact of load-balancing within a distributed VNF.
[0046] Figure 2A illustrates load-balancing between two processing units within a distributed VNF per one embodiment of the invention. Figure 2A is similar to Figure IB, and the same references indicate elements or components having the same or similar functionalities. Figure 2A illustrates execution units 168 and 178 in the processing units 160 and 170 respectively. The execution units 168 and 178 may process packets (e.g., modifying packet headers/bodies, encapsulating the packets, duplicating the packets, etc.) before forwarding the packets to the egress points. Each of the execution units may be implemented in one or more circuits.
[0047] Within a distributed VNF, to better utilize the multiple processing units of the distributed VNF, packet processing may be load-balanced among the multiple processing units, so that packet processing workload may be distributed more evenly across the multiple processing units. For example, the load-balancing may be based on the packet header (or frame header). A cryptographic hash function may be applied to the packet headers of the incoming packets, so that the incoming packets may be distributed among the multiple processing units.
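A minimal sketch of such header-hash load-balancing, assuming two processing units and hypothetical header values, is shown below; with an evenly distributing hash, roughly half of the incoming packets map to each processing unit:

```python
# Hypothetical sketch of hash-based load-balancing among the processing units
# of a distributed VNF; unit names and headers are illustrative only.
import hashlib

def pick_processing_unit(packet_header: bytes, processing_units):
    digest = int(hashlib.sha256(packet_header).hexdigest(), 16)
    return processing_units[digest % len(processing_units)]

units = ["processing_unit_160", "processing_unit_170"]
for header in [b"flow-A", b"flow-B", b"flow-C", b"flow-D"]:
    # Packets received at unit 160 but hashed to unit 170 cross the switch points.
    print(header, "->", pick_processing_unit(header, units))
```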
[0048] The highlighted path in Figure 2A illustrates a traffic flow that is sourced from the processing unit 160 and load balanced through the processing unit 170. For an evenly distributed load-balancing between the processing units 160 and 170, roughly half of the traffic flow is processed through the execution unit 168 of the processing unit 160, and the other half through the execution unit 178 of the processing unit 170.
[0049] The load balancing based execution within the distributed VNF1 142, as illustrated at reference 202, appears to be efficient when the distributed VNF1 142 is considered as a single hop in packet forwarding of a traffic flow. From outside of the VNF1 142, processing of the packets through the execution units 168 or 178 appears to be the same, regardless of packets being received at the processing units 160 or 170 for the VNF1 142. Yet for packets received at the processing unit 160, the load-balancing of packets may cause the packets to be forwarded to another processing unit (such as the processing unit 170) different from the one at which the packets are received. For packets entering the VNF1 142 through the processing unit 160, we refer to the processing unit 160 as the local processing unit and the processing unit 170 as the remote processing unit. The local processing unit of a distributed VNF for a traffic flow or packets of the traffic flow is the processing unit through which the traffic flow or the packets gain access to the distributed VNF. Thus, the local processing unit may be the very first processing unit that the traffic flow or the packets encounter among the multiple processing units of the distributed VNF. The distributed packet processing, using the remote processing unit, the switch points, and the interconnect bandwidth of the connecting link, consumes more resources compared to local packet processing, which uses only the local processing unit. In this example, local packet processing (packets being processed by the local processing unit) causes packets to be forwarded through the ingress point 164, the execution unit 168, the egress point 162, and the gateway 180; and distributed packet processing (packets being processed by a remote processing unit), as highlighted, causes packets to be forwarded through the ingress point 164, the switch points 166 and 176 over a connecting link, the execution unit 178, the egress point 172, and the gateway 180. While both the local and distributed packet processing use ingress points, execution units, and egress points, the latter also uses switch points and interconnect bandwidth of the connecting link.
[0050] Distributed packet processing may be less than optimal for several reasons. A subscriber (also referred to as a tenant or a customer) of the network 100 may use a distributed VNF such as the VNF1 142 in the subscriber's service chain. The subscriber understands that packets are transmitted through ingress points such as the ingress points 164 and 174, thus the subscriber pays for the bandwidth through the ingress points. Yet in a distributed VNF, the load-balancing means that the distributed VNF also uses bandwidth through switch points such as the switch points 166 and 176. Each of the switch points consumes physical resources such as a physical network interface (NI) of a network device discussed above. The subscriber may be less willing to bear the additional cost of paying for the switch points, and the cost for the switch points of a distributed VNF may be substantial. For example, a distributed VNF with N processing units may implement even distribution of workload through load-balancing, which causes only 1/N of the workload to be processed by the local processing unit, while (N-1)/N of the workload is processed by (N-1) remote processing units. Each switch point to a remote processing unit occupies physical resources, including not only physical network interfaces (NIs) but also the bandwidth associated with these physical network interfaces. The value proposition of using the distributed VNF deteriorates due to the resource consumption related to switch points.
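The arithmetic above can be made concrete with a short sketch (the numbers are purely illustrative): under even load-balancing, the fraction of the workload that must cross switch points grows with the number of processing units N:

```python
# Fraction of the workload that leaves the local processing unit under even
# load-balancing over N processing units, per the (N-1)/N observation above.
def remote_fraction(n_units: int) -> float:
    return (n_units - 1) / n_units

for n in (2, 4, 8):
    print(f"N={n}: {remote_fraction(n):.0%} of the workload crosses switch points")
# N=2 -> 50%, N=4 -> 75%, N=8 -> 88%: interconnect and switch-point usage grows with N.
```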
[0051] Additionally, the load-balancing through switch points to remote processing units may involve a longer path as the highlighted path in Figure 2A shows, and the longer path may impact the quality of service (QoS) of the packets being processed. For example, the longer path may cause additional packet delay, packet jitter, and/or more chances of packet drop.
[0052] Thus, in a distributed VNF, load-balancing among processing units of the distributed VNF may incur additional costs in physical resource consumption and/or impact on QoS of the packets being processed, and such additional costs may make a distributed VNF less efficient.
Distributed VNF Favoring Local Packet Processing
[0053] To make load-balancing in a distributed VNF more efficient, one may prefer packet processing by a local processing unit of the distributed VNF to packet processing by a remote processing unit of the distributed VNF. That is, a packet forwarding policy may favor, prefer, or prioritize (e.g., weight in favor of) a local processing unit over a remote processing unit when processing packets of a traffic flow. Alternatively or in addition, the packet forwarding policy may favor a particular remote processing unit over both the local processing unit and other remote processing unit(s) (e.g., when the particular remote processing unit has execution resources that the local and other remote processing units do not).
[0054] Figure 2B illustrates a distributed VNF favoring local packet processing per one embodiment of the invention. Figure 2B is similar to Figure 2A, and the same references indicate elements or components having the same or similar functionalities. In Figure 2B, the packet forwarding policy favors a local processing unit, the processing unit 160 in this case. Thus, at reference 204, the weighted execution favors the local processing unit, and packet processing more likely takes the highlighted path in Figure 2B.
[0055] The weighted execution is in contrast to evenly distributed load balancing between the multiple processing units of a distributed VNF, and the local processing unit may carry more weight (or less cost), so that between a local processing unit and a remote processing unit, packet processing of a traffic flow in the distributed VNF1 142 is more likely to take the highlighted path in Figure 2B than the highlighted path in Figure 2A.
[0056] The weighted execution of a distributed VNF may consider a remote processing unit as an internal hop, thus the distributed VNF is no longer viewed as a single hop in packet forwarding. The difference between packets being forwarded through the egress points 162 and 172 may be expressed in different routing weights/costs. Thus, the weighted execution of the distributed VNF better reflects the costs of distributed packet processing. For example, a distributed VNF with N processing units may have N-1 internal hops (when one traffic flow is to be received at the first processing unit and processed by the last processing unit in the VNF, and each processing unit has only one or two switch points), and the weighted execution may indicate the cost of load-balancing across all N processing units; such weighted execution reflects the cost of the 2*(N-1) switch points passing the workload.
[0057] Since the weighted execution of the distributed VNF reflects the additional costs of remote packet processing in physical resource consumption and/or impact on QoS of the packets being processed, the weighted execution may utilize network resources consumed by a distributed VNF more efficiently than evenly distributed load-balancing between the multiple processing units of the distributed VNF.
[0058] The weighted execution of the distributed VNF may be advantageous when a particular remote processing unit is preferable over the local processing unit and other remote processing unit(s), for example, when the particular remote processing unit has execution resources that the local and other remote processing units do not (or has more of the execution resources). The execution resources may be available bandwidth/time slots or a particular execution capability (e.g., supporting out-of-order execution). In that case, a weighted execution will more likely forward the packets to the particular remote processing unit. The additional cost of such forwarding will be only one internal hop to the particular remote processing unit (instead of the evenly distributed execution case, where packets are also forwarded to other processing units, which then forward them to the particular remote processing unit).
[0059] The weighted execution of a distributed VNF may be implemented in a Layer 3 network using ECMP routing. A controller such as the controller 120 may run an interior gateway protocol (IGP) and/or an exterior gateway protocol (EGP) to compute ECMP routing for the processing units within a distributed VNF, now that the distributed VNF may be viewed as containing multiple hops. The computation of the ECMP routing may use routing tables. The routing tables may be computed by the controller and stored in the controller; the routing tables may be computed by the controller and installed in processing units of a distributed VNF such as the processing units 160 and 170; or the routing tables may be computed by the processing units themselves and installed in the processing units. The embodiments of the invention cover all different ways the routing tables are computed and installed.
[0060] Figure 3 illustrates routing weights/costs for a distributed VNF per one embodiment of the invention. The routing weights/costs per processing unit 350 are for routing of the processing unit 160. The routing weights/costs may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; or they may be maintained at the network element hosting the VNF1 142. The weights/costs may be utilized when a routing protocol is to generate a routing table for the processing unit 160. Each processing unit may have a set of routing weights/costs that corresponds to its Layer 3 routing.
[0061] For a traffic flow such as traffic flow 1, the processing unit 160 may process packets of the traffic flow through one route, e.g., packets of the traffic flow 1 being processed through the ingress point 164, the execution unit 168, the egress point 162, and the gateway 180, as illustrated as the highlighted path in Figure 2B. A weight/cost of x1 is assigned for the local route (indicated as a local interface). The remote routes for the traffic flows may go through different fabric interfaces (switch points) to remote processing units, such as fabric interface 1 to the processing unit 170 at weight/cost x2 (the route is illustrated as the highlighted path in Figure 2A), and fabric interface 2 to a processing unit 190 (not shown) at weight/cost x3. Note that if a remote processing unit (e.g., the processing unit 190) is preferable for traffic flow 1 over the local processing unit and other remote processing units, the weight/cost x3 will reflect the preference (e.g., having the highest weight or lowest cost among x1, x2, x3, and others). Also note that each fabric interface consumes physical resources such as physical network interfaces (NIs) and the bandwidth associated with the physical network interfaces as discussed above. Yet a subscriber may not appreciate the additional cost of paying for the fabric interfaces; thus, in some embodiments, only one or a few remote fabric interfaces are provided for one processing unit.
[0062] The weights/costs x1, x2, and x3 are configured differently depending on the routing protocol being used. The weights/costs may be configured to favor local processing units over remote processing units, or to favor one remote processing unit over all other remote processing units/the local processing unit. Among the remote processing units, the ones consuming less physical resources and/or having less impact on packet QoS may be favored over the ones consuming more physical resources and/or having more impact on packet QoS.
[0063] When the routing is computed for a traffic flow, the weights/costs of different paths through different interfaces are applied, thus the resulting routing table may cause packets to favor a local processing unit over a remote processing unit, so that load-balancing among the multiple processing units in a distributed VNF is no longer evenly distributed among the multiple processing units.
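A minimal sketch of such weighted route selection, assuming the weights x1, x2, and x3 of Figure 3 are expressed as relative numerical weights (the values below are hypothetical), is:

```python
# Hypothetical weighted, per-flow interface selection for processing unit 160;
# interface names and weight values are illustrative only.
import hashlib

ROUTES_FOR_FLOW_1 = [
    ("local_interface", 8),     # x1: favor completing processing locally
    ("fabric_interface_1", 1),  # x2: toward remote processing unit 170
    ("fabric_interface_2", 1),  # x3: toward remote processing unit 190
]

def weighted_route(flow_key: bytes, routes):
    """Deterministic per-flow choice proportional to the configured weights."""
    total = sum(weight for _, weight in routes)
    point = int(hashlib.sha256(flow_key).hexdigest(), 16) % total
    for interface, weight in routes:
        if point < weight:
            return interface
        point -= weight

print(weighted_route(b"traffic-flow-1", ROUTES_FOR_FLOW_1))
```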
[0064] Each of the weights/costs in the routing weights/costs per processing unit 350 may be a value of a rational or irrational number. When weights are indicated, the weight corresponding to a local processing unit may be higher than the weight corresponding to a remote processing unit (e.g., the columns for the fabric interfaces indicating weights lower than that of the local route). In contrast, when costs are indicated, the opposite occurs, and the cost corresponding to a local processing unit may be lower than that of a remote processing unit. When a remote processing unit is favored over all other remote processing unit(s)/local processing unit, the weights/costs reflect such preference also.
[0065] The weight corresponding to a remote processing unit may be set to zero in one embodiment. This indicates that the corresponding remote processing unit is not to be used in normal operation. When the local processing unit becomes unavailable due to failure/congestion, however, packets of the traffic flow will be forwarded to the corresponding remote processing unit. The advantage of indicating a zero-weighted remote processing unit includes that the existing routing table may be used as it is when the local processing unit becomes unavailable, instead of waiting for the routing table to be updated, which may take time and resource to complete.
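The zero-weight behavior can be sketched as follows (assumed semantics, with hypothetical interface names): a zero-weighted remote entry is skipped in normal operation but is used, without recomputing the routing table, when the local processing unit becomes unavailable:

```python
# Hypothetical selection logic honoring a zero weight as "standby only".
def select_interface(routes, unavailable=frozenset()):
    # routes: list of (interface, weight); weight 0 means standby only.
    usable = [(i, w) for i, w in routes if i not in unavailable]
    preferred = [(i, w) for i, w in usable if w > 0]
    candidates = preferred if preferred else usable
    return max(candidates, key=lambda entry: entry[1])[0] if candidates else None

routes = [("local_interface", 10), ("fabric_interface_1", 0)]
print(select_interface(routes))                                   # local_interface
print(select_interface(routes, unavailable={"local_interface"}))  # fabric_interface_1
```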
[0066] In ECMP routing, modified ECMP routes may be built for each processing unit of a distributed VNF based on the routing weights/costs for that processing unit and/or other processing units of the distributed VNF. The modified ECMP routes are different from original ECMP routes for the distributed VNF in that the original ECMP routes are computed without adding the routing weights/costs. The ECMP routes may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and at the network element hosting the VNF1 142.
[0067] In one embodiment, the controller managing a processing unit of a distributed VNF provides both the original ECMP routes and the modified ECMP routes to the network element hosting the processing unit. The network element installs separate routing tables for the original ECMP routes and the modified ECMP routes, and the network element may determine which routing table to use for a given traffic flow at the processing unit.
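A minimal sketch of this per-flow table selection, with hypothetical table contents and flow identifiers, might look as follows:

```python
# Hypothetical per-flow choice between the original (even) ECMP routing table
# and the modified (weighted) routing table installed at a processing unit.
ORIGINAL_ECMP_TABLE = {"next_hops": ["processing_unit_160", "processing_unit_170"]}
MODIFIED_ECMP_TABLE = {"next_hops": ["processing_unit_160"]}  # weighted toward local

def routing_table_for(flow_id: str, flows_preferring_local: set) -> dict:
    return MODIFIED_ECMP_TABLE if flow_id in flows_preferring_local else ORIGINAL_ECMP_TABLE

table = routing_table_for("flow-1", flows_preferring_local={"flow-1"})
print(table["next_hops"])  # ['processing_unit_160'] -> the flow stays local
```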
[0068] The weighted execution of a distributed VNF may be implemented using LAG forwarding, in either Layer 2 or Layer 3. The link aggregation of links is based on the configuration of available links, including the links connecting ingress points, egress points, and the switch points. A controller such as the controller 120 or the one or more network elements hosting a distributed VNF may utilize LAG protocols such as Link Aggregation Control Protocol (LACP) or Distributed Relay Control Protocol (DRCP) to aggregate links to forward packets. [0069] As discussed above, the embodiments of the invention no longer consider a distributed VNF as a single hop, and remote processing units are considered internal hops within the distributed VNF. Additionally, embodiments of the invention differentiate the local processing unit (which processes packets entering a distributed VNF at the processing unit) and the remote processing unit (which processes packets entering a distributed VNF at a different processing unit). For LAG forwarding, the links local to a processing unit are referred to as local links; in contrast, the links connecting to a remote processing unit are referred to as remote links. For example, for the processing unit 160, links implementing the ingress point 164 and egress point 162 are local links and links implementing the switch point 166 are remote links.
[0070] In one embodiment, the network element hosting a distributed VNF may install different LAG tables for the different processing units of the distributed VNF. For example, in one embodiment, the LAG table for the processing unit 160 includes only local links of the processing unit 160, while the LAG table for the processing unit 170 includes only local links of the processing unit 170. When a processing unit of the distributed VNF contains no local links, a LAG table including all remote links of the distributed VNF is installed for the processing unit.
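A minimal sketch of such per-processing-unit LAG tables (link and unit names are hypothetical) is:

```python
# Hypothetical per-processing-unit LAG tables: each table contains only the
# unit's local links; a unit with no local links gets all remote links instead.
LINKS = {
    "processing_unit_160": {"local": ["link_ingress_164", "link_egress_162"],
                            "remote": ["link_switch_166"]},
    "processing_unit_170": {"local": ["link_ingress_174", "link_egress_172"],
                            "remote": ["link_switch_176"]},
}

def lag_table_for(unit: str) -> list:
    local = LINKS[unit]["local"]
    if local:
        return local
    # No local links: aggregate every remote link of the distributed VNF.
    return [link for entry in LINKS.values() for link in entry["remote"]]

print(lag_table_for("processing_unit_160"))  # ['link_ingress_164', 'link_egress_162']
```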
[0071] In one embodiment, when a LAG implementation supports link weighting, weights/costs are assigned to local and remote links of the processing units of a distributed VNF. Figure 4 illustrates LAG link weights/costs for a distributed VNF per one embodiment of the invention. The LAG link weights/costs per processing unit 450 are for LAG forwarding of the processing unit 160. The LAG link weights/costs may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and at the network element hosting the VNF1 142. The weights/costs may be utilized to generate a LAG table for the processing unit 160. Each processing unit may have a set of LAG link weights/costs that corresponds to its LAG forwarding.
[0072] For a traffic flow such as traffic flow 1, a weight/cost y1 is assigned to processing unit 160's local links. For the same traffic flow, a weight/cost y2 is assigned to processing unit 160's remote links to connect to the remote processing unit 170; and a weight/cost y3 is assigned to processing unit 160's remote links to connect to the remote processing unit 190.
[0073] The weights/costs y1, y2, and y3 are configured differently depending on the LAG implementation. The weights/costs may be configured to favor local links over remote links; the weights/costs may also be configured to favor particular remote links over other remote links/local links. Among the remote links, the ones consuming less physical resources and/or having less impact on packet QoS may be favored over the ones consuming more physical resources and/or having more impact on packet QoS in one embodiment. Each of the weights/costs in the LAG link weights/costs per processing unit 450 may be a value of a rational or irrational number. When weight is indicated, the weight corresponding to the local LAG links may be higher than the weights corresponding to remote LAG links. In contrast, when cost is indicated, the opposite occurs, and the cost corresponding to local LAG links may be lower than that of remote LAG links. When particular remote links are favored over all other remote links/local links, the weights/costs reflect such preference also.
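A minimal sketch of weighted LAG member selection, assuming y1, y2, and y3 are relative numerical weights (the values below are hypothetical), is:

```python
# Hypothetical weighted selection of a LAG member group for processing unit 160.
import random

LAG_WEIGHTS_FLOW_1 = {
    "local_links_160": 8,       # y1: favor the local links
    "remote_links_to_170": 1,   # y2
    "remote_links_to_190": 1,   # y3
}

def pick_lag_member_group(weights: dict) -> str:
    members = list(weights)
    return random.choices(members, weights=[weights[m] for m in members], k=1)[0]

# Roughly 80% of the traffic of the flow stays on the local links of unit 160.
print(pick_lag_member_group(LAG_WEIGHTS_FLOW_1))
```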
[0074] The weight corresponding to remote LAG links may be set to zero in one embodiment. This indicates that the corresponding remote LAG links are not to be used in normal operation. When the local links become unavailable due to failure/congestion, however, packets of the traffic flow will be forwarded to the corresponding remote LAG links.
[0075] In one embodiment, a modified LAG table may be built for each processing unit of a distributed VNF based on the LAG link weights/costs for that processing unit and/or other processing units of the distributed VNF. The modified LAG tables are different from original LAG tables for the distributed VNF in that the original LAG tables are computed without adding the link weights/costs. The LAG tables may be maintained at a controller such as the controller 120 that controls packet processing of the processing unit 160, for example, in the PLCE 124; they may be maintained at the network element hosting the VNF1 142; or they may be maintained at both the controller and at the network element hosting the VNF1 142.
[0076] In one embodiment, the controller managing a processing unit of a distributed VNF provides both the original LAG table and the modified LAG table to the network element hosting the processing unit. The network element installs separate LAG tables for the original LAG forwarding and the modified LAG forwarding, and the network device may determine which LAG table to use for a given traffic flow at the processing unit.
[0077] Note that each set of remote links consumes physical resources such as physical network interfaces (NIs) and the bandwidth associated with the physical network interfaces as discussed above. Yet the subscriber may not appreciate the additional cost of paying for the remote links; thus, in some embodiments, only one or a few sets of remote links are provided for one processing unit. Also note that in one embodiment, the weights/costs in Figures 3-4 are Boolean values, where the local interface and local LAG links have a value of true, and other interfaces and remote LAG links have a value of false; based on the Boolean values, a routing table and a LAG table may be generated for weighted load-balancing among multiple processing units of a distributed VNF, similar to the weighted load-balancing performed when the weights/costs are rational or irrational values as discussed above.
Flow Diagram
[0078] The operations in the flow diagram will be described with reference to the exemplary embodiments of the other figures. However, the operations of the flow diagram can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagram.
[0079] Figure 5 is a flow diagram illustrating weighted load-balancing among a plurality of processing units per one embodiment of the invention. Method 500 may be performed in a controller such as the controller 120, or in the controller and a network element such as the one implementing the distributed VNF1 142. Both controller and network element may be implemented in one or more network devices as discussed above.
[0080] When the method 500 is performed in a network element hosting a distributed VNF, the network element optionally receives a message from a controller (referred to as a controlling network device when the controller is implemented in a network device) at reference 502. The message indicates a request to install a packet processing rule for a VNF. The VNF is a distributed VNF that is implemented over a plurality of processing units including a first processing unit and a second processing unit.
[0081] At reference 504, the packet processing rule is installed for the first processing unit based on a determination that the VNF is implemented over the plurality of processing units. The packet processing rule for the VNF prefers, favors, or prioritizes completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow. For the traffic flow, the first processing unit is the processing unit by which the traffic flow enters the VNF, thus the first processing unit is the local processing unit while the second processing unit is the remote processing unit. The packet processing rule favors the local processing unit over the remote processing unit in one embodiment. In that embodiment, the local processing unit is the designated processing unit by default. In an alternative embodiment, a remote processing unit is the designated processing unit for the traffic flow. The remote processing unit may include processing resources that the local processing unit and the other processing units do not include (or may have more processing resources than the local processing unit and the other processing units), thus the remote processing unit is preferable. The preference may be indicated through a weight/cost associated with the local/remote processing unit.
[0082] In one embodiment, the packet processing rule indicates a hop between the first processing unit and the second processing unit, wherein the hop represents a cost in processing packets. In one embodiment, the packet processing rule indicates weights/costs of completing packet processing of the traffic flow. For example, one weight/cost may indicate the weight/cost of completing packet processing of the traffic flow at the processing unit by which the packets of the traffic flow are received for the VNF (the local processing unit), and another weight/cost may indicate the weight/cost of completing packet processing of the traffic flow at a different processing unit (the remote processing unit) when the packets of the traffic flow are received from another processing unit of the VNF.
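As a minimal sketch (field names and values are hypothetical, not taken from the claims), such a packet processing rule may be modeled as a per-flow record of weights for the designated and remote processing units:

```python
# Hypothetical model of a packet processing rule carrying per-flow
# weights/costs for the processing units of a distributed VNF.
from dataclasses import dataclass, field

@dataclass
class PacketProcessingRule:
    flow_id: str
    designated_unit: str
    weights: dict = field(default_factory=dict)  # processing unit -> weight

    def preferred_unit(self) -> str:
        # The unit with the highest weight (i.e., lowest cost) handles the flow.
        return max(self.weights, key=self.weights.get)

rule = PacketProcessingRule(
    flow_id="traffic-flow-1",
    designated_unit="first_processing_unit",
    weights={"first_processing_unit": 10, "second_processing_unit": 0},  # 0 = standby
)
print(rule.preferred_unit())  # first_processing_unit
```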
[0083] In one embodiment, the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 3 equal-cost multi-path (ECMP) routing is less costly than using the second processing unit in the ISO Layer 3 ECMP routing. For example, the packet processing rule may be an entry in a routing table for ECMP routing, and the entry in the routing table is generated based on the weights/costs associated with the local and remote processing units.
[0084] In one embodiment, a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 3 ECMP routing, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 3 ECMP routing. The numerical non-zero value indicates the first processing unit carrying more weight (or incurring less cost) in packet forwarding/routing than the second processing unit with the zero weight.
[0085] In one embodiment, the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 2 link aggregation group (LAG) is less costly than using the second processing unit in the ISO Layer 2 LAG. For example, the packet processing rule may be an entry in a LAG table for LAG forwarding, and the entry in the routing table is generated based on the weights/costs associated with the local and remote LAG links.
[0086] In one embodiment, a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 2 LAG, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 2 LAG. The numerical non-zero value indicates the first processing unit carrying more weight (or incurring less cost) in packet forwarding/routing than the second processing unit with the zero weight.
[0087] In one embodiment, the packet processing rule is installed at the network element such as the one implementing the distributed VNF1 142. In addition or in the alternative, the packet processing rule is installed at the controller such as the controller 120.
[0088] At reference 506, the first processing unit processes a packet in the traffic flow based on the packet processing rule. The first processing unit identifies the packet as in the traffic flow based on the packet/frame header of the packet. Since the packet processing rule prefers the first processing unit to the second processing unit, the first processing unit does not forward the packet to the second processing unit for processing, but completes packet processing of the packet in the first processing unit instead. In this embodiment, the weight corresponding to the second processing unit may be configured to be zero.
[0089] In one embodiment, a high percentage (but not all) of the packets of the traffic flow are processed by the first processing unit, and the rest are forwarded to the second processing unit to be processed. In this embodiment, the weight corresponding to the second processing unit may be configured to a value smaller than the weight corresponding to the first processing unit. Alternatively, a cost corresponding to the second processing unit may be configured to a value greater than the cost corresponding to the first processing unit.
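A short sketch of such a split (the 90/10 ratio is assumed purely for illustration) shows most packets of the flow completing processing at the first processing unit while the remainder crosses to the second:

```python
# Hypothetical weighted split of a flow's packets between the first (local)
# and second (remote) processing units; the 90% local share is illustrative.
import hashlib

def choose_unit(packet_id: bytes, local_share: float = 0.9) -> str:
    bucket = int(hashlib.sha256(packet_id).hexdigest(), 16) % 100
    return "first_processing_unit" if bucket < local_share * 100 else "second_processing_unit"

counts = {"first_processing_unit": 0, "second_processing_unit": 0}
for i in range(1000):
    counts[choose_unit(f"flow-1-packet-{i}".encode())] += 1
print(counts)  # roughly a 90/10 split across the 1000 packets
```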
[0090] At reference 508, the packet is sent out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit. The egress point is egress point 162, when the first processing unit is the processing unit 160. The egress point may be coupled to a gateway of the VNF such as the gateway 180. From the gateway 180, the packet is forwarded out of the VNF.
[0091] Each of the first and second processing units is one of a processor, a processor core, a VM, a container, a unikernel, and a network device in one embodiment.
[0092] In one embodiment, when the second processing unit is the designated processing unit for the traffic flow, the packet in the traffic flow will be forwarded to the second processing unit based on the packet processing rule. Once the processing of the packet is completed in the second processing unit, the packet will be sent out of the egress point of the second processing unit.
[0093] Through embodiments of the invention, a distributed VNF is no longer viewed as a single hop in packet forwarding in one embodiment. The resource consumption and impact on QoS to forward packets to a remote processing unit may be reflected using the packet processing rule. The packet processing rule may apply weights/costs to generate packet forwarding tables such as the routing tables and LAG tables discussed above. Through favoring a local processing unit over a remote processing unit or favoring one particular remote processing unit over the local processing unit and other remote processing unit(s), load-balancing in a distributed VNF is more efficient in embodiments of the invention. In some embodiments of the invention, zero weight is applied to values corresponding to a remote processing unit. Such application causes a distributed VNF to process packets using a local processing unit during normal operations, but process packets using remote processing unit(s) when the local processing unit is unavailable. Thus, zero-weighting allows a distributed VNF to mitigate failure/congestion cases without updating forwarding tables.
Network Environment Utilizing Embodiments of the Invention
[0094] Figure 6A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, per some embodiments of the invention. Figure 6A shows NDs 600A-H, and their connectivity by way of lines between 600A-600B, 600B-600C, 600C-600D, 600D-600E, 600E-600F, 600F-600G, and 600A-600G, as well as between 600H and each of 600A, 600C, 600D, and 600G. These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs 600A, 600E, and 600F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).
[0095] Two of the exemplary ND implementations in Figure 6A are: 1) a special-purpose network device 602 that uses custom application-specific integrated circuits (ASICs) and a special-purpose operating system (OS); and 2) a general-purpose network device 604 that uses common off-the-shelf (COTS) processors and a standard OS.
[0096] The special-purpose network device 602 includes networking hardware 610 comprising a set of one or more processor(s) or processor core(s) 612, forwarding resource(s) 614 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 616 (through which network connections are made, such as those shown by the connectivity between NDs 600A-H), as well as non-transitory machine-readable storage media 618 having stored therein networking software 620. During operation, the networking software 620 may be executed by the networking hardware 610 to instantiate a set of one or more networking software instance(s) 622. Each of the networking software instance(s) 622, and that part of the networking hardware 610 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 622), forms a separate virtual network element 630A-R. Each of the virtual network element(s) (VNEs) 630A-R includes a control communication and configuration module 632A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 634A-R, such that a given virtual network element (e.g., 630A) includes the control communication and configuration module (e.g., 632A), a set of one or more forwarding table(s) (e.g., 634A), and that portion of the networking hardware 610 that executes the virtual network element (e.g., 630A). The networking software 620 includes one or more VNFs such as VNF1 142, and each network element may include a VNF instance such as VNF Instance (VI) 621A and VI 621R.
[0097] The special-purpose network device 602 is often physically and/or logically considered to include: 1) a ND control plane 624 (sometimes referred to as a control plane) comprising the processor(s) or processor core(s) 612 that execute the control communication and configuration module(s) 632A-R; and 2) a ND forwarding plane 626 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 614 that utilize the forwarding table(s) 634A-R and the physical NIs 616. By way of example, where the ND is a router (or is implementing routing functionality), the ND control plane 624 (the processor(s) or processor core(s) 612 executing the control communication and configuration module(s) 632A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 634A-R, and the ND forwarding plane 626 is responsible for receiving that data on the physical NIs 616 and forwarding that data out the appropriate ones of the physical NIs 616 based on the forwarding table(s) 634A-R.
[0098] Figure 6B illustrates an exemplary way to implement the special-purpose network device 602 per some embodiments of the invention. Figure 6B shows a special-purpose network device including cards 638 (typically hot pluggable). While in some embodiments the cards 638 are of two types (one or more that operate as the ND forwarding plane 626 (sometimes called line cards), and one or more that operate to implement the ND control plane 624 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway))). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards are coupled together through one or more interconnect mechanisms illustrated as backplane 636 (e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards).
[0099] Returning to Figure 6A, the general-purpose network device 604 includes hardware 640 comprising a set of one or more processor(s) 642 (which are often COTS processors or processor cores) and physical NIs 646, as well as non-transitory machine-readable storage media 648 having stored therein software 650. During operation, the processor(s) 642 execute the software 650 to instantiate one or more sets of one or more applications 664A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in one such alternative embodiment the virtualization layer 654 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 662A-R called software containers that may each be used to execute one (or more) of the sets of applications 664A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. In another such alternative embodiment the virtualization layer 654 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 664A-R is run on top of a guest operating system within an instance 662A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor - the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes. In yet other alternative embodiments, one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 640, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 654, unikernels running within software containers represented by instances 662A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
[00100] The instantiation of the one or more sets of one or more applications 664A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 652. Each set of applications 664A-R, corresponding virtualization construct (e.g., instance 662A-R) if implemented, and that part of the hardware 640 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 660A-R. The networking software 650 includes one or more VNFs such as VNF1 142, and each network element of network elements 660A-660R may include a VNF instance. [00101] The virtual network element(s) 660A-R perform similar functionality to the virtual network element(s) 630A-R - e.g., similar to the control communication and configuration module(s) 632A and forwarding table(s) 634A (this virtualization of the hardware 640 is sometimes referred to as network function virtualization (NFV)). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE). While embodiments of the invention are illustrated with each instance 662A-R corresponding to one VNE 660A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 662A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.
[00102] In certain embodiments, the virtualization layer 654 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 662A-R and the physical NI(s) 646, as well as optionally between the instances 662A-R; in addition, this virtual switch may enforce network isolation between the VNEs 660A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
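By way of a non-limiting illustration that is not part of the original disclosure, the following sketch (with hypothetical class and port names) shows one way a virtual switch of the kind described in paragraph [00102] might forward frames between instances and physical NIs while honoring VLAN-based isolation between VNEs.

```python
# Illustrative sketch (not part of the disclosure): a toy virtual switch that
# forwards frames between instance ports and physical NIs while enforcing
# VLAN-based isolation between VNEs, as described in paragraph [00102].

class ToyVirtualSwitch:
    def __init__(self):
        self.port_vlan = {}      # port name -> VLAN ID (isolation domain)
        self.mac_table = {}      # (VLAN ID, MAC) -> port name

    def add_port(self, port, vlan_id):
        self.port_vlan[port] = vlan_id

    def forward(self, in_port, src_mac, dst_mac):
        vlan = self.port_vlan[in_port]
        self.mac_table[(vlan, src_mac)] = in_port          # learn source
        out_port = self.mac_table.get((vlan, dst_mac))
        if out_port is not None:
            return [out_port]                              # known unicast
        # Unknown destination: flood only within the same VLAN, never across
        # VNEs that by policy are not permitted to communicate with each other.
        return [p for p, v in self.port_vlan.items() if v == vlan and p != in_port]

if __name__ == "__main__":
    vs = ToyVirtualSwitch()
    vs.add_port("instance-662A", 10)
    vs.add_port("instance-662B", 10)
    vs.add_port("physical-NI-646", 20)
    print(vs.forward("instance-662A", "aa:aa", "bb:bb"))   # floods within VLAN 10 only
```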
[00103] The third exemplary ND implementation in Figure 6A is a hybrid network device 606, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 602) could provide for para-virtualization to the networking hardware present in the hybrid network device 606.
[00104] Regardless of the above exemplary implementations of an ND, when a single one of the multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 630A-R, VNEs 660A-R, and those in the hybrid network device 606) receives data on the physical NIs (e.g., 616, 646) and forwards that data out to the appropriate ones of the physical NIs (e.g., 616, 646). For example, a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.
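As a non-limiting illustration (the table contents and field names below are invented for illustration and are not the claimed method), the header-based forwarding described in paragraph [00104] can be sketched as a longest-prefix lookup on the destination address of the fields listed above.

```python
# Illustrative sketch (assumptions, not the claimed method): forwarding an IP
# packet on the basis of header information, as outlined in paragraph [00104].
import ipaddress
from typing import NamedTuple

class HeaderInfo(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int      # protocol port, not a physical port of the ND
    dst_port: int
    protocol: str      # e.g., "UDP" or "TCP"
    dscp: int

# Hypothetical forwarding table: destination prefix -> (next hop, egress NI).
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): ("10.0.0.1", "NI-616"),
    ipaddress.ip_network("10.1.0.0/16"): ("10.1.0.1", "NI-646"),
}

def select_next_hop(hdr: HeaderInfo):
    """Longest-prefix match on the destination IP address."""
    dst = ipaddress.ip_address(hdr.dst_ip)
    candidates = [(net, hop) for net, hop in FORWARDING_TABLE.items() if dst in net]
    if not candidates:
        return None
    return max(candidates, key=lambda item: item[0].prefixlen)[1]

print(select_next_hop(HeaderInfo("192.0.2.1", "10.1.2.3", 40000, 443, "TCP", 0)))
# -> ('10.1.0.1', 'NI-646'): the /16 entry wins over the /8 entry
```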
[00105] Figure 6C illustrates various exemplary ways in which VNEs may be coupled per some embodiments of the invention. Figure 6C shows VNEs 670A.1-670A.P (and optionally VNEs 670A.Q-670A.R) implemented in ND 600A and VNE 670H.1 in ND 600H. In Figure 6C, VNEs 670A.1-P are separate from each other in the sense that they can receive packets from outside ND 600A and forward packets outside of ND 600A; VNE 670A.1 is coupled with VNE 670H.1, and thus they communicate packets between their respective NDs; VNE 670A.2-670A.3 may optionally forward packets between themselves without forwarding them outside of the ND 600A; and VNE 670A.P may optionally be the first in a chain of VNEs that includes VNE 670A.Q followed by VNE 670A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 6C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).
[00106] The NDs of Figure 6A, for example, may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services. Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs. For instance, end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers. However, through compute and storage virtualization, one or more of the electronic devices operating as the NDs in Figure 6A may also host one or more such servers (e.g., in the case of the general-purpose network device 604, one or more of the software instances 662A-R may operate as servers; the same would be true for the hybrid network device 606; in the case of the special-purpose network device 602, one or more such servers could also be run on a virtualization layer executed by the processor(s) or processor core(s) 612); in which case the servers are said to be co-located with the VNEs of that ND.
[00107] A virtual network is a logical abstraction of a physical network (such as that in Figure 6A) that provides network services (e.g., L2 and/or L3 services). A virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., layer 2 (L2, data link layer) and/or layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
[00108] A network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network. A virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND). A virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
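As a non-limiting illustration (the data structures and names below are assumptions, not part of the disclosure), the NVE/VNI/VAP relationships described in paragraph [00108] might be modeled as follows.

```python
# Illustrative data model only (names are not from the disclosure): the
# relationship between an NVE, the VNIs instantiated on it, and the VAPs that
# attach external systems, as described in paragraph [00108].
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VirtualAccessPoint:
    logical_interface_id: str     # e.g., a VLAN ID used to identify the VAP

@dataclass
class VirtualNetworkInstance:
    name: str
    vaps: List[VirtualAccessPoint] = field(default_factory=list)

@dataclass
class NetworkVirtualizationEdge:
    underlay_address: str         # used to tunnel frames to and from other NVEs
    vnis: Dict[str, VirtualNetworkInstance] = field(default_factory=dict)

    def attach(self, vni_name: str, vap: VirtualAccessPoint):
        self.vnis.setdefault(vni_name, VirtualNetworkInstance(vni_name)).vaps.append(vap)

nve = NetworkVirtualizationEdge(underlay_address="192.0.2.10")
nve.attach("tenant-blue", VirtualAccessPoint(logical_interface_id="VLAN 100"))
nve.attach("tenant-red", VirtualAccessPoint(logical_interface_id="VLAN 200"))
print(len(nve.vnis))   # 2 VNIs instantiated on one NVE
```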
[00109] Examples of network services include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IP VPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network). Network services may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., fault detection and processing).
[00110] Figure 6D illustrates a network with a single network element on each of the NDs of Figure 6A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), per some embodiments of the invention. Specifically, Figure 6D illustrates network elements (NEs) 670A-H with the same connectivity as the NDs 600A-H of Figure 6A.
[00111] Figure 6D illustrates that the distributed approach 672 distributes responsibility for generating the reachability and forwarding information across the NEs 670A-H; in other words, the process of neighbor discovery and topology discovery is distributed.
[00112] For example, where the special-purpose network device 602 is used, the control communication and configuration module(s) 632A-R of the ND control plane 624 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching
(GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics. Thus, the NEs 670A-H (e.g., the processor(s) 612 executing the control communication and configuration module(s) 632A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by
distributively determining the reachability within the network and calculating their respective forwarding information. Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 624. The ND control plane 624 programs the ND forwarding plane 626 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 624 programs the adjacency and route information into one or more forwarding table(s) 634A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 626. For layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 602, the same distributed approach 672 can be implemented on the general-purpose network device 604 and the hybrid network device 606.
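A simplified, non-limiting sketch of the RIB-to-FIB programming described in paragraph [00112] follows; the structures, prefixes, and metrics shown are assumptions for illustration only.

```python
# Illustrative sketch under simplifying assumptions: the ND control plane
# selecting routes (e.g., by metric) from a RIB-like structure and programming
# the result into a FIB-like structure on the ND forwarding plane ([00112]).

# RIB: prefix -> list of (next hop, metric) learned from routing protocols.
rib = {
    "10.0.0.0/8": [("203.0.113.1", 20), ("203.0.113.2", 10)],
    "192.0.2.0/24": [("203.0.113.3", 5)],
}

def program_fib(rib):
    """Pick the lowest-metric route per prefix and install it into the FIB."""
    fib = {}
    for prefix, candidates in rib.items():
        next_hop, _metric = min(candidates, key=lambda c: c[1])
        fib[prefix] = next_hop
    return fib

fib = program_fib(rib)
print(fib)   # {'10.0.0.0/8': '203.0.113.2', '192.0.2.0/24': '203.0.113.3'}
```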
[00113] Figure 6D illustrates a centralized approach 674 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. The illustrated centralized approach 674 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 676 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized. The centralized control plane 676 has a south bound interface 682 with a data plane 680 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 670A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes). The centralized control plane 676 includes a network controller 678, which includes a centralized reachability and forwarding information module 679 that determines the reachability within the network and distributes the forwarding information to the NEs 670A-H of the data plane 680 over the south bound interface 682 (which may use the OpenFlow protocol). Thus, the network intelligence is centralized in the centralized control plane 676 executing on electronic devices that are typically separate from the NDs.
[00114] For example, where the special-purpose network device 602 is used in the data plane 680, each of the control communication and configuration module(s) 632A-R of the ND control plane 624 typically include a control agent that provides the VNE side of the south bound interface 682. In this case, the ND control plane 624 (the processor(s) 612 executing the control communication and configuration module(s) 632A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 676 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 679 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 632A-R, in addition to communicating with the centralized control plane 676, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 674, but may also be considered a hybrid approach). [00115] While the above example uses the special-purpose network device 602, the same centralized approach 674 can be implemented with the general-purpose network device 604 (e.g., each of the VNE 660A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 676 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 679; it should be understood that in some embodiments of the invention, the VNEs 660A-R, in addition to communicating with the centralized control plane 676, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 606. In fact, the use of SDN techniques can enhance the NFV techniques typically used in the general-purpose network device 604 or hybrid network device 606 implementations as NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.
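The following non-limiting sketch illustrates the centralized distribution of forwarding information described in paragraphs [00113]-[00115]; the classes and interfaces shown are hypothetical stand-ins and are not an OpenFlow implementation.

```python
# Illustrative sketch only (the interfaces here are hypothetical, not an actual
# OpenFlow API): a centralized reachability and forwarding information module
# computing per-NE forwarding entries and pushing them over a south bound
# interface, in the spirit of paragraphs [00113]-[00115].

class SouthboundInterface:
    """Stand-in for the channel between the centralized control plane and the NEs."""
    def push(self, ne_id, entries):
        print(f"programming {ne_id} with {entries}")

class CentralizedForwardingModule:
    def __init__(self, southbound):
        self.southbound = southbound
        self.topology = {}          # ne_id -> set of neighbor ne_ids

    def add_link(self, a, b):
        self.topology.setdefault(a, set()).add(b)
        self.topology.setdefault(b, set()).add(a)

    def distribute(self, entries_by_ne):
        # For brevity, "reachability" here is a canned mapping of destination
        # prefix -> next-hop NE assumed to have been computed elsewhere.
        for ne_id, entries in entries_by_ne.items():
            self.southbound.push(ne_id, entries)

ctrl = CentralizedForwardingModule(SouthboundInterface())
ctrl.add_link("NE-670A", "NE-670B")
ctrl.distribute({"NE-670A": {"10.0.0.0/8": "NE-670B"}})
```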
[00116] Figure 6D also shows that the centralized control plane 676 has a north bound interface 684 to an application layer 686, in which resides application(s) 688. The centralized control plane 676 has the ability to form virtual networks 692 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 670A-H of the data plane 680 being the underlay network)) for the application(s) 688. Thus, the centralized control plane 676 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).
[00117] While Figure 6D shows the distributed approach 672 separate from the centralized approach 674, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention. For example: 1) embodiments may generally use the centralized approach (SDN) 674, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree. Such embodiments are generally considered to fall under the centralized approach 674, but may also be considered a hybrid approach. [00118] The VNF path/link computation element (PLCE) 124 discussed above may be a part of the network controller 678, which may perform functions similar to those of the controller 120 in one embodiment.
[00119] While Figure 6D illustrates the simple case where each of the NDs 600A-H implements a single NE 670A-H, the network control approaches described with reference to Figure 6D also work for networks where one or more of the NDs 600A-H implement multiple VNEs (e.g., VNEs 630A-R, VNEs 660A-R, those in the hybrid network device 606).
Alternatively or in addition, the network controller 678 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 678 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 692 (all in the same one of the virtual network(s) 692, each in different ones of the virtual network(s) 692, or some combination). For example, the network controller 678 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 676 to present different VNEs in the virtual network(s) 692 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).
[00120] On the other hand, Figures 6E and 6F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 678 may present as part of different ones of the virtual networks 692. Figure 6E illustrates the simple case where each of the NDs 600A-H implements a single NE 670A-H (see Figure 6D), but the centralized control plane 676 has abstracted multiple of the NEs in different NDs (the NEs 670A-C and G-H) into (to represent) a single NE 670I in one of the virtual network(s) 692 of Figure 6D, per some embodiments of the invention. Figure 6E shows that in this virtual network, the NE 670I is coupled to NE 670D and 670F, which are both still coupled to NE 670E.
[00121] Figure 6F illustrates a case where multiple VNEs (VNE 670A.1 and VNE 670H.1) are implemented on different NDs (ND 600A and ND 600H) and are coupled to each other, and where the centralized control plane 676 has abstracted these multiple VNEs such that they appear as a single VNE 670T within one of the virtual networks 692 of Figure 6D, per some embodiments of the invention. Thus, the abstraction of a NE or VNE can span multiple NDs.
[00122] While some embodiments of the invention implement the centralized control plane 676 as a single entity (e.g., a single instance of software running on a single electronic device), alternative embodiments may spread the functionality across multiple entities for redundancy and/or scalability purposes (e.g., multiple instances of software running on different electronic devices). [00123] Similar to the network device implementations, the electronic device(s) running the centralized control plane 676, and thus the network controller 678 including the centralized reachability and forwarding information module 679, may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device). These electronic device(s) would similarly include processor(s), a set of one or more physical NIs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software. For instance, Figure 7 illustrates a general-purpose control plane device 704 including hardware 740 comprising a set of one or more processor(s) 742 (which are often COTS processors) and physical NIs 746, as well as non-transitory machine-readable storage media 748 having stored therein centralized control plane (CCP) software 750. In one embodiment, the CCP software 750 includes the VNF PLCE 124 discussed above.
[00124] In embodiments that use compute virtualization, the processor(s) 742 typically execute software to instantiate a virtualization layer 754 (e.g., in one embodiment the virtualization layer 754 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 762A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 754 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 762A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application, and the unikernel can run directly on hardware 740, directly on a hypervisor represented by virtualization layer 754 (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container represented by one of instances 762A-R). Again, in embodiments where compute virtualization is used, during operation an instance of the CCP software 750 (illustrated as CCP instance 776A) is executed (e.g., within the instance 762A) on the virtualization layer 754. In embodiments where compute virtualization is not used, the CCP instance 776A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general-purpose control plane device 704. The instantiation of the CCP instance 776A, as well as the virtualization layer 754 and instances 762A-R if implemented, are collectively referred to as software instance(s) 752. [00125] In some embodiments, the CCP instance 776A includes a network controller instance 778. The network controller instance 778 includes a centralized reachability and forwarding information module instance 779 (which is a middleware layer providing the context of the network controller 678 to the operating system and communicating with the various NEs), and a CCP application layer 780 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces). At a more abstract level, this CCP application layer 780 within the centralized control plane 676 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view. In one embodiment, the CCP application layer 780 includes a VNF PLCE instance 782.
[00126] The centralized control plane 676 transmits relevant messages to the data plane 680 based on CCP application layer 780 calculations and middleware layer mapping for each flow. A flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by the destination IP address for example; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers. Different NDs/NEs/VNEs of the data plane 680 may receive different messages, and thus different forwarding information. The data plane 680 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.
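As a non-limiting illustration of the flow definition in paragraph [00126] (the field names and patterns are assumptions), a flow can be expressed as a set of header-field match criteria in which unspecified fields act as wildcards.

```python
# Illustrative sketch (not from the disclosure): a flow defined as the set of
# packets whose headers match a given pattern, where unspecified fields act as
# wildcards, echoing the flow definition in paragraph [00126].

def matches(flow_pattern: dict, packet_headers: dict) -> bool:
    """A packet belongs to the flow if every specified field matches."""
    return all(packet_headers.get(field) == value for field, value in flow_pattern.items())

# Traditional IP forwarding as a degenerate flow: only the destination matters.
dest_only_flow = {"dst_ip": "198.51.100.7"}
# A richer flow definition over several header fields (10 or more is possible).
rich_flow = {"dst_ip": "198.51.100.7", "protocol": "TCP", "dst_port": 443}

pkt = {"src_ip": "192.0.2.1", "dst_ip": "198.51.100.7", "protocol": "TCP",
       "src_port": 51515, "dst_port": 443}
print(matches(dest_only_flow, pkt), matches(rich_flow, pkt))   # True True
```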
[00127] Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets. The model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
[00128] Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many per a defined scheme (e.g., selecting a first forwarding table entry that is matched). Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet. Thus, a forwarding table entry for IPv4/IPv6 packets with a particular transmission control protocol (TCP) destination port could contain an action specifying that these packets should be dropped.
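The classification and action model described in paragraph [00128] may be sketched as follows; the table contents, priorities, and action strings are illustrative assumptions rather than a normative format.

```python
# Illustrative sketch with assumed data structures: packet classification that
# selects one forwarding table entry among possibly many matches (here, the
# highest-priority match) and returns its actions ([00128]).

FORWARDING_TABLE = [
    # (priority, match criteria, actions) -- higher priority checked first.
    (200, {"protocol": "TCP", "dst_port": 23}, ["drop"]),
    (100, {"dst_ip": "198.51.100.7"}, ["push_header:vlan 10", "output:NI-646"]),
    (0,   {}, ["flood"]),                      # table-wide wildcard entry
]

def classify(packet_headers: dict):
    for _prio, criteria, actions in sorted(FORWARDING_TABLE, key=lambda e: -e[0]):
        if all(packet_headers.get(k) == v for k, v in criteria.items()):
            return actions
    return None

print(classify({"dst_ip": "198.51.100.7", "protocol": "TCP", "dst_port": 23}))  # ['drop']
print(classify({"dst_ip": "198.51.100.7", "protocol": "UDP", "dst_port": 53}))  # VLAN push + output
```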
[00129] Making forwarding decisions and performing actions occurs, based upon the forwarding table entry identified during packet classification, by executing the set of actions identified in the matched forwarding table entry on the packet.
[00130] However, when an unknown packet (for example, a "missed packet" or a "match- miss" as used in OpenFlow parlance) arrives at the data plane 680, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 676. The centralized control plane 676 will then program forwarding table entries into the data plane 680 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 680 by the centralized control plane 676, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.
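A non-limiting sketch of the match-miss handling described in paragraph [00130] follows; the controller behavior and function names are hypothetical simplifications, not OpenFlow calls.

```python
# Illustrative sketch only (function names are hypothetical, not OpenFlow calls):
# handling of a "match-miss" by punting the packet to the centralized control
# plane, which then programs a forwarding table entry so that subsequent packets
# of the same flow are handled in the data plane, per paragraph [00130].

forwarding_table = {}          # flow key -> actions programmed by the controller

def controller_handle_miss(flow_key):
    """Stand-in for the centralized control plane deciding how to treat the flow."""
    return ["output:NI-646"]

def data_plane_receive(flow_key):
    actions = forwarding_table.get(flow_key)
    if actions is None:
        # Unknown packet: forward (a subset of) it to the centralized control plane,
        # then install the returned entry for packets belonging to the same flow.
        actions = controller_handle_miss(flow_key)
        forwarding_table[flow_key] = actions
    return actions

print(data_plane_receive(("198.51.100.7", "TCP", 443)))   # miss -> controller -> programmed
print(data_plane_receive(("198.51.100.7", "TCP", 443)))   # now matched in the data plane
```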
[00131] A network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI. A virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface). A NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address). A loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.
[00132] Next hop selection by the routing system for a given destination may resolve to one path (that is, a routing protocol may generate one next hop on a shortest path); but if the routing system determines there are multiple viable next hops (that is, the routing protocol generated forwarding solution offers more than one next hop on a shortest path - multiple equal cost next hops), some additional criteria are used - for instance, in a connectionless network, Equal Cost Multi Path (ECMP) (also known as Equal Cost Multi Pathing, multipath forwarding and IP multipath) may be used (e.g., typical implementations use as the criteria particular header fields to ensure that the packets of a particular packet flow are always forwarded on the same next hop to preserve packet flow ordering). For purposes of multipath forwarding, a packet flow is defined as a set of packets that share an ordering constraint. As an example, the set of packets in a particular TCP transfer sequence needs to arrive in order, or else the TCP logic will interpret the out of order delivery as congestion and slow the TCP transfer rate down.
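As a non-limiting illustration of paragraph [00132] (a simplification that is separate from the weighting scheme recited in the claims), hash-based ECMP selection over particular header fields can keep all packets of a flow on the same next hop.

```python
# Illustrative sketch (a simplification, not the claimed weighting scheme):
# ECMP next-hop selection that hashes particular header fields so that all
# packets of a packet flow take the same next hop and ordering is preserved,
# as discussed in paragraph [00132].
import hashlib

def ecmp_next_hop(next_hops, src_ip, dst_ip, protocol, src_port, dst_port):
    """Deterministically map a flow's 5-tuple onto one of the equal-cost next hops."""
    key = f"{src_ip}|{dst_ip}|{protocol}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

hops = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]
# Every packet of this TCP transfer hashes to the same next hop, so the
# receiver does not see reordering that TCP would misread as congestion.
print(ecmp_next_hop(hops, "192.0.2.1", "198.51.100.7", "TCP", 51515, 443))
```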
[00133] A Layer 3 (L3) Link Aggregation (LAG) link is a link directly connecting two NDs with multiple IP-addressed link paths (each link path is assigned a different IP address), and a load distribution decision across these different link paths is performed at the ND forwarding plane; in which case, a load distribution decision is made between the link paths.
[00134] Certain NDs (e.g., certain edge NDs) internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits. A subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session. Thus, a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects. Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or
Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM. A subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, a dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking). For example, the point-to-point protocol (PPP) is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record. When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided. The use of DHCP and CLIPS on the ND captures the MAC addresses and uses these addresses to distinguish subscribers and access their subscriber records.
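The subscriber circuit lifecycle described in paragraph [00134] may be sketched as follows; the session keys and identifiers are assumptions for illustration and not part of the disclosure.

```python
# Illustrative sketch (all names are assumptions): allocating a subscriber
# circuit when a subscriber connects and deallocating it on disconnect, with
# the circuit uniquely identifying the subscriber session within the ND ([00134]).
import itertools

class SubscriberCircuitTable:
    def __init__(self):
        self._ids = itertools.count(1)
        self._by_session = {}      # session key (e.g., MAC or username) -> circuit id

    def connect(self, session_key):
        circuit_id = next(self._ids)
        self._by_session[session_key] = circuit_id
        return circuit_id

    def disconnect(self, session_key):
        return self._by_session.pop(session_key, None)

table = SubscriberCircuitTable()
cid = table.connect("00:11:22:33:44:55")    # e.g., a MAC address learned via DHCP/CLIPS
print(cid, table.disconnect("00:11:22:33:44:55"))
```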
[00135] A virtual circuit (VC), synonymous with virtual connection and virtual channel, is a connection oriented communication service that is delivered by means of packet mode communication. Virtual circuit communication resembles circuit switching, since both are connection oriented, meaning that in both cases data is delivered in correct order, and signaling overhead is required during a connection establishment phase. Virtual circuits may exist at different layers. For example, at layer 4, a connection oriented transport layer datalink protocol such as Transmission Control Protocol (TCP) may rely on a connectionless packet switching network layer protocol such as IP, where different packets may be routed over different paths, and thus be delivered out of order. Where a reliable virtual circuit is established with TCP on top of the underlying unreliable and connectionless IP protocol, the virtual circuit is identified by the source and destination network socket address pair, i.e. the sender and receiver IP address and port number. However, a virtual circuit is possible since TCP includes segment numbering and reordering on the receiver side to prevent out-of-order delivery. Virtual circuits are also possible at Layer 3 (network layer) and Layer 2 (datalink layer); such virtual circuit protocols are based on connection oriented packet switching, meaning that data is always delivered along the same network path, i.e. through the same NEs/VNEs. In such protocols, the packets are not routed individually and complete addressing information is not provided in the header of each data packet; only a small virtual channel identifier (VCI) is required in each packet; and routing information is transferred to the NEs/VNEs during the connection establishment phase;
switching only involves looking up the virtual channel identifier in a table rather than analyzing a complete address. Examples of network layer and datalink layer virtual circuit protocols, where data always is delivered over the same path: X.25, where the VC is identified by a virtual channel identifier (VCI); Frame relay, where the VC is identified by a VCI; Asynchronous Transfer Mode (ATM), where the circuit is identified by a virtual path identifier (VPI) and virtual channel identifier (VCI) pair; General Packet Radio Service (GPRS); and Multiprotocol label switching (MPLS), which can be used for IP over virtual circuits (Each circuit is identified by a label).
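A non-limiting sketch of the VCI-based switching described in paragraphs [00135] follows; it is a toy model rather than an implementation of any specific protocol.

```python
# Illustrative sketch (a toy model, not any specific protocol implementation):
# connection oriented packet switching in which forwarding only involves looking
# up a small virtual channel identifier installed during connection establishment,
# rather than analyzing a complete address ([00135]).

class VirtualCircuitSwitch:
    def __init__(self):
        # (ingress port, incoming VCI) -> (egress port, outgoing VCI),
        # populated when the virtual circuit is established.
        self.vc_table = {}

    def establish(self, in_port, in_vci, out_port, out_vci):
        self.vc_table[(in_port, in_vci)] = (out_port, out_vci)

    def switch(self, in_port, in_vci):
        # Every data packet of the circuit follows the same path through the
        # same NEs/VNEs; only the VCI is carried in the packet header.
        return self.vc_table[(in_port, in_vci)]

sw = VirtualCircuitSwitch()
sw.establish(in_port=1, in_vci=42, out_port=3, out_vci=77)
print(sw.switch(1, 42))   # (3, 77)
```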
[00136] Certain NDs (e.g., certain edge NDs) use a hierarchy of circuits. The leaf nodes of the hierarchy of circuits are subscriber circuits. The subscriber circuits have parent circuits in the hierarchy that typically represent aggregations of multiple subscriber circuits, and thus the network segments and elements used to provide access network connectivity of those end user devices to the ND. These parent circuits may represent physical or logical aggregations of subscriber circuits (e.g., a virtual local area network (VLAN), a permanent virtual circuit (PVC) (e.g., for Asynchronous Transfer Mode (ATM)), a circuit-group, a channel, a pseudo-wire, a physical NI of the ND, and a link aggregation group). A circuit-group is a virtual construct that allows various sets of circuits to be grouped together for configuration purposes, for example aggregate rate control. A pseudo-wire is an emulation of a layer 2 point-to-point connection-oriented service. A link aggregation group is a virtual construct that merges multiple physical NIs for purposes of bandwidth aggregation and redundancy. Thus, the parent circuits physically or logically encapsulate the subscriber circuits.
[00137] Each VNE (e.g., a virtual router, a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable. For example, in the case of multiple virtual routers, each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s). Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.
[00138] Within certain NDs, "interfaces" that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing). The subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND. As used herein, a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.
[00139] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.

Claims

What is claimed is:
1. A method implemented in an electronic device for packet processing, the method comprising: installing (504) a packet processing rule for a first processing unit based on a determination that a virtual network function (VNF) is implemented over a plurality of processing units including the first processing unit and a second processing unit, wherein the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow;
processing (506) a packet in the traffic flow by the first processing unit based on the packet processing rule; and
sending (508) the packet out of an egress point of the first processing unit for the VNF
responsive to completion of the processing in the first processing unit.
2. The method of claim 1, wherein the packet processing rule indicates a hop between the first processing unit and the second processing unit, wherein the hop represents a cost in processing the packet.
3. The method of claim 1 or 2, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 3 equal-cost multi-path (ECMP) routing is less costly than using the second processing unit in the ISO Layer 3 ECMP routing.
4. The method of claim 3, wherein a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 3 ECMP routing, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 3 ECMP routing.
5. The method of claim 1 or 2, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 2 link aggregation group (LAG) is less costly than using the second processing unit in the ISO Layer 2 LAG.
6. The method of claim 5, wherein a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 2 LAG, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 2 LAG.
7. The method of claim 1 or 2, further comprising: receiving (502) a message from a controlling network device, the message indicating a request to install the packet processing rule for the VNF.
8. The method of claim 1 or 2, wherein the egress point is coupled to a gateway to forward the packet to another VNF.
9. The method of claim 1 or 2, wherein each of the first and second processing units is one of a processor, a processor core, a virtual machine (VM), a software container, and a unikernel.
10. An electronic device to process packets, the electronic device comprising:
a non-transitory machine-readable medium (620, 650) to store instructions; and
a processor (612, 642) coupled with the non-transitory machine-readable medium (620, 650) to process the stored instructions to:
install a packet processing rule for a first processing unit based on a determination that a virtual network function (VNF) is implemented over a plurality of processing units including the first processing unit and a second processing unit, wherein the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow,
process a packet in the traffic flow by the first processing unit based on the packet
processing rule, and
send the packet out of an egress point of the first processing unit for the VNF responsive to completion of the processing in the first processing unit.
11. The electronic device of claim 10, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 3 equal-cost multi-path (ECMP) routing is less costly than using the second processing unit in the ISO Layer 3 ECMP routing.
12. The electronic device of claim 11, wherein a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 3 ECMP routing, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 3 ECMP routing.
13. The electronic device of claim 10, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 2 link aggregation group (LAG) is less costly than using the second processing unit in the ISO Layer 2 LAG.
14. The electronic device of claim 13, wherein a numerical non-zero value represents a first weight of using the first processing unit in the ISO Layer 2 LAG, and a value of zero represents a second weight of using the second processing unit in the ISO Layer 2 LAG.
15. The electronic device of claim 10, wherein each of the first and second processing units is one of a processor, a processor core, a virtual machine (VM), a software container, and a unikernel.
16. A non-transitory machine-readable storage medium that provides instructions, which when executed by a processor of an electronic device, cause the processor to perform operations comprising:
installing (504) a packet processing rule for a first processing unit based on a determination that a virtual network function (VNF) is implemented over a plurality of processing units including the first processing unit and a second processing unit, wherein the packet processing rule for the VNF prefers completing packet processing of a traffic flow in the first processing unit over forwarding the traffic flow to the second processing unit when the first processing unit is a designated processing unit for the traffic flow;
processing (506) a packet in the traffic flow by the first processing unit based on the packet processing rule; and
sending (508) the packet out of an egress point of the first processing unit for the VNF
responsive to completion of the processing in the first processing unit.
17. The non-transitory machine-readable storage medium of claim 16, wherein the packet processing rule indicates a hop between the first processing unit and the second processing unit, wherein the hop represents a cost in processing the packet.
18. The non-transitory machine-readable storage medium of claim 16 or 17, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 3 equal-cost multi-path (ECMP) routing is less costly than using the second processing unit in the ISO Layer 3 ECMP routing.
19. The non-transitory machine-readable storage medium of claim 16 or 17, wherein the packet processing rule indicates that using the first processing unit in an International Standards Organization (ISO) Layer 2 link aggregation group (LAG) is less costly than using the second processing unit in the ISO Layer 2 LAG.
20. The non-transitory machine-readable storage medium of claim 16 or 17, wherein the egress point is coupled to a gateway to forward the packet to another VNF.
PCT/IB2017/053217 2017-05-31 2017-05-31 Method and system for packet processing of a distributed virtual network function (vnf) WO2018220426A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2017/053217 WO2018220426A1 (en) 2017-05-31 2017-05-31 Method and system for packet processing of a distributed virtual network function (vnf)

Publications (1)

Publication Number Publication Date
WO2018220426A1 true WO2018220426A1 (en) 2018-12-06

Family

ID=59062057

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070171911A1 (en) * 2006-01-25 2007-07-26 Yoon-Jin Ku Routing system and method for managing rule entry thereof
US20070297400A1 (en) * 2006-06-26 2007-12-27 Allan Cameron Port redirector for network communication stack
US20100017846A1 (en) * 2007-03-23 2010-01-21 Huawei Technologies Co., Ltd. Service processing method and system, and policy control and charging rules function

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11494212B2 (en) * 2018-09-27 2022-11-08 Intel Corporation Technologies for adaptive platform resource assignment
US11171863B2 (en) 2019-08-12 2021-11-09 Hewlett Packard Enterprise Development Lp System and method for lag performance improvements
US20220239690A1 (en) * 2021-01-27 2022-07-28 EMC IP Holding Company LLC Ai/ml approach for ddos prevention on 5g cbrs networks
US12041077B2 (en) * 2021-01-27 2024-07-16 EMC IP Holding Company LLC Ai/ml approach for DDOS prevention on 5G CBRS networks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17730280; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17730280; Country of ref document: EP; Kind code of ref document: A1)