US11271775B2 - Technologies for hairpinning network traffic - Google Patents

Technologies for hairpinning network traffic

Info

Publication number
US11271775B2
US11271775B2 · US16/023,771 · US201816023771A
Authority
US
United States
Prior art keywords
agent
network packet
vepa
virtual machine
received network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/023,771
Other languages
English (en)
Other versions
US20190052480A1 (en)
Inventor
Donald Skidmore
Joshua Hay
Anjali Singhai Jain
Parthasarathy Sarangam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US16/023,771 priority Critical patent/US11271775B2/en
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JAIN, ANJALI SINGHAI, SARANGAM, PARTHASARATHY, SKIDMORE, DONALD, HAY, JOSHUA
Publication of US20190052480A1 publication Critical patent/US20190052480A1/en
Priority to EP19176608.8A priority patent/EP3588869B1/de
Priority to CN201910451623.3A priority patent/CN110661690A/zh
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNMENT AGREEMENT PREVIOUSLY RECORDED ON REEL 046834 FRAME 0431. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: JAIN, ANJALI SINGHAI, SARANGAM, PARTHASARATHY, SKIDMORE, DONALD, HAY, JOSHUA
Application granted granted Critical
Publication of US11271775B2 publication Critical patent/US11271775B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4604LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462LAN interconnection over a bridge based backbone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4604LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/4616LAN interconnection over a LAN backbone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4604LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L12/462LAN interconnection over a bridge based backbone
    • H04L12/4625Single bridge functionality, e.g. connection of two networks over a single bridge
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/46Interconnection of networks
    • H04L12/4641Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/35Switches specially adapted for specific applications
    • H04L49/351Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/70Virtual switches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/65Re-configuration of fast packet switches

Definitions

  • the data networks typically include one or more network computing devices (e.g., compute servers, storage servers, etc.) to route communications (e.g., via switches, routers, etc.) that enter/exit a network (e.g., north-south network traffic) and between network computing devices in the network (e.g., east-west network traffic).
  • network traffic may be generated by the same computing device that is intended to receive the generated network traffic. Oftentimes, such conditions occur as a result of interactions between virtual switching environments in a hypervisor and the first layer of the physical switching infrastructure.
  • VEPA technology provides bridging support using an adjacent, external network switch, which requires network traffic to leave the source computing device, resulting in latency and wasted bandwidth.
  • Such adjacent, external network switches are typically top of rack (ToR) switches, which are commonly used in large cloud implementations.
  • FIG. 1 is a simplified block diagram of at least one embodiment of a system for hairpinning network traffic;
  • FIG. 2 is a simplified block diagram of at least one embodiment of the network computing device of the system of FIG. 1 ;
  • FIG. 3 is a simplified block diagram of at least one embodiment illustrating the network computing device of the system of FIG. 1 hairpinning network traffic local to the network computing device of FIGS. 1 and 2 ;
  • FIG. 4 is a simplified communication flow diagram of at least one embodiment for hairpinning a network packet between a source virtual machine and a target virtual machine of the network computing device of FIGS. 1-3 .
  • references in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • items included in a list in the form of “at least one of A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
  • items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
  • the disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
  • a system 100 for hairpinning network traffic includes a switch 104 communicatively coupled to multiple network compute devices 106 (e.g., in a cloud environment, a data center, etc.).
  • the switch 104 is configured to receive network packets originating outside of the network 102 , commonly referred to as north-south network traffic.
  • the switch 104 is additionally configured to route traffic to other network compute devices 106 , which may be directly coupled to the switch 104 or indirectly coupled via another switch 104 , commonly referred to as east-west network traffic.
  • the network compute devices 106 include a first network compute device 106 designated as network compute device (1) 106 a , a second network compute device 106 designated as network compute device (2) 106 b , and a third network compute device 106 designated as network compute device (N) 106 c (e.g., in which the network compute device (N) 106 c represents the “Nth” network compute device 106 and “N” is a positive integer).
  • Upon receipt of a network packet, the switch 104 is configured to identify which network compute device 106 to forward the received network packet to.
  • the routing decision logic performed by the switch 104 may be based on one or more operations that are to be performed on the network packet (e.g., a data processing service) or are to be taken in response to having received the network packet (e.g., lookup data based on query parameters of the network packet, store data in a payload of the network packet, etc.).
  • the one or more operations may be carried out by multiple network compute devices 106 and/or multiple guests (e.g., guest operating systems) executing on one or more virtual machines (VMs) deployed on one of the network compute devices 106 .
  • the network compute device 106 is configured to route such network traffic internally, while still supporting offload functionality.
  • the network compute device 106 is configured to employ VEPA at the MAC layer.
  • the network compute device 106 is configured to deploy a hairpin agent (see, e.g., the virtual Ethernet bridge (VEB) hairpin agent 314 of FIG. 3 ) on an accelerator device (see, e.g., the accelerator device 310 of FIG. 3 ) to perform the network traffic hairpinning.
  • the network compute device 106 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a server (including, e.g., stand-alone server, rack-mounted server, blade server, etc.), a sled (e.g., a compute sled, an accelerator sled, a storage sled, a memory sled, etc.), an enhanced network interface controller (NIC) (e.g., a host fabric interface (HFI)), a distributed computing system, or any other combination of compute/storage device(s) capable of performing the functions described herein.
  • the illustrative network compute device 106 includes a compute engine 200 , an I/O subsystem 206 , one or more data storage devices 208 , communication circuitry 210 , and, in some embodiments, one or more peripheral devices 214 . It should be appreciated that the network compute device 106 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
  • the compute engine 200 may be embodied as any type of device or collection of devices capable of performing the various compute functions as described herein.
  • the compute engine 200 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
  • the compute engine 200 may include, or may be embodied as, one or more processors 202 (i.e., one or more central processing units (CPUs)) and memory 204 .
  • the processor(s) 202 may be embodied as any type of processor capable of performing the functions described herein.
  • the processor(s) 202 may be embodied as one or more single-core processors, one or more multi-core processors, a digital signal processor, a microcontroller, or other processor or processing/controlling circuit(s).
  • the processor(s) 202 may be embodied as, include, or otherwise be coupled to an FPGA (e.g., reconfigurable circuitry), an ASIC, reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
  • the memory 204 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. It should be appreciated that the memory 204 may include main memory (i.e., a primary memory) and/or cache memory (i.e., memory that can be accessed more quickly than the main memory). Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM).
  • the compute engine 200 is communicatively coupled to other components of the network compute device 106 via the I/O subsystem 206 , which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 202 , the memory 204 , and other components of the network compute device 106 .
  • the I/O subsystem 206 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
  • the I/O subsystem 206 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 202 , the memory 204 , and other components of the network compute device 106 , on a single integrated circuit chip.
  • the one or more data storage devices 208 may be embodied as any type of storage device(s) configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
  • Each data storage device 208 may include a system partition that stores data and firmware code for the data storage device 208 .
  • Each data storage device 208 may also include an operating system partition that stores data files and executables for an operating system.
  • the communication circuitry 210 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the network compute device 106 and the switch 104 , as well as any network communication enabling devices, such as an access point, router, etc., to allow communication to/from the network compute device 106 . Accordingly, the communication circuitry 210 may be configured to use any one or more communication technologies (e.g., wireless or wired communication technologies) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, LTE, 5G, etc.) to effect such communication.
  • the communication circuitry 210 may include specialized circuitry, hardware, or combination thereof to perform pipeline logic (e.g., hardware-based algorithms) for performing the functions described herein, including processing network packets, making routing decisions, performing computational functions, etc.
  • performance of one or more of the functions of communication circuitry 210 as described herein may be performed by specialized circuitry, hardware, or combination thereof of the communication circuitry 210 , which may be embodied as a system-on-a-chip (SoC) or otherwise form a portion of a SoC of the network compute device 106 (e.g., incorporated on a single integrated circuit chip along with a processor 202 , the memory 204 , and/or other components of the network compute device 106 ).
  • the specialized circuitry, hardware, or combination thereof may be embodied as one or more discrete processing units of the network compute device 106 , each of which may be capable of performing one or more of the functions described herein.
  • the illustrative communication circuitry 210 includes a network interface controller (NIC) 212 , also commonly referred to as a host fabric interface (HFI) in some embodiments (e.g., high-performance computing (HPC) environments).
  • the NIC 212 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the network compute device 106 .
  • the NIC 212 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors.
  • the NIC 212 may include other components which are not shown, such as a local processor, an accelerator device (e.g., any type of specialized hardware on which operations can be performed faster and/or more efficiently than is possible on the local general-purpose processor), and/or a local memory local to the NIC 212 .
  • the local processor and/or the accelerator device of the NIC 212 may be capable of performing one or more of the functions described herein.
  • the one or more peripheral devices 214 may include any type of device that is usable to input information into the network compute device 106 and/or receive information from the network compute device 106 .
  • the peripheral devices 214 may be embodied as any auxiliary device usable to input information into the network compute device 106 , such as a keyboard, a mouse, a microphone, a barcode reader, an image scanner, etc., or output information from the network compute device 106 , such as a display, a speaker, graphics circuitry, a printer, a projector, etc. It should be appreciated that, in some embodiments, one or more of the peripheral devices 214 may function as both an input device and an output device (e.g., a touchscreen display, a digitizer on top of a display screen, etc.).
  • the types of peripheral devices 214 connected to the network compute device 106 may depend on, for example, the type and/or intended use of the network compute device 106 . Additionally or alternatively, in some embodiments, the peripheral devices 214 may include one or more ports, such as a USB port, for example, for connecting external peripheral devices to the network compute device 106 .
  • the switch 104 may be embodied as any type of switch, such as a disaggregated switch, a rack-mounted switch, a standalone switch, a fully managed switch, a partially managed switch, a full-duplex switch, and/or a half-duplex communication mode enabled switch.
  • the switch 104 may be positioned as a top-of-rack (ToR) switch, an end-of-rack (EoR) switch, a middle-of-rack (MoR) switch, or any position in which the switch 104 may be configured to perform the functions described herein.
  • the switch 104 may be configured as a managed smart switch that includes a set of management features, such as may be required for the switch 104 to perform the functions as described herein.
  • the network 102 may be embodied as any type of wired or wireless communication network, including but not limited to a wireless local area network (WLAN), a wireless personal area network (WPAN), an edge network (e.g., a multi-access edge computing network), a fog network, a cellular network (e.g., Global System for Mobile Communications (GSM), Long-Term Evolution (LTE), 5G, etc.), a telephony network, a digital subscriber line (DSL) network, a cable network, a local area network (LAN), a wide area network (WAN), a global network (e.g., the Internet), or any combination thereof.
  • the network 102 may serve as a centralized network and, in some embodiments, may be communicatively coupled to another network (e.g., the Internet). Accordingly, the network 102 may include a variety of other virtual and/or physical network computing devices (e.g., routers, switches, network hubs, servers, storage devices, compute devices, etc.), as needed to facilitate the transmission of network traffic through the network 102 .
  • the network compute device 106 establishes an environment 300 for hairpinning network traffic local to the network compute device 106 during operation.
  • the illustrative environment 300 includes VMs 302 , a driver 304 , and a network traffic ingress/egress manager 320 , as well as the NIC 212 of FIG. 2 .
  • the various components of the environment 300 may be embodied as hardware, firmware, software, or a combination thereof. It should be appreciated that, in such embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another.
  • one or more of the components of the environment 300 may be embodied as virtualized hardware components or emulated architecture, which may be established and maintained by the compute engine 200 or other components of the network compute device 106 .
  • the network compute device 106 may include other components, sub-components, modules, sub-modules, logic, sub-logic, and/or devices commonly found in a computing device, which are not illustrated in FIG. 3 for clarity of the description.
  • the illustrative virtual machines 302 include a first VM 302 designated as VM (1) 302 a and a second VM 302 designated as VM (2) 302 b , each of which is configured to perform certain operations or services. It should be appreciated that while the illustrative VMs 302 include only two VMs 302 , the network compute device 106 may include multiple additional VMs in other embodiments. While illustratively shown as VMs 302 , the network compute device 106 may include multiple containers in addition to or as an alternative to the VMs 302 , in other embodiments.
  • the driver 304 may be embodied as any type of device driver capable of performing the functions described herein, including managing the operational configuration of the VEB hairpin agent 314 . It should be appreciated that, unlike present technologies in which a VEB agent is configured by a driver at the MAC layer, the driver 304 is configured to operate in VEPA mode and manage the configuration of the VEB hairpin agent 314 local to the accelerator device 310 .
  • the illustrative NIC 212 includes a MAC 306 with a VEPA agent 308 , and an accelerator device 310 .
  • the MAC 306 may be embodied as any type of software, hardware, firmware, or any combination thereof (e.g., MAC circuitry 306 ) capable of performing the functions described herein at the MAC sublayer.
  • the VEPA agent 308 is configured to perform VEPA related routing functionality consistent with typical VEPA operation at the MAC layer.
  • the illustrative accelerator device 310 includes an agent 312 and a VEB hairpin agent 314 .
  • the accelerator device 310 may be embodied as any type of specialized hardware on which operations can be performed faster and/or more efficiently than is possible on a more general-purpose processor.
  • the accelerator device 310 may be embodied as, but not limited to, an FPGA, an ASIC, or other specialized circuitry.
  • the VEB hairpin agent 314 may be executed on a general-purpose processor capable of performing any bump-in-the-wire offload solution for processing east-west network traffic.
  • the agent 312 is configured to function as an interfacing agent between the VEPA agent 308 and the VEB hairpin agent 314 (i.e., consistent with behavior exhibited for north-south network traffic). Accordingly, the agent 312 is configured to execute in both the ingress and egress directions.
  • the VEB hairpin agent 314 is configured to return the applicable network traffic to the VEPA agent 308 (i.e., via the agent 312 ). To do so, the VEB hairpin agent 314 is configured to track identifying data of the VMs 302 such that the VEB hairpin agent 314 can determine which network traffic is to be hairpinned. In some embodiments, the identifying data may be stored in the routing data.
  • the various components of the environment 300 may be embodied as hardware, firmware, software, or a combination thereof.
  • one or more of the VEPA agent 308 , the agent 312 , the VEB hairpin agent 314 , and the network traffic ingress/egress manager 320 may be embodied as circuitry or collection of electrical devices (e.g., VEPA agent circuitry 308 , accelerator device agent circuitry 312 , VEB hairpin agent circuitry 314 , network traffic ingress/egress management circuitry 320 , etc.).
  • one or more of the VEPA agent circuitry 308 , the accelerator device agent circuitry 312 , the VEB hairpin agent circuitry 314 , and the network traffic ingress/egress management circuitry 320 may form a portion of one or more of the compute engine 200 , or more particularly processor 202 (i.e., core) thereof, the I/O subsystem 206 , the communication circuitry 210 , or more particularly the NIC 212 as illustratively shown, and/or other components of the network compute device 106 .
  • the network compute device 106 additionally includes routing data 316 for storing routing information and VM data 318 for storing data corresponding to the VMs 302 (e.g., configuration information, an associated Internet Protocol (IP) address, etc.), each of which may be accessed by the various components and/or sub-components of the network compute device 106 .
  • the data stored in, or otherwise represented by, each of the routing data 316 and the VM data 318 may not be mutually exclusive relative to each other.
  • data stored in the routing data 316 may also be stored as a portion of the VM data 318 .
  • while the various data utilized by the network compute device 106 are described herein as particular discrete data, such data may be combined or otherwise overlap in other embodiments.
  • the VEPA agent 308 and the VEB hairpin agent 314 are configured to access the routing data 316 and the VM data 318 .
  • the VEPA agent 308 is configured to identify a target destination for the network packets received from the VMs 302 .
  • the VEPA agent 308 may be configured to utilize information associated with a flow of the network packet, a workload type of the network packet, an originating source of the network packet, an output of a packet processing operation performed on the received network packet, and/or other information of the network packet, to identify the target destination for the network packet.
  • Such target destination identifying information may be stored in and/or retrieved from the routing data 316 and/or the VM data 318 .
  • the VEPA agent 308 is configured to insert a target destination identifier (e.g., a corresponding IP address) into the network packets received from the VMs 302 (an illustrative sketch of this identification and insertion is provided following the examples below).
  • the VEB hairpin agent 314 is configured to be notified or otherwise be capable of identifying when a new VM is instantiated, as well as identifying information thereof, which may be stored in the VM data 318 .
  • the VM data 318 may include additional and/or alternative VM configuration information, such as may be usable by the VM instances 302 .
  • at least a portion of the VM data 318 and/or the routing data 316 may be stored local to the host (i.e., external to the NIC 212 ) in a direct-memory accessible storage location.
  • the VEB hairpin agent 314 is configured to determine which network packets are to be routed external to the network compute device 106 and which network packets are to be hairpinned based on the target destination identifier of the network packet (e.g., as inserted by the VEPA agent 308 ). To do so, the VEB hairpin agent 314 is configured to determine whether the target destination corresponds to an external computing device (e.g., accessible via the switch 104 ) or to a local destination (e.g., one of the VMs 302 ) of the network compute device 106 . Accordingly, the VEB hairpin agent 314 is further configured to access routing information, which may be stored in the routing data 316 in some embodiments, and/or the VM data 318 in other embodiments.
  • the VEB hairpin agent 314 is configured to determine whether to hairpin a network packet based on a target destination (e.g., as identifiable using an IP address) of the network packet and data stored in the routing data 316 and/or the VM data 318 (an illustrative sketch of this hairpin determination is provided following the examples below).
  • the network traffic ingress/egress manager 320 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to receive inbound and route/transmit outbound network traffic. To do so, the network traffic ingress/egress manager 320 is configured to facilitate inbound/outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network compute device 106 .
  • the network traffic ingress/egress manager 320 is configured to manage (e.g., create, modify, delete, etc.) connections to physical and virtual network ports (i.e., virtual network interfaces) of the network compute device 106 , as well as the ingress/egress buffers/queues associated therewith.
  • Referring now to FIG. 4 , an embodiment of a communication flow 400 for hairpinning a network packet includes the VM (1) 302 a , the VEPA agent 308 , the accelerator device agent 312 , the VEB hairpin agent 314 , and the VM (2) 302 b of the network compute device 106 of FIG. 3 (a simplified sketch modeling this flow is provided following the examples below).
  • the illustrative communication flow 400 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment.
  • the VM (1) 302 a completes a packet processing operation that results in an output of a network packet intended for transmission to a target destination.
  • the target destination may be an external compute device (e.g., another network compute device 106 accessible via the switch 104 ) or the same network compute device 106 (e.g., another VM 302 or container (not shown) of the network compute device 106 ). Accordingly, it should be further appreciated that the output network packet includes information and/or characteristics usable to identify the target destination.
  • the VM (1) 302 a transmits the output network packet to the VEPA agent 308 .
  • the VEPA agent 308 updates the network packet to include an identifier of the target VM based on the target destination identifying information. It should be appreciated that, for the purposes of the illustrative communication flow 400 , the target destination identifying information of the network packet indicates that the network packet is to be hairpinned and identifies which VM 302 the network packet is to be sent to next (e.g., in a series of VMs for running virtual functions in a service chain).
  • the VEPA agent 308 transmits the updated network packet to the accelerator device agent 312 .
  • the accelerator device agent 312 Upon receipt, the accelerator device agent 312 forwards the network packet to the VEB hairpin agent 314 .
  • the VEB hairpin agent 314 identifies that the target destination resides on the same host (i.e., the target destination corresponds to another VM on the network compute device 106 ).
  • the VEB hairpin agent 314 returns the network packet to the accelerator device agent 312 .
  • the accelerator device agent 312 forwards the network packet to the VEPA agent 308 .
  • the VEPA agent 308 identifies the target VM 302 based on the target destination identifier.
  • the target destination identifier corresponds to the IP address of VM (2) 302 b .
  • the VEPA agent 308 transmits the network packet to the identified target VM (i.e., the VM (2) 302 b ).
  • the VM (2) 302 b performs some packet processing operation on at least a portion of the network packet.
  • An embodiment of the technologies disclosed herein may include any one or more, and any combination of, the examples described below.
  • Example 1 includes a compute device for hairpinning network traffic, the compute device comprising a compute engine to manage a plurality of virtual machines of the compute device; and a network interface controller (NIC) configured to receive, by a virtual Ethernet port aggregator (VEPA) agent of the NIC, a network packet from a first virtual machine of the plurality of virtual machines, wherein the network packet is to be transmitted to a target destination for additional processing; transmit, by the VEPA agent, the received network packet to an agent deployed on an accelerator device of the NIC; forward, by the agent deployed on the accelerator device, the received network packet to a virtual Ethernet bridge (VEB) hairpin agent of the accelerator device; determine, by the VEB hairpin agent, whether the target destination of the network packet corresponds to a second virtual machine of the plurality of virtual machines; and return, by the VEB hairpin agent and in response to a determination that the target destination of the network packet corresponds to the second virtual machine of the plurality of virtual machines, the received network packet to the agent deployed on the accelerator device.
  • Example 2 includes the subject matter of Example 1, and wherein the NIC is further configured to forward, by the agent deployed on the accelerator device, the received network packet to the VEPA agent; identify, by the VEPA agent, the second virtual machine; and transmit, by the VEPA agent, the received network packet to the identified second virtual machine.
  • Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the VEPA agent is further configured to identify, prior to transmission of the network packet to the agent deployed on the accelerator device, an internet protocol (IP) address corresponding to the target destination; and update at least a portion of the received network packet to include the identified IP address corresponding to the target destination.
  • Example 4 includes the subject matter of any of Examples 1-3, and wherein to identify the IP address of the target destination comprises to identify the IP address of the second virtual machine, and wherein to identify the second virtual machine comprises to identify the second virtual machine based on the IP address.
  • Example 5 includes the subject matter of any of Examples 1-4, and wherein to identify the IP address of the second virtual machine comprises to identify the IP address of the second virtual machine based on a flow associated with the received network packet, a workload type associated with the received network packet, an originating source of the received network packet, or an output of a packet processing operation performed on the received network packet.
  • Example 6 includes the subject matter of any of Examples 1-5, and further including a driver to configure the VEB hairpin agent.
  • Example 7 includes the subject matter of any of Examples 1-6, and wherein the driver is further configured to operate in VEPA mode.
  • Example 8 includes the subject matter of any of Examples 1-7, and wherein the VEPA agent is included in a MAC of the NIC.
  • Example 9 includes the subject matter of any of Examples 1-8, and wherein the VEB hairpin agent is further to receive an indication that each of the plurality of virtual machines has been instantiated, wherein the indication includes a corresponding IP address, and wherein to determine whether the target destination of the network packet corresponds to another virtual machine of the plurality of virtual machines presently executing on the network compute device comprises to make the determination as a function of an internet protocol (IP) address of the received network packet.
  • Example 10 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute device to receive, by a virtual Ethernet port aggregator (VEPA) agent of a media access control (MAC) of a network interface controller (NIC), a network packet from a first virtual machine of a plurality of virtual machines of the compute device, wherein the network packet is to be transmitted to a target destination for additional processing; transmit, by the VEPA agent, the received network packet to an agent deployed on an accelerator device of the NIC; forward, by the agent deployed on the accelerator device, the received network packet to a virtual Ethernet bridge (VEB) hairpin agent of the accelerator device; determine, by the VEB hairpin agent, whether a target destination of the network packet corresponds to a second virtual machine of the plurality of virtual machines; and return, by the VEB hairpin agent and in response to a determination that the target destination of the network packet corresponds to the second virtual machine of the plurality of virtual machines, the received network packet to the agent deployed on the accelerator device.
  • Example 11 includes the subject matter of Example 10, and wherein the compute device is further to forward, by the agent deployed on the accelerator device, the received network packet to the VEPA agent; identify, by the VEPA agent, the second virtual machine; and transmit, by the VEPA agent, the received network packet to the identified second virtual machine.
  • Example 12 includes the subject matter of any of Examples 10 and 11, and wherein the VEPA agent is further to identify, prior to transmission of the network packet to the agent deployed on the accelerator device, an internet protocol (IP) address corresponding to the target destination; and update at least a portion of the received network packet to include the identified IP address corresponding to the target destination.
  • Example 13 includes the subject matter of any of Examples 10-12, and wherein to identify the IP address of the target destination comprises to identify the IP address of the second virtual machine, and wherein to identify the second virtual machine comprises to identify the second virtual machine based on the IP address.
  • Example 14 includes the subject matter of any of Examples 10-13, and wherein to identify the IP address of the second virtual machine comprises to identify the IP address of the second virtual machine based on a flow associated with the received network packet, a workload type associated with the received network packet, an originating source of the received network packet, or an output of a packet processing operation performed on the received network packet.
  • Example 15 includes the subject matter of any of Examples 10-14, and wherein the plurality of instructions further cause the compute device to configure the VEB hairpin agent via a driver of the compute device.
  • Example 16 includes the subject matter of any of Examples 10-15, and wherein the driver is further to operate in VEPA mode.
  • Example 17 includes the subject matter of any of Examples 10-16, and wherein the VEB hairpin agent is further to receive an indication that each of the plurality of virtual machines has been instantiated, wherein the indication includes a corresponding IP address, and wherein to determine whether the target destination of the network packet corresponds to another virtual machine of the plurality of virtual machines presently executing on the network compute device comprises to make the determination as a function of an internet protocol (IP) address of the received network packet.
  • Example 18 includes a network interface controller (NIC), the NIC comprising accelerator device circuitry; and media access control (MAC) circuitry configured to receive, by a virtual Ethernet port aggregator (VEPA) agent deployed on the MAC circuitry, a network packet from a first virtual machine of a plurality of virtual machines, wherein the network packet is to be transmitted to a target destination for additional processing, and transmit the received network packet to an agent deployed on the accelerator device circuitry of the NIC, wherein the accelerator device circuitry is configured to forward, by the agent deployed on the accelerator device circuitry, the received network packet to a virtual Ethernet bridge (VEB) hairpin agent of the accelerator device circuitry, determine, by the VEB hairpin agent, whether the target destination of the network packet corresponds to a second virtual machine of the plurality of virtual machines, and return, by the VEB hairpin agent and in response to a determination that the target destination of the network packet corresponds to the second virtual machine of the plurality of virtual machines, the received network packet to the agent deployed on the accelerator device.
  • Example 19 includes the subject matter of Example 18, and wherein the accelerator device circuitry is further configured to forward, by the agent, the received network packet to the VEPA agent, and wherein the MAC circuitry is further to (i) identify, by the VEPA agent, the second virtual machine and (ii) transmit, by the VEPA agent, the received network packet to the identified second virtual machine.
  • Example 20 includes the subject matter of any of Examples 18 and 19, and wherein the accelerator device circuitry is further configured to identify, by the VEPA agent and prior to transmission of the network packet to the agent deployed on the accelerator device, an internet protocol (IP) address corresponding to the target destination; and update, by the VEPA agent, at least a portion of the received network packet to include the identified IP address corresponding to the target destination.
  • Example 21 includes the subject matter of any of Examples 18-20, and wherein to identify the IP address of the target destination comprises to identify the IP address of the second virtual machine, wherein to identify the second virtual machine comprises to identify the second virtual machine based on the IP address, and wherein to identify the IP address of the second virtual machine comprises to identify the IP address of the second virtual machine based on one of a flow associated with the received network packet, a workload type associated with the received network packet, an originating source of the received network packet, or an output of a packet processing operation performed on the received network packet.
  • Example 22 includes a network interface controller (NIC), the NIC comprising means for receiving, by a virtual Ethernet port aggregator (VEPA) agent of the NIC, a network packet from a first virtual machine of a plurality of virtual machines of a compute device, wherein the network packet is to be transmitted to a target destination for additional processing; means for transmitting, by the VEPA agent, the received network packet to an agent deployed on an accelerator device of the NIC; means for forwarding, by the agent deployed on the accelerator device, the received network packet to a virtual Ethernet bridge (VEB) hairpin agent of the accelerator device; means for determining, by the VEB hairpin agent, whether a target destination of the network packet corresponds to a second virtual machine of the plurality of virtual machines; and means for returning, by the VEB hairpin agent and in response to a determination that the target destination of the network packet corresponds to the second virtual machine of the plurality of virtual machines, the received network packet to the agent deployed on the accelerator device.
  • Example 23 includes the subject matter of Example 22, and further including means for forwarding, by the agent deployed on the accelerator device, the received network packet to the VEPA agent; means for identifying, by the VEPA agent, the second virtual machine; and means for transmitting, by the VEPA agent, the received network packet to the identified second virtual machine.
  • Example 24 includes the subject matter of any of Examples 22 and 23, and further including means for identifying, by the VEPA agent and prior to transmission of the network packet to the agent deployed on the accelerator device, an internet protocol (IP) address corresponding to the target destination; and means for updating, by the VEPA agent, at least a portion of the received network packet to include the identified IP address corresponding to the target destination.
  • Example 25 includes the subject matter of any of Examples 18-24, and further including means for receiving, by the VEB hairpin agent, an indication that each of the plurality of virtual machines has been instantiated, wherein the indication includes a corresponding IP address, and wherein the means for determining whether the target destination of the network packet corresponds to another virtual machine of the plurality of virtual machines presently executing on the network compute device comprises means for making the determination as a function of an internet protocol (IP) address of the received network packet.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/023,771 US11271775B2 (en) 2018-06-29 2018-06-29 Technologies for hairpinning network traffic
EP19176608.8A EP3588869B1 (de) 2018-06-29 2019-05-24 Technologien zum hairpinning von netzwerkverkehr
CN201910451623.3A CN110661690A (zh) 2018-06-29 2019-05-28 用于发夹式传输网络业务的技术

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/023,771 US11271775B2 (en) 2018-06-29 2018-06-29 Technologies for hairpinning network traffic

Publications (2)

Publication Number Publication Date
US20190052480A1 US20190052480A1 (en) 2019-02-14
US11271775B2 US11271775B2 (en) 2022-03-08

Family

ID=65275671

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/023,771 Active 2040-01-15 US11271775B2 (en) 2018-06-29 2018-06-29 Technologies for hairpinning network traffic

Country Status (3)

Country Link
US (1) US11271775B2 (de)
EP (1) EP3588869B1 (de)
CN (1) CN110661690A (de)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11271775B2 (en) * 2018-06-29 2022-03-08 Intel Corporation Technologies for hairpinning network traffic
AT521914B1 (de) * 2018-12-13 2020-10-15 Avl List Gmbh Kommunikationsmodul
US11436053B2 (en) * 2019-05-24 2022-09-06 Microsoft Technology Licensing, Llc Third-party hardware integration in virtual networks
CN114363248B (zh) * 2020-09-29 2023-04-07 华为技术有限公司 计算系统、加速器、交换平面及聚合通信方法
CN116866283A (zh) * 2020-10-31 2023-10-10 华为技术有限公司 一种流表处理方法及相关设备
CN117692382B (zh) * 2024-02-04 2024-06-07 珠海星云智联科技有限公司 链路聚合方法、网卡、设备以及介质

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095047A1 (en) * 2006-06-29 2008-04-24 Nortel Networks Limited Method and system for looping back traffic in qiq ethernet rings and 1:1 protected pbt trunks
US20120016970A1 (en) * 2010-07-16 2012-01-19 Hemal Shah Method and System for Network Configuration and/or Provisioning Based on Open Virtualization Format (OVF) Metadata
US20120027014A1 (en) * 2009-03-06 2012-02-02 Futurewei Technologies, Inc. Transport Multiplexer-Mechanisms to Force Ethernet Traffic From One Domain to Be Switched in a Different (External) Domain
US20120291025A1 (en) * 2011-05-13 2012-11-15 International Business Machines Corporation Techniques for operating virtual switches in a virtualized computing environment
US20120317566A1 (en) * 2011-06-07 2012-12-13 Santos Jose Renato G Virtual machine packet processing
US20140294009A1 (en) * 2013-03-29 2014-10-02 Sony Corporation Communication apparatus, communication system, control method of communication apparatus and program
US20150055658A1 (en) * 2013-08-20 2015-02-26 International Business Machines Corporation Reflective relay processing on logical ports for channelized links in edge virtual bridging systems
US20150358231A1 (en) * 2013-02-28 2015-12-10 Hangzhou H3C Technologies Co., Ltd. Vepa switch message forwarding
US20160119256A1 (en) * 2013-06-27 2016-04-28 Hangzhou H3C Technologies Co., Ltd. Distributed virtual switch system
CN105610737A (zh) * 2016-01-25 2016-05-25 盛科网络(苏州)有限公司 基于OpenFlow的hairpin交换机实现方法及hairpin交换机系统
US20160232019A1 (en) * 2015-02-09 2016-08-11 Broadcom Corporation Network Interface Controller with Integrated Network Flow Processing
US20160315879A1 (en) * 2015-04-21 2016-10-27 Verizon Patent And Licensing Inc. Virtual node having separate control and data planes
US20190052480A1 (en) * 2018-06-29 2019-02-14 Intel Corporation Technologies for hairpinning network traffic

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095047A1 (en) * 2006-06-29 2008-04-24 Nortel Networks Limited Method and system for looping back traffic in qiq ethernet rings and 1:1 protected pbt trunks
US20120027014A1 (en) * 2009-03-06 2012-02-02 Futurewei Technologies, Inc. Transport Multiplexer-Mechanisms to Force Ethernet Traffic From One Domain to Be Switched in a Different (External) Domain
US20120016970A1 (en) * 2010-07-16 2012-01-19 Hemal Shah Method and System for Network Configuration and/or Provisioning Based on Open Virtualization Format (OVF) Metadata
US20120291025A1 (en) * 2011-05-13 2012-11-15 International Business Machines Corporation Techniques for operating virtual switches in a virtualized computing environment
US20120317566A1 (en) * 2011-06-07 2012-12-13 Santos Jose Renato G Virtual machine packet processing
US20150358231A1 (en) * 2013-02-28 2015-12-10 Hangzhou H3C Technologies Co., Ltd. Vepa switch message forwarding
US20140294009A1 (en) * 2013-03-29 2014-10-02 Sony Corporation Communication apparatus, communication system, control method of communication apparatus and program
US20160119256A1 (en) * 2013-06-27 2016-04-28 Hangzhou H3C Technologies Co., Ltd. Distributed virtual switch system
US20150055658A1 (en) * 2013-08-20 2015-02-26 International Business Machines Corporation Reflective relay processing on logical ports for channelized links in edge virtual bridging systems
US20160232019A1 (en) * 2015-02-09 2016-08-11 Broadcom Corporation Network Interface Controller with Integrated Network Flow Processing
US20160315879A1 (en) * 2015-04-21 2016-10-27 Verizon Patent And Licensing Inc. Virtual node having separate control and data planes
CN105610737A (zh) * 2016-01-25 2016-05-25 盛科网络(苏州)有限公司 基于OpenFlow的hairpin交换机实现方法及hairpin交换机系统
US20190052480A1 (en) * 2018-06-29 2019-02-14 Intel Corporation Technologies for hairpinning network traffic
EP3588869A1 (de) * 2018-06-29 2020-01-01 INTEL Corporation Technologien zum hairpinning von netzwerkverkehr

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
European First Office Action for Patent Application No. 19176608.8, dated Nov. 27, 2020, 4 pages.
Extended European search report for European patent application No. 19176608.8, dated Oct. 9, 2019 (7 pages).

Also Published As

Publication number Publication date
EP3588869B1 (de) 2021-11-17
CN110661690A (zh) 2020-01-07
EP3588869A1 (de) 2020-01-01
US20190052480A1 (en) 2019-02-14

Similar Documents

Publication Publication Date Title
US11271775B2 (en) Technologies for hairpinning network traffic
US11706158B2 (en) Technologies for accelerating edge device workloads
US11681565B2 (en) Technologies for hierarchical clustering of hardware resources in network function virtualization deployments
EP3629162B1 (de) Technologien zur steuerungsebenentrennung in einem netzwerkschnittstellensteuergerät
US20230412459A1 (en) Technologies for dynamically selecting resources for virtual switching
EP3624400B1 (de) Technologien zum einsatz virtueller maschinen in einer virtuellen netzfunktionsinfrastruktur
US20220294885A1 (en) Technologies for network packet processing between cloud and telecommunications networks
US10142231B2 (en) Technologies for network I/O access
US11593140B2 (en) Smart network interface card for smart I/O
US10305805B2 (en) Technologies for adaptive routing using aggregated congestion information
US11669468B2 (en) Interconnect module for smart I/O
US11646980B2 (en) Technologies for packet forwarding on ingress queue overflow
US9590855B2 (en) Configuration of transparent interconnection of lots of links (TRILL) protocol enabled device ports in edge virtual bridging (EVB) networks
US20180309689A1 (en) Latency reduction in service function paths
US20190044799A1 (en) Technologies for hot-swapping a legacy appliance with a network functions virtualization appliance
US10601738B2 (en) Technologies for buffering received network packet data
US11412059B2 (en) Technologies for paravirtual network device queue and memory management
US10554513B2 (en) Technologies for filtering network packets on ingress
US11283723B2 (en) Technologies for managing single-producer and single consumer rings
US20180091447A1 (en) Technologies for dynamically transitioning network traffic host buffer queues
CN112751766A (zh) 报文转发方法、装置及计算机存储介质
US20180351812A1 (en) Technologies for dynamic bandwidth management of interconnect fabric

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCT Information on status: administrative procedure adjustment

Free format text: PROSECUTION SUSPENDED

AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SKIDMORE, DONALD;HAY, JOSHUA;JAIN, ANJALI SINGHAI;AND OTHERS;SIGNING DATES FROM 20180620 TO 20180905;REEL/FRAME:046834/0431

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNMENT AGREEMENT PREVIOUSLY RECORDED ON REEL 046834 FRAME 0431. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:SKIDMORE, DONALD;HAY, JOSHUA;JAIN, ANJALI SINGHAI;AND OTHERS;SIGNING DATES FROM 20180620 TO 20180905;REEL/FRAME:057127/0816

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE

STCF Information on status: patent grant

Free format text: PATENTED CASE