US20190081904A1 - Maintaining packet order in offload of packet processing functions - Google Patents
- Publication number
- US20190081904A1 (application US15/701,459)
- Authority
- United States (US)
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/622—Queue service order
- H04L47/6235—Variable service order
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/20—Traffic policing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/74—Address processing for routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/70—Virtual switches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9063—Intermediate storage in different physical parts of a node or terminal
- H04L49/9068—Intermediate storage in different physical parts of a node or terminal in the network interface card
Definitions
- The present invention relates generally to computer networks, and particularly to devices and methods for interfacing between host computers and a network.
- A network interface controller (NIC) is a device that manages and transfers communications between a host computer (referred to alternatively simply as a “host”) and a network, such as a local area network or switch fabric.
- The NIC directs packets from the network to their destination in the computer, for example by placing the packets in a buffer of a destination application in the computer memory, and directs outgoing packets, for example sending them either to the network or to a loopback port.
- When a host computer supports multiple virtual machines (VMs), different approaches may be taken by the NIC in handling incoming and outgoing packets. In one approach, all packets are directed to a virtual machine monitor (VMM, also known as a hypervisor) running on the host, and the VMM directs the packets to the specific destination virtual machine. More recently, however, NICs have been developed with the capability of exposing multiple virtual NICs (vNICs) to software running on the host. In a model that is known as single-root I/O virtualization (SR-IOV), each VM interacts with its own corresponding vNIC, which appears to the VM to be a dedicated hardware NIC.
- The vNIC links the VM to other machines (virtual and/or physical) on a network, possibly including other virtual machines running on the same host. In this regard, the NIC acts as a virtual switch, connecting each of the virtual machines to a network while allowing multiple vNICs to share the same physical network port.
- A variety of NICs that support the SR-IOV model are known in the art.
- For example, U.S. Patent Application Publication 2014/0185616, whose disclosure is incorporated herein by reference, describes a NIC that supports multiple virtualized (tenant) networks overlaid on a data network.
- Upon receiving a work item submitted by a virtual machine running on a host processor, the NIC identifies the tenant network over which the virtual machine is authorized to communicate, generates a data packet containing an encapsulation header that is associated with the tenant network, and transmits the data packet over the network.
- The NIC may also decapsulate encapsulated data packets received from the data network and convey the decapsulated data packets to the virtual machine.
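The encapsulation and decapsulation steps described above can be sketched as follows. This is a minimal illustration only, not the header format of the NIC in the cited publication: the 8-byte header layout, carrying a tenant network ID and a payload length, is an assumption made here for concreteness.

```python
import struct

# Hypothetical 8-byte encapsulation header: a 4-byte tenant network ID
# followed by a 4-byte payload length, both in network byte order.
# The layout is illustrative, not the format used by any real NIC.
ENCAP_HDR = struct.Struct("!II")

def encapsulate(tenant_id: int, packet: bytes) -> bytes:
    """Prepend an encapsulation header associating the packet with a tenant network."""
    return ENCAP_HDR.pack(tenant_id, len(packet)) + packet

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    """Strip the encapsulation header and return (tenant_id, inner packet)."""
    tenant_id, length = ENCAP_HDR.unpack_from(frame)
    return tenant_id, frame[ENCAP_HDR.size:ENCAP_HDR.size + length]
```

A packet encapsulated for tenant 7 round-trips through `decapsulate` unchanged, which is the invariant the NIC's decapsulation path must preserve.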
- Embodiments of the present invention that are described hereinbelow provide improved network interface devices and methods for processing packets received by a host computer from a network.
- There is therefore provided, in accordance with an embodiment of the invention, network interface apparatus, including a host interface for connection to a host processor having a memory, and a network interface, which is configured to receive over a network data packets in multiple packet flows destined for one or more virtual machines running on the host processor.
- Packet processing circuitry is coupled between the network interface and the host interface and is configured to pass the data packets to a virtual machine monitor (VMM) running on the host processor for preprocessing of the packets by the VMM, which delivers the preprocessed packets to the one or more virtual machines.
- The packet processing circuitry is configured to receive a first instruction to offload from the VMM preprocessing of the data packets in a specified flow in accordance with a specified rule, and responsively to the first instruction to initiate preprocessing the data packets in the specified flow by the packet processing circuitry in accordance with the specified rule while writing one or more initial data packets from the specified flow to a temporary buffer, and upon subsequently receiving a second instruction to enable the specified rule, to deliver the initial data packets from the temporary buffer, after preprocessing by the packet processing circuitry, directly to a virtual machine to which the specified flow is destined, and after delivering the preprocessed initial data packets, to continue preprocessing and delivering subsequent data packets in the specified flow to the virtual machine.
- In some embodiments, the first instruction causes the packet processing circuitry to modify headers of the data packets in the specified flow.
- In the disclosed embodiments, the packet processing circuitry is configured to deliver the initial and subsequent data packets to the virtual machine in accordance with an order in which the data packets were received from the network, such that the subsequent data packets are delivered to the virtual machine only after delivery to the virtual machine of all the data packets in the temporary buffer.
- In one embodiment, the packet processing circuitry is configured to write to the temporary buffer any of the subsequent data packets that are received from the network before the temporary buffer has been emptied.
- Additionally or alternatively, the packet processing circuitry is configured, in response to the first instruction, to verify that all of the data packets already received through the network interface in the specified flow have been passed to the VMM, and then to submit an acknowledgment to the VMM that the first instruction was received by the packet processing circuitry.
- In a disclosed embodiment, the VMM issues the second instruction upon receiving the acknowledgment.
- In some embodiments, the packet processing circuitry includes a transmit pipe, for processing outgoing packets for transmission to the network, and a receive pipe, for processing incoming data packets received from the network, and the packet processing circuitry is configured to deliver the initial preprocessed data packets from the temporary buffer to the virtual machine by loopback from the temporary buffer through the transmit pipe to the receive pipe, which writes the preprocessed data packets to another buffer in the memory that is assigned to the virtual machine.
- There is also provided, in accordance with an embodiment of the invention, a method for communication, which includes receiving in a network interface controller (NIC) over a network data packets in multiple packet flows destined for one or more virtual machines running on a host processor coupled to the NIC.
- The data packets are passed from the NIC to a virtual machine monitor (VMM) running on the host processor for preprocessing of the packets by the VMM, which delivers the preprocessed packets to the one or more virtual machines.
- The NIC receives a first instruction to offload from the VMM preprocessing of the data packets in a specified flow in accordance with a specified rule and, responsively to the first instruction, initiates preprocessing the data packets in the specified flow by the NIC in accordance with the specified rule.
- After receiving the first instruction, the NIC writes one or more initial data packets from the specified flow to a temporary buffer. Upon subsequently receiving a second instruction to enable the specified rule, the NIC delivers the initial data packets, after preprocessing by the NIC, from the temporary buffer directly to a virtual machine to which the specified flow is destined. After delivering the preprocessed initial data packets, the NIC continues to preprocess and deliver subsequent data packets in the specified flow to the virtual machine.
- FIG. 1 is a block diagram that schematically illustrates a computer with a NIC, in accordance with an embodiment of the present invention;
- FIG. 2 is a ladder diagram that schematically illustrates a method for offload to a NIC of a processing function applied to a packet flow, in accordance with an embodiment of the invention; and
- FIG. 3 is a block diagram that schematically illustrates processing and delivery of packets to a destination virtual machine in accordance with the method shown in FIG. 2.
- In many practical applications in which virtual machines run on a server, packets transmitted between a network and the virtual machines are handled, as a default, by the VMM, even when the NIC has SR-IOV and/or other offload capabilities.
- The VMM preprocesses the packets by applying different rules to different packet flows, typically involving modifications to the headers of the data packets, and then delivers the preprocessed packets to the destination virtual machines. (Such header modifications include, for example, rewriting the packet headers themselves and/or adding or removing header fields to encapsulate and decapsulate packets that are tunneled through the network.)
- In some cases, however, such as heavy flows carrying large amounts of traffic, the VMM may decide to offload these preprocessing functions to the NIC and thus reduce the processing load on the host processor.
- A flow, in the context of the present description and the claims, refers to a sequence of packets transmitted from a given source to a specified destination.
- The flow can be identified, for example, by the Internet Protocol (IP) 5-tuple of header fields, comprising the source and destination addresses, source and destination ports, and protocol identifier.
- As another example, in InfiniBand™ networks, a flow can be identified by the queue pair (QP) number in the packet transport header.
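For illustration, a 5-tuple flow identifier of the kind described above might be extracted as follows. The dictionary-of-header-fields packet representation and the function name `flow_key` are assumptions made for this sketch, not structures defined by the patent.

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """IP 5-tuple identifying a flow, per the definition above."""
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int  # e.g. 6 for TCP, 17 for UDP

def flow_key(pkt: dict) -> FiveTuple:
    """Extract the flow identifier from (hypothetical) parsed header fields."""
    return FiveTuple(pkt["src_addr"], pkt["dst_addr"],
                     pkt["src_port"], pkt["dst_port"], pkt["protocol"])
```

Two packets with identical 5-tuples map to the same key, so the key can index a per-flow rule table.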
- When the VMM decides to offload preprocessing of a given flow to the NIC, the virtual machine will subsequently receive and transmit packets directly via the NIC, without additional processing by the VMM.
- The transition from VMM-based to NIC-based preprocessing should ideally be transparent to the virtual machine and should take place without loss or delay of packets that have already been transmitted. Because of the high processing speed of the NIC, however, when the VMM initiates an offload in the middle of a given flow, the NIC may begin delivering preprocessed incoming packets in the flow to the virtual machine before the VMM has finished preprocessing and delivered the last of the packets that were received before the offload was initiated. The virtual machine will consequently receive packets out of order.
- The virtual machines can be configured to handle out-of-order packets in software, but this solution increases latency and adds to the load on the host processor.
- Embodiments of the present invention that are described herein address this problem by coordination between the VMM and the NIC, in a manner that is transparent both to the sender of the flow in question and to the virtual machine receiving the flow and avoids any degradation of communication bandwidth or latency.
- These embodiments use a new two-stage mechanism, in which the VMM first sends an instruction to the NIC to initiate preprocessing of the data packets in a specified flow in accordance with a specified rule. The NIC prepares to apply the rule and sends an acknowledgment to the VMM.
- Only after having emptied its own queue of incoming packets in the flow, however, does the VMM send a second instruction to the NIC to enable the rule, i.e., to begin passing preprocessed packets to the virtual machine to which the flow is destined.
- After receiving the first instruction, the NIC prepares to begin preprocessing the specified flow and temporarily buffers any incoming packets in the flow. After receiving the second instruction, the NIC first empties the temporary buffer and passes the buffered packets (after preprocessing) to the virtual machine. Once the temporary buffer is empty, the NIC continues preprocessing incoming packets in the specified flow, and delivers subsequent packets directly to the virtual machine. Thus, all packets are preprocessed and delivered to the virtual machine in the proper order.
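The two-stage mechanism described above can be sketched as a small state machine. This is a simplified model under stated assumptions: the state names and the methods `update_rule`, `enable_rule`, and `on_packet` are invented for illustration, and draining the temporary buffer is modeled as a single atomic step rather than the packet-by-packet loopback a NIC would perform.

```python
class OffloadRule:
    """Sketch of the two-stage offload handshake.
    States: 'vmm' (default path), 'initiated' (buffering), 'enabled' (direct)."""

    def __init__(self, preprocess):
        self.preprocess = preprocess  # the offloaded rule, e.g. a header rewrite
        self.state = "vmm"
        self.temp_buffer = []         # models the NIC's temporary buffer

    def update_rule(self):
        """First instruction: start buffering; the NIC acks to the VMM here."""
        self.state = "initiated"
        return "ack"

    def enable_rule(self, deliver_to_vm):
        """Second instruction: drain the temporary buffer in order, then go direct."""
        for pkt in self.temp_buffer:
            deliver_to_vm(self.preprocess(pkt))
        self.temp_buffer.clear()
        self.state = "enabled"

    def on_packet(self, pkt, deliver_to_vmm, deliver_to_vm):
        if self.state == "vmm":
            deliver_to_vmm(pkt)           # default path: VMM preprocesses
        elif self.state == "initiated":
            self.temp_buffer.append(pkt)  # hold until the rule is enabled
        else:
            deliver_to_vm(self.preprocess(pkt))
```

Because packets received between the two instructions are held rather than delivered, the virtual machine never sees a NIC-preprocessed packet overtake a packet still queued in the VMM.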
- FIG. 1 is a block diagram that schematically illustrates a computer 20 with a NIC 28, in accordance with an embodiment of the present invention.
- Computer 20 comprises a host processor in the form of a central processing unit (CPU) 22, with a memory 24, typically comprising random-access memory (RAM).
- NIC 28 is connected to CPU 22 and memory 24 via a bus 26, such as a Peripheral Component Interconnect Express® (PCIe®) bus, as is known in the art.
- NIC 28 couples computer 20 to a packet network 30, such as an Ethernet, IP or InfiniBand network.
- Computer 20 supports a virtual machine environment, in which multiple virtual machines 34 (labeled VM1, VM2, VM3 in FIG. 1 ) may run on CPU 22 .
- The software running on CPU 22, including both operating system and application programs, may be downloaded to the CPU in electronic form, over a network for example. Additionally or alternatively, the software may be stored on tangible, non-transitory computer-readable media, such as optical, magnetic or electronic memory media, which may be embodied in memory 24.
- CPU 22 operates a native domain 32, with a host operating system 36, which may support host user applications and other native processes.
- The CPU concurrently runs one or more virtual machines 34, as noted above, each with its own guest operating system and guest user applications (omitted for the sake of simplicity).
- VMM 38 in native domain 32 interacts with the kernels of the guest operating systems of virtual machines 34 in a manner that emulates the host processor and allows the virtual machines to share the resources of CPU 22.
- A wide range of virtual machine software of this sort is available commercially, and further description is beyond the scope of the present disclosure.
- The added capabilities of VMM 38, in terms of initiating and enabling offload of rules to NIC 28, are described further hereinbelow, particularly with reference to FIGS. 2 and 3.
- NIC 28 comprises a host interface 40, for connection to CPU 22 and memory 24 via bus 26, and a network interface 42, comprising one or more ports connected to network 30.
- Network interface 42 transmits and receives data packets in multiple packet flows from and to virtual machines 34 running on CPU 22.
- The packets are processed by packet processing circuitry 44, which is coupled between host interface 40 and network interface 42 and comprises a receive (Rx) pipe 46, for processing incoming data packets received from network 30, and a transmit (Tx) pipe 48, for processing outgoing packets for transmission to the network.
- By default, Rx pipe 46 passes the packets to VMM 38, which preprocesses the packets in accordance with applicable rules and delivers the preprocessed packets in each flow to the destination virtual machine 34.
- Within Rx pipe 46, steering logic 50 identifies, for each incoming packet, the flow to which the packet belongs and the process running on CPU 22 to which the packet is to be delivered. In order to make this decision, steering logic 50 extracts a flow identifier from the packet, typically based on one or more packet header fields, such as the IP 5-tuple and/or a transport-layer value. Steering logic 50 looks up the flow in a database (not shown), which also indicates whether any preprocessing rules have been initiated and enabled on NIC 28 by VMM 38. If so, a rule engine 52 preprocesses the packets in the flow, for example by modifying the packet headers (changing and/or removing or adding specified header fields). For flows for which preprocessing is not enabled, the incoming packets may bypass rule engine 52.
- A scatter engine 54 in receive pipe 46 then writes the packets to respective buffers in memory 24 by direct memory access (DMA) over bus 26.
- For flows that are still handled by the VMM, scatter engine 54 delivers the packets to VMM 38 for preprocessing and delivery to the appropriate destination.
- When an offloaded rule has been enabled for a flow, scatter engine 54 delivers the packet directly to the destination virtual machine 34 by writing the packet to a dedicated buffer 56.
- Receive pipe 46 notifies the virtual machine that the packet is available for reading, for example by placing a completion report in a completion queue that is read by the virtual machine.
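The steering decision described above can be sketched as follows, assuming packets are parsed into dictionaries of header fields and rules are kept in a flow-keyed table. The function name `steer` and the rule-table layout are illustrative assumptions, not the patent's data structures.

```python
def steer(pkt: dict, rules: dict, vmm_queue: list, vm_buffers: dict) -> None:
    """Route one incoming packet: if an enabled rule exists for its flow,
    apply the header rewrite and scatter directly to the VM's dedicated
    buffer; otherwise pass the packet up to the VMM (the default path)."""
    key = (pkt["src_addr"], pkt["dst_addr"],
           pkt["src_port"], pkt["dst_port"], pkt["protocol"])
    rule = rules.get(key)
    if rule is None or not rule["enabled"]:
        vmm_queue.append(pkt)                      # default: VMM preprocesses
    else:
        out = dict(pkt, **rule["header_rewrite"])  # rule engine: modify headers
        vm_buffers[rule["vm"]].append(out)         # scatter to dedicated buffer
```

Flows without an enabled rule fall through to the VMM queue, which mirrors the bypass path around the rule engine.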
- When VMM 38 has instructed NIC 28 to initiate preprocessing of a specified flow, but has not yet enabled the corresponding rule, scatter engine 54 may write one or more initial data packets from the specified flow to a temporary buffer 58, typically without preprocessing the packets.
- Buffer 58 may conveniently be allocated in memory 24, as shown in FIG. 1.
- Alternatively, NIC 28 may hold buffer 58 in a separate memory that is dedicated to the NIC.
- Once the rule is enabled, receive pipe 46 delivers the buffered packets to buffer 56 in the order in which the packets were received from network 30. Thereafter, rule engine 52 will continue preprocessing further incoming packets in the specified flow, and scatter engine 54 will deliver these preprocessed packets in the proper order to buffer 56.
- Tx pipe 48 comprises scheduling logic 60, which arbitrates among transmission requests and can be configured to give priority to loopback requests from buffer 58.
- A gather engine 62 reads the packets that are to be transmitted from memory 24, and port selection logic 64 selects the port through which each packet is to be transmitted. Outgoing packets are transmitted via network interface 42 to network 30.
- Loopback packets are returned to steering logic 50 for delivery to the appropriate destination process.
- Steering logic 50 thus ensures that the packets that were written to buffer 58 from a given flow (prior to enablement of rule engine 52 for the flow) are looped back to dedicated buffer 56 in order, before preprocessing and writing any data packets received subsequently from network 30.
- These subsequent data packets are written to dedicated buffer 56 only after delivery of all the data packets held in temporary buffer 58 for this flow.
- If further packets in the flow reach NIC 28 before temporary buffer 58 has been emptied, steering logic 50 will direct these packets, as well, to temporary buffer 58, in order to ensure that proper ordering is maintained in writing packets to buffer 56.
- FIGS. 2 and 3 schematically illustrate a method for offload to NIC 28 of a preprocessing function applied to a certain packet flow, in accordance with an embodiment of the invention.
- FIG. 2 is a ladder diagram illustrating communications exchanged among the elements of computer 20 in carrying out this method.
- FIG. 3 is a block diagram showing stages in the processing and delivery of packets to destination virtual machine 34 in accordance with the method.
- Although this example relates to a single flow for the sake of simplicity, in general NIC 28 receives and handles many flows concurrently, and may offload the preprocessing of multiple flows in parallel according to respective rules, which may differ from flow to flow.
- Although this example is described, for the sake of concreteness and clarity, with reference to the specific hardware architecture of NIC 28 that is shown in FIG. 1, the principles of the method of FIGS. 2 and 3 may similarly be implemented by other suitable sorts of network interface devices, as are known in the art. All such alternative implementations are considered to be within the scope of the present invention.
- Initially, VMM 38 performs the required preprocessing of packets in the flow shown in FIGS. 2 and 3. Therefore, upon receiving incoming packets 70, NIC 28 simply forwards corresponding packet data 72 to VMM 38. VMM 38 preprocesses each packet in accordance with the applicable rule, and then delivers preprocessed packets 74 to the destination virtual machine 34, for example by placing the packet data in the appropriate dedicated buffer 56 in memory 24. This default procedure is applied to packets #1, #2 and #3 in FIGS. 2 and 3.
- After preprocessing packet #1, however, VMM 38 concludes that preprocessing of this flow should be offloaded to NIC 28. This decision can be based, for example, on a count or data rate of incoming packets in the flow, or on any other applicable criteria. Upon making the decision, VMM 38 sends an “update rule” instruction 76 to rule engine 52 in NIC 28, instructing the rule engine to offload preprocessing of the data packets in this flow in accordance with a specified rule. In the meanwhile, until instruction 76 is received in NIC 28, steering logic 50 continues to direct packet data 72 (corresponding to packets #2 and #3) to VMM 38, and VMM 38 continues to preprocess and deliver packets 74 to buffer 56.
- In response to instruction 76, packet processing circuitry 44 verifies that all of the data packets already received through network interface 42 in this flow have been passed to VMM 38, and then submits an acknowledgment 78 to the VMM to confirm that instruction 76 was received.
- Having received instruction 76, steering logic 50 begins to direct packet data 82 from incoming packets 70 to temporary buffer 58, as illustrated by packets #4 and #5. Steering logic 50 continues handling the flow in this manner until VMM 38 has received acknowledgment 78 and, in response, sends a “rule enable” instruction 80 to rule engine 52.
- Upon receiving instruction 80, packet processing circuitry 44 begins looping back packet data 84 from temporary buffer 58, through transmit pipe 48, to steering logic 50. Steering logic 50 now passes the looped-back packets to rule engine 52 for preprocessing in accordance with the rule specified by instruction 76, and then directs corresponding preprocessed packets 86 to dedicated buffer 56 of the destination virtual machine 34. Steering logic 50 passes subsequent data packets 70 in the flow, such as packets #6 and #7, to rule engine 52 only after delivery to the virtual machine of all the data packets belonging to this flow in temporary buffer 58. In the pictured example, packet #6 reaches NIC 28 from network 30 before packet #5 has been emptied from temporary buffer 58. Therefore, packet #6 is also written to and then looped back from temporary buffer 58 after packet #5. Packet #7 and subsequent packets in the flow, however, are preprocessed by rule engine 52 and written by scatter engine 54 directly to dedicated buffer 56.
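The ordering behavior in this example can be replayed in a few lines. This sketch models the temporary buffer as a simple list and the loopback drain as a one-packet-at-a-time operation; it illustrates the ordering rule only, not the NIC hardware, and the function names are invented here.

```python
# Replay of the example above, after the "rule enable" instruction:
# packets #4 and #5 are already in the temporary buffer, and any packet
# arriving before the buffer is empty joins it, so that delivery order
# matches arrival order.
temp, delivered = [b"#4", b"#5"], []

def receive(pkt: bytes) -> None:
    """Steer an arriving packet: buffer it while the temp buffer drains."""
    if temp:
        temp.append(pkt)      # buffer not yet empty: preserve ordering
    else:
        delivered.append(pkt)  # buffer empty: deliver directly

def drain_one() -> None:
    """Loop one buffered packet back to the VM's dedicated buffer."""
    if temp:
        delivered.append(temp.pop(0))

drain_one()               # #4 looped back to the VM's buffer
receive(b"#6")            # arrives before #5 drains, so it is buffered
drain_one(); drain_one()  # #5, then #6, looped back in order
receive(b"#7")            # buffer now empty: #7 goes directly
assert delivered == [b"#4", b"#5", b"#6", b"#7"]
```

The assertion holds precisely because packet #6 was detoured through the buffer behind #5, which is the ordering guarantee the patent's steering logic provides.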
Abstract
Description
- The present invention relates generally to computer networks, and particularly to devices and methods for interfacing between host computers and a network.
- A network interface controller (NIC) is a device that manages and transfers communications between a host computer (referred to alternatively simply as a “host”) and a network, such as a local area network or switch fabric. The NIC directs packets from the network to their destination in the computer, for example by placing the packets in a buffer of a destination application in the computer memory, and directs outgoing packets, for example sending them either to the network or to a loopback port.
- When a host computer supports multiple virtual machines (VMs), different approaches may be taken by the NIC in handling incoming and outgoing packets. In one approach, all packets are directed to a virtual machine monitor (VMM, also known as a hypervisor) running on the host, and the VMM directs the packets to the specific destination virtual machine. More recently, however, NICs have been developed with the capability of exposing multiple virtual NICs (vNICs) to software running on the host. In a model that is known as single-root I/O virtualization (SR-IOV), each VM interacts with its own corresponding vNIC, which appears to the VM to be a dedicated hardware NIC. The vNIC links the VM to other machines (virtual and/or physical) on a network, possibly including other virtual machines running on the same host. In this regard, the NIC acts as a virtual switch, connecting each of the virtual machines to a network while allowing multiple vNICs to share the same physical network port.
- A variety of NICs that support the SR-IOV model are known in the art. For example, U.S. Patent Application Publication 2014/0185616, whose disclosure is incorporated herein by reference, describes a NIC that supports multiple virtualized (tenant) networks overlaid on a data network. Upon receiving a work item submitted by a virtual machine running on a host processor, the NIC identifies the tenant network over which the virtual machine is authorized to communicate, generates a data packet containing an encapsulation header that is associated with the tenant network, and transmits the data packet over the network. The NIC may also decapsulate encapsulated data packets received from the data network and convey the decapsulated data packets to the virtual machine.
- Embodiments of the present invention that are described hereinbelow provide improved network interface devices and methods for processing packets received by a host computer from a network.
- There is therefore provided, in accordance with an embodiment of the invention, network interface apparatus, including a host interface for connection to a host processor having a memory, and a network interface, which is configured to receive over a network data packets in multiple packet flows destined for one or more virtual machines running on the host processor. Packet processing circuitry is coupled between the network interface and the host interface and is configured to pass the data packets to a virtual machine monitor (VMM) running on the host processor for preprocessing of the packets by the VMM, which delivers the preprocessed packets to the one or more virtual machines.
- The packet processing circuitry is configured to receive a first instruction to offload from the VMM preprocessing of the data packets in a specified flow in accordance with a specified rule, and responsively to the first instruction to initiate preprocessing the data packets in the specified flow by the packet processing circuitry in accordance with the specified rule while writing one or more initial data packets from the specified flow to a temporary buffer, and upon subsequently receiving a second instruction to enable the specified rule, to deliver the initial data packets from the temporary buffer, after preprocessing by the packet processing circuitry, directly to a virtual machine to which the specified flow is destined, and after delivering the preprocessed initial data packets, to continue preprocessing and delivering subsequent data packets in the specified flow to the virtual machine.
- In some embodiments, the first instruction causes the packet processing circuitry to modify headers of the data packets in the specified flow.
- In the disclosed embodiments, the packet processing circuitry is configured to deliver the initial and subsequent data packets to the virtual machine in accordance with an order in which the data packets were received from the network, such that the subsequent data packets are delivered to the virtual machine only after delivery to the virtual machine of all the data packets in the temporary buffer. In one embodiment, the packet processing circuitry is configured to write to the temporary buffer any of the subsequent data packets that are received from the network before the temporary buffer has been emptied.
- Additionally or alternatively, the packet processing circuitry in configured, in response to the first instruction, to verify that all of the data packets already received through the network interface in the specified flow have been passed to the VMM, and then to submit an acknowledgment to the VMM that the first instruction was received by the packet processing circuitry. In a disclosed embodiment, the VMM issues the second instruction upon receiving the acknowledgment.
- In some embodiments, the packet processing circuitry includes a transmit pipe, for processing outgoing packets for transmission to the network, and a receive pipe, for processing incoming data packets received from the network, and the packet processing circuitry is configured to deliver the initial preprocessed data packets from the temporary buffer to the virtual machine by loopback from the temporary buffer through the transmit pipe to the receive pipe, which writes the preprocessed data packets to another buffer in the memory that is assigned to the virtual machine.
- There is also provided, in accordance with an embodiment of the invention, a method for communication, which includes receiving in a network interface controller (NIC) over a network data packets in multiple packet flows destined for one or more virtual machines running on a host processor coupled to the NIC. The data packets are passed from the NIC to a virtual machine monitor (VMM) running on the host processor for preprocessing of the packets by the VMM, which delivers the preprocessed packets to the one or more virtual machines. The NIC receives a first instruction to offload from the VMM preprocessing of the data packets in a specified flow in accordance with a specified rule and responsively to the first instruction, initiates preprocessing the data packets in the specified flow by the NIC in accordance with the specified rule. After receiving the first instruction, the NIC writes one or more initial data packets from the NIC to a temporary buffer. Upon subsequently receiving a second instruction to enable the specified rule, the NIC delivers the initial data packets, after preprocessing by the NIC, from the temporary buffer directly to a virtual machine to which the specified flow is destined. After delivering the preprocessed initial data packets, the NIC continues to preprocess and deliver subsequent data packets in the specified flow to the virtual machine.
- The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
-
FIG. 1 is a block diagram that schematically illustrates a computer with a NIC, in accordance with an embodiment of the present invention; -
FIG. 2 is a ladder diagram that schematically illustrates a method for offload to a NIC of a processing function applied to a packet flow, in accordance with an embodiment of the invention; and -
FIG. 3 is a block diagram that schematically illustrates processing and delivery of packets to a destination virtual machine in accordance with the method shown in FIG. 2. - In many practical applications in which virtual machines run on a server, packets transmitted between a network and the virtual machines are handled, as a default, by the VMM, even when the NIC has SR-IOV and/or other offload capabilities. The VMM preprocesses the packets by applying different rules to different packet flows, typically involving modifications to the headers of the data packets, and then delivers the preprocessed packets to the destination virtual machines. (Such header modifications include, for example, rewriting the packet headers themselves and/or adding or removing header fields to encapsulate and decapsulate packets that are tunneled through the network.) In some cases, however, such as heavy flows carrying large amounts of traffic, the VMM may decide to offload these preprocessing functions to the NIC and thus reduce the processing load on the host processor.
- A flow, in the context of the present description and the claims, refers to a sequence of packets transmitted from a given source to a specified destination. The flow can be identified, for example, by the Internet Protocol (IP) 5-tuple of header fields, comprising the source and destination addresses, source and destination ports, and protocol identifier. As another example, in InfiniBand™ networks, a flow can be identified by the queue pair (QP) number in the packet transport header.
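As a concrete sketch of the IP 5-tuple identification just described, the flow key can be computed from parsed header fields. The dictionary field names here are illustrative assumptions, not a real parser's API.

```python
# Minimal sketch of flow identification by IP 5-tuple, as described above.
# The header-field names are illustrative assumptions for this example.
from typing import NamedTuple

class FiveTuple(NamedTuple):
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: int   # e.g. 6 for TCP, 17 for UDP

def flow_id(headers: dict) -> FiveTuple:
    """Build the 5-tuple key that identifies the packet's flow."""
    return FiveTuple(headers["src_addr"], headers["dst_addr"],
                     headers["src_port"], headers["dst_port"],
                     headers["protocol"])

pkt1 = {"src_addr": "192.0.2.1", "dst_addr": "198.51.100.7",
        "src_port": 49152, "dst_port": 443, "protocol": 6}
pkt2 = dict(pkt1)                      # same 5-tuple, hence same flow
assert flow_id(pkt1) == flow_id(pkt2)
```

Packets whose keys match belong to the same flow; changing any one of the five fields (for example the destination port) yields a different flow.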
- When the VMM decides to offload preprocessing of a given flow to the NIC, the virtual machine will subsequently receive and transmit packets directly via the NIC, without additional processing by the VMM. The transition from VMM-based to NIC-based preprocessing should ideally be transparent to the virtual machine and should take place without loss or delay of packets that have already been transmitted. Because of the high processing speed of the NIC, however, when the VMM initiates an offload in the middle of a given flow, the NIC may begin delivering preprocessed incoming packets in the flow to the virtual machine before the VMM has finished preprocessing and delivered the last of the packets that were received before the offload was initiated. The virtual machine will consequently receive packets out of order. It is possible to avoid this problem by instructing the sender of the incoming flow to pause transmission until the VMM has emptied its preprocessing queue, but this approach increases communication latency and degrades bandwidth. As another alternative, the virtual machines can be configured to handle out-of-order packets in software, but this solution similarly increases latency and adds to the load on the host processor.
- Embodiments of the present invention that are described herein address this problem by coordination between the VMM and the NIC, in a manner that is transparent both to the sender of the flow in question and to the virtual machine receiving the flow and avoids any degradation of communication bandwidth or latency. These embodiments use a new two-stage mechanism, in which the VMM first sends an instruction to the NIC to initiate preprocessing of the data packets in a specified flow in accordance with a specified rule. The NIC prepares to apply the rule and sends an acknowledgment to the VMM. Only after having emptied its own queue of incoming packets in the flow, however, does the VMM send a second instruction to the NIC to enable the rule, i.e., to begin passing preprocessed packets to the virtual machine to which the flow is destined.
- After receiving the first instruction, the NIC prepares to begin preprocessing the specified flow and temporarily buffers any incoming packets in the flow. After receiving the second instruction, the NIC first empties the temporary buffer and passes the buffered packets (after preprocessing) to the virtual machine. Once the temporary buffer is empty, the NIC continues preprocessing incoming packets in the specified flow, and delivers subsequent packets directly to the virtual machine. Thus, all packets are preprocessed and delivered to the virtual machine in the proper order.
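The per-packet steering decision during this transition can be summarized as a small predicate, sketched here under the assumption that the NIC tracks, per flow, whether the rule has been initiated, whether it has been enabled, and whether the temporary buffer is empty. The three destination labels are illustrative names, not terms from the disclosure.

```python
# Sketch of the per-packet steering decision during the offload transition,
# following the ordering rule described above. Names are illustrative.

def steer(rule_initiated: bool, rule_enabled: bool,
          temp_buffer_empty: bool) -> str:
    """Decide where an incoming packet of the flow should be directed."""
    if not rule_initiated:
        return "vmm"              # default path: VMM preprocesses
    if not rule_enabled:
        return "temp_buffer"      # staged until the enable instruction
    if not temp_buffer_empty:
        return "temp_buffer"      # keep order behind still-buffered packets
    return "vm_buffer"            # preprocess and deliver directly

assert steer(False, False, True) == "vmm"
assert steer(True, False, True) == "temp_buffer"
assert steer(True, True, False) == "temp_buffer"  # late arrival, still staged
assert steer(True, True, True) == "vm_buffer"
```

The third case is the subtle one: even after the rule is enabled, a newly arrived packet must join the temporary buffer if that buffer is not yet drained, so that it cannot overtake earlier packets.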
-
FIG. 1 is a block diagram that schematically illustrates a computer 20 with a NIC 28, in accordance with an embodiment of the present invention. Computer 20 comprises a host processor in the form of a central processing unit (CPU) 22, with a memory 24, typically comprising random-access memory (RAM). NIC 28 is connected to CPU 22 and memory 24 via a bus 26, such as a Peripheral Component Interconnect Express® (PCIe®) bus, as is known in the art. NIC 28 couples computer 20 to a packet network 30, such as an Ethernet, IP or InfiniBand network. -
Computer 20 supports a virtual machine environment, in which multiple virtual machines 34 (labeled VM1, VM2, VM3 in FIG. 1) may run on CPU 22. The software running on CPU 22, including both operating system and application programs, may be downloaded to the CPU in electronic form, over a network for example. Additionally or alternatively, the software may be stored on tangible, non-transitory computer-readable media, such as optical, magnetic or electronic memory media, which may be embodied in memory 24. -
CPU 22 operates a native domain 32, with a host operating system 36, which may support host user applications and other native processes. In addition, the CPU concurrently runs one or more virtual machines 34, as noted above, each with its own guest operating system and guest user applications (omitted for the sake of simplicity). VMM 38 in native domain 32 interacts with the kernels of the guest operating systems of virtual machines 34 in a manner that emulates the host processor and allows the virtual machines to share the resources of CPU 22. A wide range of virtual machine software of this sort is available commercially, and further description is beyond the scope of the present disclosure. The added capabilities of VMM 38, in terms of initiating and enabling offload of rules to NIC 28, are described further hereinbelow, particularly with reference to FIGS. 2 and 3. -
NIC 28 comprises a host interface 40, for connection to CPU 22 and memory 24 via bus 26, and a network interface 42, comprising one or more ports connected to network 30. Network interface 42 transmits and receives data packets in multiple packet flows from and to virtual machines 34 running on the CPU 22. The packets are processed by packet processing circuitry 44, which is coupled between host interface 40 and network interface 42 and comprises a receive (Rx) pipe 46, for processing incoming data packets received from network 30, and a transmit (Tx) pipe 48, for processing outgoing packets for transmission to the network. The description that follows relates primarily to preprocessing rules applied by Rx pipe 46 to incoming flows that are received from network 30. When NIC 28 receives packets in incoming flows for which VMM 38 has not offloaded preprocessing functions to the NIC, Rx pipe 46 passes the packets to VMM 38, which preprocesses the packets in accordance with applicable rules and delivers the preprocessed packets in each flow to the destination virtual machine 34. - In
Rx pipe 46, steering logic 50 identifies, for each incoming packet, the flow to which the packet belongs and the process running on CPU 22 to which the packet is to be delivered. In order to make this decision, steering logic 50 extracts a flow identifier from the packet, typically based on one or more packet header fields, such as the IP 5-tuple and/or a transport-layer value. Steering logic 50 looks up the flow in a database (not shown), which also indicates whether any preprocessing rules have been initiated and enabled on NIC 28 by VMM 38. If so, a rule engine 52 preprocesses the packets in the flow, for example by modifying the packet headers (changing and/or removing or adding specified header fields). For flows for which preprocessing is not enabled, the incoming packets may bypass rule engine 52. - A
scatter engine 54 in receive pipe 46 then writes the packets to respective buffers in memory 24 by direct memory access (DMA) over bus 26. For flows that are not preprocessed by rule engine 52, scatter engine 54 delivers the packets to VMM 38 for preprocessing and delivery to the appropriate destination. When rule engine 52 has preprocessed a packet in a particular flow, scatter engine 54 delivers the packet directly to the destination virtual machine 34 by writing the packet to a dedicated buffer 56. Receive pipe 46 notifies the virtual machine that the packet is available for reading, for example by placing a completion report in a completion queue that is read by the virtual machine. - On the other hand, when receive
pipe 46 has received an instruction from VMM 38 to initiate preprocessing the data packets in a specified flow, but has not yet received a second instruction to enable the specified preprocessing rule, scatter engine 54 may write one or more initial data packets from the specified flow to a temporary buffer 58, typically without preprocessing the packets. Buffer 58 may conveniently be allocated in memory 24, as shown in FIG. 1. In an alternative embodiment (not shown in the figures), the NIC may hold buffer 58 in a separate memory that is dedicated to the NIC. Upon subsequently receiving the instruction to enable preprocessing, receive pipe 46 will deliver the initial data packets, after appropriate preprocessing by rule engine 52, from temporary buffer 58 to the appropriate dedicated buffer 56 of the destination virtual machine 34. Receive pipe 46 delivers the packets to buffer 56 in the order in which the packets were received from network 30. Thereafter, rule engine 52 will continue preprocessing further incoming packets in the specified flow, and scatter engine 54 will deliver these preprocessed packets in the proper order to buffer 56. - To ensure that proper packet handling and ordering are maintained, the initial data packets that were stored in
temporary buffer 58 can be delivered to destination virtual machine 34 by loopback from temporary buffer 58 through transmit pipe 48 to receive pipe 46, which then writes the preprocessed data packets to the dedicated buffer 56 that is assigned to the virtual machine. Tx pipe 48 comprises scheduling logic 60, which arbitrates among transmission requests and can be configured to give priority to loopback requests from buffer 58. A gather engine 62 reads the packets that are to be transmitted from memory 24, and port selection logic 64 selects the port through which each packet is to be transmitted. Outgoing packets are transmitted via network interface 42 to network 30. - Loopback packets, however, including packets from
temporary buffer 58, are returned to steering logic 50 for delivery to the appropriate destination process. Steering logic 50 thus ensures that the packets that were written to buffer 58 from a given flow (prior to enablement of rule engine 52 for the flow) are looped back to dedicated buffer 56 in order, before preprocessing and writing any data packets received subsequently from network 30. These subsequent data packets are written to dedicated buffer 56 only after delivery of all the data packets held in temporary buffer 58 for this flow. If any of these subsequent data packets are received from network 30 before temporary buffer 58 has been emptied (even if the rule for this flow has already been enabled), steering logic 50 will direct these packets, as well, to temporary buffer 58, in order to ensure that proper ordering is maintained in writing packets to buffer 56. - Reference is now made to
FIGS. 2 and 3, which schematically illustrate a method for offload to NIC 28 of a preprocessing function applied to a certain packet flow, in accordance with an embodiment of the invention. FIG. 2 is a ladder diagram illustrating communications exchanged among the elements of computer 20 in carrying out this method, while FIG. 3 is a block diagram showing stages in the processing and delivery of packets to destination virtual machine 34 in accordance with the method. Although this example relates to a single flow for the sake of simplicity, in general NIC 28 receives and handles many flows concurrently, and may offload the preprocessing of multiple flows in parallel according to respective rules, which may differ from flow to flow. - Furthermore, although this example is described, for the sake of concreteness and clarity, with reference to the specific hardware architecture of
NIC 28 that is shown in FIG. 1, the principles of the method of FIGS. 2 and 3 may similarly be implemented by other suitable sorts of network interface devices, as are known in the art. All such alternative implementations are considered to be within the scope of the present invention. - Initially, as a default,
VMM 38 performs the required preprocessing of packets in the flow shown in FIGS. 2 and 3. Therefore, upon receiving incoming packets 70, NIC 28 simply forwards corresponding packet data 72 to VMM 38. VMM 38 preprocesses the packets in accordance with the applicable rule, and then delivers preprocessed packets 74 to the destination virtual machine 34, for example by placing the packet data in the appropriate dedicated buffer 56 in memory 24. This default procedure is applied to packets #1, #2 and #3 in FIGS. 2 and 3. - After preprocessing
packet #1, however, VMM 38 concludes that preprocessing of this flow should be offloaded to NIC 28. This decision can be based, for example, on a count or data rate of incoming packets in the flow, or on any other applicable criteria. Upon making the decision, VMM 38 sends an "update rule" instruction 76 to rule engine 52 in NIC 28, instructing the rule engine to offload preprocessing of the data packets in this flow in accordance with a specified rule. In the meanwhile, until instruction 76 is received in NIC 28, steering logic 50 continues to direct packet data 72 (corresponding to packets #2 and #3) to VMM 38, and VMM 38 continues to preprocess and deliver packets 74 to buffer 56. - In response to
instruction 76, packet processing circuitry 44 verifies that all of the data packets already received through network interface 42 in this flow have been passed to VMM 38, and then submits an acknowledgment 78 to the VMM to confirm that instruction 76 was received. Following submission of acknowledgment 78, steering logic 50 begins to direct packet data 82 from incoming packets 70 to temporary buffer 58, as illustrated by packets #4 and #5. Steering logic 50 continues handling the flow in this manner until VMM 38 has received acknowledgment 78 and, in response, sends a "rule enable" instruction 80 to rule engine 52. - Upon receiving
instruction 80, packet processing circuitry 44 begins looping back packet data 84 from temporary buffer 58, through transmit pipe 48, to steering logic 50. Steering logic 50 now passes the looped-back packets to rule engine 52 for preprocessing in accordance with the rule specified by instruction 76, and then directs corresponding preprocessed packets 86 to dedicated buffer 56 of the destination virtual machine 34. Steering logic 50 passes subsequent data packets 70 in the flow, such as packets #6 and #7, to rule engine 52 only after delivery to the virtual machine of all the data packets belonging to this flow in temporary buffer 58. In the pictured example, packet #6 reaches NIC 28 from network 30 before packet #5 has been emptied from temporary buffer 58. Therefore, packet #6 is also written to and then looped back from temporary buffer 58 after packet #5. Packet #7 and subsequent packets in the flow, however, are preprocessed by rule engine 52 and written by scatter engine 54 directly to dedicated buffer 56. - Thus, all packets in the flow are delivered to
dedicated buffer 56 in the proper order, without requiring virtual machine 34 to be aware of the offload in mid-flow, and without exerting any back-pressure on network 30. - It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
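The ladder sequence of packets #1 through #7 traced in the description above can be replayed as a short consistency check. This is purely illustrative; the plain lists stand in for the VMM path, temporary buffer 58, and dedicated buffer 56 of FIG. 3, and the comments map each step to the corresponding instruction.

```python
# Replay of the packet #1-#7 example from the description above, checking
# that the dedicated buffer receives all packets in order. The list names
# are illustrative stand-ins for the buffers shown in FIG. 3.

vm_buffer, temp_buffer = [], []

for pkt in (1, 2, 3):          # default path: VMM preprocesses and delivers
    vm_buffer.append(pkt)

# "update rule" (instruction 76) sent and acknowledged; #4 and #5 are
# staged in the temporary buffer while the VMM drains its own queue.
temp_buffer += [4, 5]

# "rule enable" (instruction 80) received, but #6 arrives before the
# buffer is drained, so it too is staged, keeping its place behind #5.
temp_buffer.append(6)

# Loopback drains the temporary buffer, in order, to the VM's buffer.
vm_buffer += temp_buffer
temp_buffer.clear()

vm_buffer.append(7)            # #7 onward is delivered directly

assert vm_buffer == [1, 2, 3, 4, 5, 6, 7]   # order preserved end to end
```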
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/701,459 US10382350B2 (en) | 2017-09-12 | 2017-09-12 | Maintaining packet order in offload of packet processing functions |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/701,459 US10382350B2 (en) | 2017-09-12 | 2017-09-12 | Maintaining packet order in offload of packet processing functions |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190081904A1 true US20190081904A1 (en) | 2019-03-14 |
US10382350B2 US10382350B2 (en) | 2019-08-13 |
Family
ID=65632191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/701,459 Active US10382350B2 (en) | 2017-09-12 | 2017-09-12 | Maintaining packet order in offload of packet processing functions |
Country Status (1)
Country | Link |
---|---|
US (1) | US10382350B2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200396320A1 (en) * | 2018-03-01 | 2020-12-17 | Huawei Technologies Co, Ltd. | Packet-programmable statelets |
US20210051118A1 (en) * | 2018-08-20 | 2021-02-18 | Huawei Technologies Co., Ltd. | Packet processing method and related device |
US10958627B2 (en) * | 2017-12-14 | 2021-03-23 | Mellanox Technologies, Ltd. | Offloading communication security operations to a network interface controller |
US20220045844A1 (en) * | 2020-08-05 | 2022-02-10 | Mellanox Technologies, Ltd. | Cryptographic Data Communication Apparatus |
US20230097439A1 (en) * | 2020-08-05 | 2023-03-30 | Mellanox Technologies, Ltd. | Cryptographic Data Communication Apparatus |
US20230412496A1 (en) * | 2022-06-21 | 2023-12-21 | Oracle International Corporation | Geometric based flow programming |
US11909656B1 (en) * | 2023-01-17 | 2024-02-20 | Nokia Solutions And Networks Oy | In-network decision for end-server-based network function acceleration |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11005771B2 (en) | 2017-10-16 | 2021-05-11 | Mellanox Technologies, Ltd. | Computational accelerator for packet payload operations |
US11502948B2 (en) | 2017-10-16 | 2022-11-15 | Mellanox Technologies, Ltd. | Computational accelerator for storage operations |
US10841243B2 (en) * | 2017-11-08 | 2020-11-17 | Mellanox Technologies, Ltd. | NIC with programmable pipeline |
US10824469B2 (en) | 2018-11-28 | 2020-11-03 | Mellanox Technologies, Ltd. | Reordering avoidance for flows during transition between slow-path handling and fast-path handling |
US11184439B2 (en) | 2019-04-01 | 2021-11-23 | Mellanox Technologies, Ltd. | Communication with accelerator via RDMA-based network adapter |
US11934658B2 (en) | 2021-03-25 | 2024-03-19 | Mellanox Technologies, Ltd. | Enhanced storage protocol emulation in a peripheral device |
US12007921B2 (en) | 2022-11-02 | 2024-06-11 | Mellanox Technologies, Ltd. | Programmable user-defined peripheral-bus device implementation using data-plane accelerator (DPA) |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5901496A (en) | 1996-12-20 | 1999-05-11 | American Cyanamid Company | Termiticide bait tube for in ground application |
US7600131B1 (en) | 1999-07-08 | 2009-10-06 | Broadcom Corporation | Distributed processing in a cryptography acceleration chip |
US9444785B2 (en) | 2000-06-23 | 2016-09-13 | Cloudshield Technologies, Inc. | Transparent provisioning of network access to an application |
US7269171B2 (en) | 2002-09-24 | 2007-09-11 | Sun Microsystems, Inc. | Multi-data receive processing according to a data communication protocol |
US20050102497A1 (en) | 2002-12-05 | 2005-05-12 | Buer Mark L. | Security processor mirroring |
US7290134B2 (en) | 2002-12-31 | 2007-10-30 | Broadcom Corporation | Encapsulation mechanism for packet processing |
US7783880B2 (en) | 2004-11-12 | 2010-08-24 | Microsoft Corporation | Method and apparatus for secure internet protocol (IPSEC) offloading with integrated host protocol stack management |
US7657659B1 (en) * | 2006-11-30 | 2010-02-02 | Vmware, Inc. | Partial copying of data to transmit buffer for virtual network device |
US8006297B2 (en) | 2007-04-25 | 2011-08-23 | Oracle America, Inc. | Method and system for combined security protocol and packet filter offload and onload |
US20090086736A1 (en) | 2007-09-28 | 2009-04-02 | Annie Foong | Notification of out of order packets |
US8103785B2 (en) | 2007-12-03 | 2012-01-24 | Seafire Micros, Inc. | Network acceleration techniques |
US8572251B2 (en) | 2008-11-26 | 2013-10-29 | Microsoft Corporation | Hardware acceleration for remote desktop protocol |
US20100228962A1 (en) | 2009-03-09 | 2010-09-09 | Microsoft Corporation | Offloading cryptographic protection processing |
EP2306322A1 (en) | 2009-09-30 | 2011-04-06 | Alcatel Lucent | Method for processing data packets in flow-aware network nodes |
EP2577936A2 (en) | 2010-05-28 | 2013-04-10 | Lawrence A. Laurich | Accelerator system for use with secure data storage |
WO2012011218A1 (en) | 2010-07-21 | 2012-01-26 | Nec Corporation | Computer system and offloading method in computer system |
US8996644B2 (en) | 2010-12-09 | 2015-03-31 | Solarflare Communications, Inc. | Encapsulated accelerator |
US9003053B2 (en) | 2011-09-22 | 2015-04-07 | Solarflare Communications, Inc. | Message acceleration |
US20130318269A1 (en) | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Processing structured and unstructured data using offload processors |
US8964554B2 (en) | 2012-06-07 | 2015-02-24 | Broadcom Corporation | Tunnel acceleration for wireless access points |
US10341263B2 (en) * | 2012-12-10 | 2019-07-02 | University Of Central Florida Research Foundation, Inc. | System and method for routing network frames between virtual machines |
US9008097B2 (en) * | 2012-12-31 | 2015-04-14 | Mellanox Technologies Ltd. | Network interface controller supporting network virtualization |
JP2015076643A (en) * | 2013-10-04 | 2015-04-20 | 富士通株式会社 | Control program, control device, and control method |
IL238690B (en) | 2015-05-07 | 2019-07-31 | Mellanox Technologies Ltd | Network-based computational accelerator |
US10152441B2 (en) | 2015-05-18 | 2018-12-11 | Mellanox Technologies, Ltd. | Host bus access by add-on devices via a network interface controller |
US20160378529A1 (en) * | 2015-06-29 | 2016-12-29 | Fortinet, Inc. | Utm integrated hypervisor for virtual machines |
US10318737B2 (en) * | 2016-06-30 | 2019-06-11 | Amazon Technologies, Inc. | Secure booting of virtualization managers |
WO2018023499A1 (en) * | 2016-08-03 | 2018-02-08 | 华为技术有限公司 | Network interface card, computer device and data packet processing method |
US10250496B2 (en) * | 2017-01-30 | 2019-04-02 | International Business Machines Corporation | Router based maximum transmission unit and data frame optimization for virtualized environments |
US10402341B2 (en) * | 2017-05-10 | 2019-09-03 | Red Hat Israel, Ltd. | Kernel-assisted inter-process data transfer |
-
2017
- 2017-09-12 US US15/701,459 patent/US10382350B2/en active Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10958627B2 (en) * | 2017-12-14 | 2021-03-23 | Mellanox Technologies, Ltd. | Offloading communication security operations to a network interface controller |
US12047477B2 (en) * | 2018-03-01 | 2024-07-23 | Huawei Technologies Co., Ltd. | Packet-programmable statelets |
US20200396320A1 (en) * | 2018-03-01 | 2020-12-17 | Huawei Technologies Co, Ltd. | Packet-programmable statelets |
US11616738B2 (en) * | 2018-08-20 | 2023-03-28 | Huawei Technologies Co., Ltd. | Packet processing method and related device |
US20210051118A1 (en) * | 2018-08-20 | 2021-02-18 | Huawei Technologies Co., Ltd. | Packet processing method and related device |
US20230097439A1 (en) * | 2020-08-05 | 2023-03-30 | Mellanox Technologies, Ltd. | Cryptographic Data Communication Apparatus |
US11558175B2 (en) * | 2020-08-05 | 2023-01-17 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US20230107406A1 (en) * | 2020-08-05 | 2023-04-06 | Mellanox Technologies, Ltd. | Cryptographic Data Communication Apparatus |
US11909856B2 (en) * | 2020-08-05 | 2024-02-20 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US11909855B2 (en) * | 2020-08-05 | 2024-02-20 | Mellanox Technologies, Ltd. | Cryptographic data communication apparatus |
US20220045844A1 (en) * | 2020-08-05 | 2022-02-10 | Mellanox Technologies, Ltd. | Cryptographic Data Communication Apparatus |
US20230412496A1 (en) * | 2022-06-21 | 2023-12-21 | Oracle International Corporation | Geometric based flow programming |
US11909656B1 (en) * | 2023-01-17 | 2024-02-20 | Nokia Solutions And Networks Oy | In-network decision for end-server-based network function acceleration |
Also Published As
Publication number | Publication date |
---|---|
US10382350B2 (en) | 2019-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10382350B2 (en) | Maintaining packet order in offload of packet processing functions | |
US10454991B2 (en) | NIC with switching functionality between network ports | |
US10567275B2 (en) | Network interface card switching for virtual networks | |
US10110518B2 (en) | Handling transport layer operations received out of order | |
US8514890B2 (en) | Method for switching traffic between virtual machines | |
US9460289B2 (en) | Securing a virtual environment | |
EP3503507B1 (en) | Network interface device | |
US11394664B2 (en) | Network interface device | |
US20240340197A1 (en) | Cross network bridging | |
US11593140B2 (en) | Smart network interface card for smart I/O | |
US20160266925A1 (en) | Data forwarding | |
US9160659B2 (en) | Paravirtualized IP over infiniband bridging | |
EP3563534B1 (en) | Transferring packets between virtual machines via a direct memory access device | |
CN115486045B (en) | Handling user traffic in virtualized networks | |
CN109983438B (en) | Use of Direct Memory Access (DMA) re-establishment mapping to accelerate paravirtualized network interfaces | |
US11669468B2 (en) | Interconnect module for smart I/O | |
US10541842B2 (en) | Methods and apparatus for enhancing virtual switch capabilities in a direct-access configured network interface card | |
Freitas et al. | A survey on accelerating technologies for fast network packet processing in Linux environments | |
US20200336573A1 (en) | Network switching with co-resident data-plane and network interface controllers | |
US9866657B2 (en) | Network switching with layer 2 switch coupled co-resident data-plane and network interface controllers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MELLANOX TECHNOLOGIES, LTD., ISRAEL Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOHRER, DROR;BLOCH, NOAM;NARKIS, LIOR;AND OTHERS;SIGNING DATES FROM 20170903 TO 20170909;REEL/FRAME:043549/0862 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |