US20170308394A1 - Networking stack of virtualization software configured to support latency sensitive virtual machines - Google Patents

Networking stack of virtualization software configured to support latency sensitive virtual machines

Info

Publication number
US20170308394A1
Authority
US
United States
Prior art keywords
data packet
packet
virtual machine
virtual
packets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/645,469
Other versions
US10860356B2 (en
Inventor
Haoqiang Zheng
Lenin SINGARAVELU
Shilpi Agarwal
Daniel Michael Hecht
Garrett Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VMware LLC
Original Assignee
VMware LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VMware LLC filed Critical VMware LLC
Priority to US15/645,469 priority Critical patent/US10860356B2/en
Publication of US20170308394A1 publication Critical patent/US20170308394A1/en
Assigned to VMWARE, INC. reassignment VMWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SMITH, GARRETT, HECHT, DANIEL MICHAEL, AGARWAL, SHILPI, SINGARAVELU, LENIN, ZHENG, HAOQIANG
Application granted granted Critical
Publication of US10860356B2 publication Critical patent/US10860356B2/en
Assigned to VMware LLC reassignment VMware LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VMWARE, INC.
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/24Handling requests for interconnection or transfer for access to input/output bus using interrupt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4812Task transfer initiation or dispatching by interrupt, e.g. masked
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4887Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5033Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/08Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876Network utilisation, e.g. volume of load or congestion level
    • H04L43/0894Packet rate
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/16Threshold monitoring
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/50Queue scheduling
    • H04L47/56Queue scheduling implementing delay-aware scheduling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/70Admission control; Resource allocation
    • H04L47/80Actions related to the user profile or the type of traffic
    • H04L47/801Real time traffic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45575Starting, stopping, suspending or resuming virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45591Monitoring or debugging support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5017Task decomposition

Definitions

  • Latency sensitive applications are, typically, highly susceptible to execution delays and jitter (i.e., unpredictability) introduced by the computing environment in which these applications run.
  • Examples of latency sensitive applications include financial trading systems, which usually require split-second response time when performing functions such as pricing securities or executing and settling trades.
  • VNIC: virtual network interface controller
  • PNIC: physical network interface controller
  • VNIC-based communication requires transmitted and received packets to be processed by layers of networking software not required for packets that are directly transmitted and received over a PNIC.
  • data packets that are transmitted by a virtualized application are often transmitted first to a VNIC. Then, from the VNIC, the packets are passed to software modules executing in a hypervisor. Once the packets are processed by the hypervisor, they are then transmitted from the hypervisor to the PNIC of the host computer for subsequent delivery over the network.
  • a similar, although reverse, flow is employed for data packets that are to be received by the virtualized application. Each step in the flow entails processing of the data packets and, therefore, introduces latency.
  • VNICs are often configured to queue (or coalesce interrupts corresponding to) data packets before passing the packets to the hypervisor. While packet queueing minimizes the number of kernel calls to the hypervisor to transmit the packets, latency sensitive virtualized applications that require almost instantaneous packet transmission (such as, for example, telecommunications applications) suffer from having packets queued at a VNIC.
  • VNICs are also configured to consolidate inbound data packets using a scheme known as large receive offload (or LRO).
  • TCP: Transmission Control Protocol
  • TCP packets that are received at a VNIC are consolidated into larger TCP packets before being sent from the VNIC to the virtualized application. This results in fewer TCP acknowledgments being sent from the virtualized application to the transmitter of the TCP packets.
  • TCP packets can experience transmission delay.
  • a PNIC for a host computer may be configured to queue data packets that it receives. As is the case with the queuing of data packets at a VNIC, queuing data packets at a PNIC often introduces unacceptable delays for latency sensitive virtualized applications.
  • a method of transmitting and receiving data packets to and from a container executing in a host computer is provided, the host computer having a plurality of containers executing therein, and where the host computer connects to a network through a physical NIC.
  • the method comprises the steps of detecting a packet handling interrupt upon receiving a first data packet that is associated with the container, and determining whether the container is latency sensitive.
  • the method further comprises the step of processing the packet handling interrupt if the container is latency sensitive.
  • the method further comprises, if the container is not latency sensitive, then queueing the first data packet and delaying processing of the packet handling interrupt.
  • Also provided are a non-transitory computer-readable medium that includes instructions that, when executed, enable a host computer to implement one or more aspects of the above method, as well as a computing system that includes a host computer, a physical NIC, and a virtual NIC that is configured to implement one or more aspects of the above method.
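The method summarized above reduces to a single branch on whether the container is latency sensitive. The C sketch below illustrates that branch; the types and helper names (struct container, vnic_process_now, vnic_enqueue) are hypothetical stand-ins for hypervisor internals, not an API defined by the patent or by VMware.

```c
#include <stdbool.h>
#include <stdio.h>

struct packet { int id; };

struct container {
    const char *name;
    bool latency_sensitive;   /* taken from the admin-defined latency sensitivity table */
};

/* Stand-in for immediate processing of the packet handling interrupt. */
static void vnic_process_now(struct container *c, struct packet *p)
{
    printf("%s: interrupt processed immediately for packet %d\n", c->name, p->id);
}

/* Stand-in for queuing the packet and deferring the interrupt. */
static void vnic_enqueue(struct container *c, struct packet *p)
{
    printf("%s: packet %d queued; interrupt processing delayed\n", c->name, p->id);
}

/* Invoked when a packet handling interrupt is detected for a packet
 * associated with a container (a VM or an OS-less container). */
static void handle_packet_interrupt(struct container *c, struct packet *p)
{
    if (c->latency_sensitive)
        vnic_process_now(c, p);   /* latency sensitive: process right away */
    else
        vnic_enqueue(c, p);       /* not latency sensitive: queue and delay */
}

int main(void)
{
    struct container batch   = { "VM-110-1", false };
    struct container trading = { "VM-110-2", true };
    struct packet p1 = { 1 }, p2 = { 2 };

    handle_packet_interrupt(&batch, &p1);
    handle_packet_interrupt(&trading, &p2);
    return 0;
}
```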
  • FIG. 1 is a conceptual diagram depicting a virtualized computing environment in which one or more embodiments may be implemented.
  • FIG. 2 is a block diagram that depicts a table for storing latency sensitivity information, according to embodiments.
  • FIG. 3 is a conceptual diagram that illustrates disabling of data packet queuing in a VNIC, according to embodiments.
  • FIG. 4 is a conceptual diagram that illustrates disabling of LRO in a VNIC, according to embodiments.
  • FIG. 5 is a conceptual diagram that depicts changing the interrupt rate of a multi-queue PNIC, according to embodiments.
  • FIG. 6 is a flow diagram that illustrates a method for transmitting data packets by a VNIC for a latency sensitive virtual machine, according to embodiments.
  • FIG. 1 depicts a virtualized computing environment in which one or more embodiments may be implemented.
  • the computing environment includes a host computer 100 and a virtual machine (VM) management server 150 .
  • VM management server 150 communicates with host computer 100 over a local connection or, alternatively, over a remote network connection.
  • Host computer 100 is, in embodiments, a general-purpose computer that supports the execution of an operating system and one or more application programs therein. In order to execute the various components that comprise a virtualized computing platform, host computer 100 is typically a server class computer. However, host computer 100 may also be a desktop or laptop computer.
  • execution space 120 supports the execution of user-level (i.e., non-kernel level) programs.
  • User-level programs are non-privileged, meaning that they cannot perform certain privileged functions, such as executing privileged instructions or accessing certain protected regions of system memory.
  • Among the programs that execution space 120 supports are virtual machines.
  • Virtual machines are software implementations of physical computing devices and execute programs much like a physical computer.
  • a virtual machine implements, in software, a computing platform that supports the execution of software applications under the control of a guest operating system (OS).
  • virtual machines typically emulate a particular computing architecture.
  • execution space 120 includes VMs 110 1 - 110 N .
  • Each VM 110 shown supports the execution of one or more applications 111 , each of which executes under the control of a particular guest OS 112 .
  • Applications 111 are user-level (non-kernel) programs, such as, for example, word processors or spreadsheet programs.
  • Each of the depicted guest OS' 112 may be one of the well-known commodity operating systems, such as any of the versions of the Windows® operating system from Microsoft Corp., the Linux® operating system, or MacOS® X from Apple, Inc. It should be noted that the applications and guest OS' may vary from one VM to another. Thus, applications 111 1 in VM 110 1 may include Microsoft's Word® and Excel® applications running under the control of Windows® 7 as guest OS 112 1 . By contrast, applications 111 N in VM 110 N may include the Safari® web browser running under the control of MacOS® X as guest OS 112 N . As shown in FIG. 1 , each of VMs 110 1 - 110 N communicates with a hypervisor component, referred to herein as hypervisor 130 .
  • Hypervisor 130 provides the operating system platform for running processes on computer host 100 .
  • Hypervisor 130 controls all hardware devices within computer host 100 and manages system resources for all applications running therein.
  • the core functions that hypervisor 130 provides are console services, file system services, device drivers, resource scheduling, and network data transmission.
  • hypervisor 130 implements software components that provide for the instantiation of one or more virtual machines on the host computer.
  • hypervisor 130 includes virtual machine monitors (VMMs) 131 1 - 131 N .
  • VMM 131 corresponds to an executing VM 110 .
  • VMM 131 1 corresponds to VM 110 1
  • VMM 131 2 corresponds to VM 110 2
  • Each VMM 131 is a software layer that provides a virtual hardware platform to the guest OS for the corresponding virtual machine. It is through a particular VMM 131 that a corresponding VM accesses services provided by the kernel component of hypervisor 130 (shown in FIG. 1 as kernel 136 ).
  • the functions carried out by kernel 136 are memory management, providing networking and storage stacks, and process scheduling.
  • Each VMM 131 in FIG. 1 implements a virtual hardware platform for the corresponding VM 110 .
  • the components of the implemented virtual hardware platform are one or more VNICs 125 .
  • VMM 131 1 implements VNIC 125 1
  • VMM 131 2 implements VNIC 125 2
  • Each VNIC 125 appears to be a physical network adapter (i.e., a physical network interface controller, or PNIC) from the standpoint of the applications 111 and the guest OS 112 that run in the corresponding VM 110 .
  • a virtualized guest operating system that runs within a virtual machine may transmit and receive data packets in the same way that an operating system that runs directly on a computer host (i.e., in a non-virtualized manner) transmits and receives data packets using PNICs.
  • each VNIC 125 receives data packets from a source application that are to be transmitted over a network via one or more PNICs (which will be described in further detail below) of computer host 100 , or delivers data packets that are received over the network via a PNIC of computer host 100 to a destination application.
  • hypervisor 130 may transmit data packets between virtual machines that execute on computer host 100 without transmitting those data packets over the network (i.e., via any of the PNICs of computer host 100 ).
  • kernel 136 serves as a liaison between VMs 110 and the physical hardware of computer host 100 .
  • Kernel 136 is a central operating system component, and executes directly on host 100 .
  • kernel 136 allocates memory, schedules access to physical CPUs, and manages access to physical hardware devices connected to computer host 100 .
  • kernel 136 implements a virtual switch 135 .
  • Virtual switch 135 enables virtual machines executing on computer host 100 to communicate with each other using the same protocols as physical switches.
  • Virtual switch 135 emulates a physical network switch by allowing virtual machines to connect to one or more ports (via the corresponding VNIC of the virtual machines), accepting frames of data (i.e., typically Ethernet frames) from the VNICs, and forwarding the frames to other VNICs connected to other ports of the virtual switch, or, alternatively, to a PNIC of computer host 100 .
  • virtual switch 135 is a software emulation of a physical switch operating at the data-link layer.
  • VNIC 125 1 and VNIC 125 N (which correspond to VMMs 131 1 and 131 N , respectively) connect to virtual switch 135 .
  • virtual switch 135 connects to PNIC driver 138 .
  • PNIC driver 138 is a device driver for a physical network adapter connected to computer host 100 .
  • PNIC driver 138 receives data from virtual switch 135 and transmits the received data over the network via a PNIC for which PNIC driver 138 serves as device driver.
  • PNIC driver 138 also handles incoming data from the PNIC and, among other things, forwards the received data to virtual machines via virtual switch 135 .
  • FIG. 1 also depicts hardware platform 140 , which is another component of computer host 100 .
  • Hardware platform 140 includes all physical devices, channels, and adapters of computer host 100 .
  • Hardware platform 140 includes network adapters (i.e., PNICs), for network communication, as well as host bus adapters (HBAs) (not shown), which enable communication to external storage devices.
  • hardware platform 140 includes the physical central processing units (CPUs) of computer host 100 .
  • Hardware platform 140 also includes a random access memory (RAM) 141 , which, among other things, stores programs currently in execution, as well as data required for such programs. Moreover, RAM 141 stores the various data structures needed to support network data communication. For instance, the various data components that comprise virtual switch 135 (i.e., virtual ports, routing tables, and the like) are stored in RAM 141 .
  • PNIC 142 is a computer hardware component that enables computer host 100 to connect to a computer network.
  • PNIC 142 implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard, such as Ethernet, Wi-Fi, or Token Ring.
  • PNIC 142 (which is driven by PNIC driver 138 ) may use one or more techniques to indicate the availability of packets to transfer. For example, PNIC 142 may operate in a polling mode, where a CPU executes a program to examine the status of the PNIC. On the other hand, when PNIC 142 operates in an interrupt-driven mode, the PNIC alerts the CPU (via a generated interrupt) that it is ready to transfer data.
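As a rough illustration of the difference between the two modes, the sketch below contrasts a CPU-driven poll loop with an interrupt handler. The structure and helper names (struct pnic, drain_packets, pnic_irq_handler) are assumptions made for this example and do not correspond to any particular driver.

```c
#include <stdbool.h>

struct pnic {
    volatile bool rx_ready;   /* set by the device when packets are pending */
};

/* Stand-in for moving pending packets off the device. */
static void drain_packets(struct pnic *nic) { nic->rx_ready = false; }

/* Polling mode: a CPU repeatedly examines device status. */
void pnic_poll_loop(struct pnic *nic)
{
    for (;;) {
        if (nic->rx_ready)
            drain_packets(nic);
        /* otherwise keep spinning (or yield/sleep between polls) */
    }
}

/* Interrupt-driven mode: the device raises an interrupt and this handler
 * runs only when there is work to do. */
void pnic_irq_handler(struct pnic *nic)
{
    drain_packets(nic);
}
```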
  • PNIC 142 is typically configured with one or more data queues.
  • the PNIC is configured with a single transmit queue (for transmitting outbound packets to the network) and a single receive queue (for receiving inbound packets from the network).
  • PNIC 142 may be a multi-queue PNIC.
  • a multi-queue PNIC has more than one transmit queue and more than one receive queue , where each transmit or receive queue can be allocated to a specific use.
  • a multi-queue PNIC 142 may be configured with two sets of transmit/receive queues.
  • a first transmit and a first receive queue may be connected to (i.e., driven by) a PNIC driver 138 connected to a first virtual switch, while a second transmit and a second receive queue is connected to a PNIC driver 138 connected to a second virtual switch.
  • VM management server 150 is, in embodiments, a server application executing either within computer host 100 , or (as shown in FIG. 1 ) remotely from computer host 100 .
  • Embodiments of VM management server 150 provide an interface (such as a graphical user interface (or GUI)) through which a system administrator may define, configure, and deploy virtual machines for execution on one or more host computers.
  • VM management server 150 provides for the configuration of virtual machines as highly latency sensitive virtual machines.
  • VM management server 150 maintains a latency sensitivity table 155 , which defines latency sensitivity characteristics of virtual machines. Latency sensitivity table 155 is described in further detail below.
  • VM management server 150 communicates with computer host 100 , either through a direct local connection or over a computer network.
  • VM management agent 134 executes on computer host 100 .
  • Although VM management agent 134 is not part of kernel 136 , embodiments of the agent run at the hypervisor level within hypervisor 130 . However, in other embodiments, VM management agent 134 may run as a user program within execution space 120 .
  • VM management agent 134 receives instructions from VM management server 150 and carries out tasks on behalf of VM management server 150 .
  • the tasks performed by VM management agent 134 are the configuration and instantiation of virtual machines.
  • One aspect of the configuration of a virtual machine is whether that virtual machine is highly latency sensitive.
  • VM management agent 134 receives a copy of latency sensitivity table 155 and saves the underlying data within RAM 141 as latency sensitivity data 143 .
  • software modules associated with the transmission of data packets to and from virtual machines access that information in order to determine which virtual machines are highly latency sensitive.
  • networking software residing in either the VNIC or in the kernel regulates the transmission of data packets in support of virtual machines that are latency sensitive.
  • FIG. 2 is a block diagram that depicts one embodiment of latency sensitivity table 155 .
  • latency sensitivity table 155 stores multiple rows of data, where each row corresponds to a particular virtual machine within host 100 . Each virtual machine is identified on the host by a unique VM ID 210 .
  • a VM ID 210 may be any unique binary or alphanumeric value that is associated with a virtual machine.
  • latency sensitivity table 155 has a plurality of entries, each of which corresponds to a virtual machine VM 110 depicted in FIG. 1 .
  • latency sensitivity table 155 stores a latency sensitivity indicator.
  • This indicator may take on two distinct values (such as Y or N), which indicates whether the corresponding virtual machine is highly latency sensitive. In other embodiments, the latency sensitive indicator may take on more than two values (e.g., High, Medium, Low, or Normal), to provide for specifying different degrees of latency sensitivity for the corresponding virtual machine.
  • VM ID 210 1 (corresponding to VM 110 1 ) identifies a virtual machine that is not highly latency sensitive because its latency sensitivity indicator is set to N.
  • VM ID 210 2 (which corresponds to VM 110 2 ) identifies a virtual machine that is highly latency sensitive because its corresponding latency sensitivity indicator is set to Y.
  • VM 110 1 might be a virtual machine that runs a batch processing application (such as a monthly billing system), which typically does not require split-second response time and is generally unaffected by the jitter that may occur in a virtualized computing environment.
  • VM 110 2 may be a real-time financial trading application, which is a representative latency sensitive application.
  • a VM that is defined with a latency sensitivity indicator of Y is treated by the networking software as highly latency sensitive. That is, the networking software in the VNIC and kernel is configured to determine which virtual machines are highly latency sensitive (based on the aforementioned criteria), and to transmit and receive data packets for those virtual machines in such a way as to minimize any transmission delay for the packets.
  • the data packets transmitted and received by VM 110 2 are subjected to a minimal amount of delay (i.e., latency).
  • the data packets transmitted and received by VM 110 1 (which is not latency sensitive) are not transmitted in a way so as to minimize any delay in the delivery of the packets. Rather, the data packets of VM 110 1 are handled so as to improve the overall efficiency of execution of all virtual machines on computer host 100 , which may nonetheless result in delays in packet delivery for the VM.
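A compact way to picture latency sensitivity data 143 is as an in-memory table keyed by VM ID that the networking software consults before making a transmit or receive decision. The sketch below is an illustration built on assumptions; the struct layout, enum values, and function name are invented for this example and are not the actual data structures used by the patent or by any hypervisor.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

enum latency_sensitivity { LS_NORMAL, LS_LOW, LS_MEDIUM, LS_HIGH };

struct latency_entry {
    char vm_id[32];                   /* unique VM ID 210 */
    enum latency_sensitivity level;   /* indicator: Y/N or High..Normal */
};

/* In-memory copy of latency sensitivity table 155 (latency sensitivity data 143). */
static const struct latency_entry latency_table[] = {
    { "VM-110-1", LS_NORMAL },   /* e.g., a batch billing workload: indicator N */
    { "VM-110-2", LS_HIGH   },   /* e.g., a trading workload: indicator Y */
};

/* Returns true if the named VM should be treated as highly latency sensitive. */
bool vm_is_highly_latency_sensitive(const char *vm_id)
{
    for (size_t i = 0; i < sizeof latency_table / sizeof latency_table[0]; i++) {
        if (strcmp(latency_table[i].vm_id, vm_id) == 0)
            return latency_table[i].level == LS_HIGH;
    }
    return false;   /* unknown VMs default to not latency sensitive */
}
```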
  • FIG. 3 is a conceptual diagram that illustrates the disabling of data packet queuing in a VNIC of a highly latency sensitive virtual machine, according to one or more embodiments.
  • Data packet queuing is also referred to as interrupt coalescing.
  • a VNIC that performs packet queuing does not immediately transmit an interrupt upon receiving a data packet, whether from the corresponding guest virtual machine or from the hypervisor. Rather, the VNIC delays the posting of the interrupt until several packets have been received and queued therein.
  • the packets may be viewed as being queued within the VNIC itself in either a transmit queue or a receive queue.
  • the transmit queue for the VNIC queues packets that are transmitted by a process executing in the guest virtual machine that corresponds to the VNIC, and that are destined for another virtual machine executing on the same host, or, alternatively, to a network destination that is external to the host.
  • the receive queue queues packets that are transmitted by a process executing external to the virtual machine that corresponds to the VNIC, and that are destined for that virtual machine. It should be noted that packet queuing may occur, in embodiments, in the guest operating system (for packets to be transmitted from the virtual machine) and in the kernel (for packets to be received by the virtual machine).
  • Packet queuing reduces the interrupt rate at which the VNIC operates. That is, with packet queuing, the VNIC transmits fewer interrupts to the kernel for packets that are to be transmitted from the virtual machine.
  • Such an interrupt comprises, in one or more embodiments, a kernel call that informs the kernel that the VNIC has a certain number of data packets that are ready to be transmitted.
  • the VNIC transmits fewer interrupts to the guest virtual machine for packets that are to be received by the virtual machine.
  • Such an interrupt comprises, in one or more embodiments, a software interrupt that the VNIC posts to an interrupt handler that executes in the guest virtual machine, where the software interrupt informs the interrupt handler that one or more packets have been received at the VNIC.
  • The smaller number of interrupts generated by the VNIC when the VNIC queues data packets results in fewer context switches by the kernel and by the guest operating system.
  • packet queuing can add jitter, and in some cases, may have a noticeable impact on average latency, especially with input/output (I/O) bound applications.
  • VM 110 1 is a virtual machine that is not highly latency sensitive, while VM 110 2 is highly latency sensitive. This is the case based on the corresponding entries for these virtual machines in latency sensitivity table 155 , depicted in FIG. 2 .
  • VNIC 125 1 (i.e., the VNIC that corresponds to VM 110 1 ) queues data packets in queues 301 and 302 , which are depicted as residing within VNIC 125 1 .
  • queue 301 stores packets that are transmitted from VM 110 1 .
  • These packets are queued (or, equivalently, interrupts are coalesced) in VNIC 125 1 until queue 301 becomes full or, alternatively, until a timer (not shown) associated with queue 301 expires.
  • When VNIC 125 1 determines that the packets stored in queue 301 are to be transmitted, VNIC 125 1 generates a software interrupt (or, in embodiments, makes a kernel call) to kernel 136 to inform the kernel that the VNIC has a certain number of packets that are ready to be transmitted.
  • queue 302 stores packets that are transmitted to VNIC 125 1 via kernel 136 .
  • A transmitter, such as another virtual machine or an application external to computer host 100 , transmits data packets for delivery to VM 110 1 .
  • the packets are routed to computer host 100 , after which they are forwarded, by software executing in kernel 136 , to VNIC 125 1 .
  • VNIC 125 1 then queues the packets in queue 302 .
  • VNIC 125 1 then generates a software interrupt that is received by an interrupt handler executing under control of the guest operating system in VM 110 1 .
  • VNIC 125 1 generates the interrupt when, for example, the number of packets in queue 302 exceeds a threshold value or when the amount of time that packets are queued in queue 302 exceeds a threshold amount of time. It should be noted that, in the embodiment illustrated in FIG. 3 , queue 302 resides within VNIC 125 1 . However, in other embodiments, packets may be queued within one or more data buffers in kernel 136 .
  • In such embodiments, when kernel 136 determines that the number of queued packets exceeds some threshold, or that the packets have been queued for an amount of time that exceeds a threshold time, kernel 136 then posts a software interrupt to VNIC 125 1 indicating that kernel 136 has a certain number of packets that are ready to be transmitted to VNIC 125 1 .
  • VM 110 2 is a highly latency sensitive virtual machine (based on the corresponding entry for VM 110 2 in latency sensitivity table 155 , depicted in FIG. 2 ).
  • the networking software in kernel 136 and VNIC 125 2 determines that VNIC 125 2 is associated with a highly latency sensitive virtual machine (i.e., VM 110 2 ) and dynamically disables packet queuing for VNIC 125 2 .
  • packets that arrive at VNIC 125 2 from VM 110 2 are not queued at VNIC 125 2 . Instead, as packets arrive from VM 110 2 , they are immediately transmitted to kernel 136 for delivery, either to another virtual machine or to an external network destination.
  • VNIC 125 2 posts an interrupt (or makes a kernel call) to kernel 136 for each packet that arrives from VM 110 2 .
  • the interrupt or kernel call indicates that VNIC 125 2 has a packet that is ready for transmission.
  • VNIC 125 2 posts a software interrupt to an interrupt handler executing in VM 110 2 to indicate that VNIC 125 2 has a packet that is ready to be transmitted to the virtual machine.
  • Likewise, when a packet destined for VM 110 2 arrives at kernel 136 , kernel 136 posts an interrupt to VNIC 125 2 , which then immediately receives and forwards the packet to VM 110 2 .
  • the interrupt rate for VNIC 125 2 is higher than that of VNIC 125 1 , which generally results in lower network latency for VNIC 125 2 as compared to VNIC 125 1 .
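The behavior contrasted in FIG. 3 amounts to a per-VNIC switch between coalescing and per-packet interrupts. The C sketch below illustrates the transmit-side decision only; the queue depth, the timer helper, and post_kernel_interrupt() are hypothetical placeholders for the kernel call described above, not actual hypervisor code.

```c
#include <stdbool.h>

#define TX_QUEUE_DEPTH 32   /* assumed coalescing depth, not from the patent */

struct vnic {
    bool vm_latency_sensitive;   /* looked up in latency sensitivity data 143 */
    int  queued;                 /* packets coalesced so far */
};

/* Hypothetical helpers standing in for VNIC/kernel internals. */
bool queue_timer_expired(const struct vnic *v);   /* coalescing timer */
void post_kernel_interrupt(struct vnic *v);       /* "packets ready" kernel call */

/* Transmit-side handling of one packet arriving from the guest. */
void vnic_transmit(struct vnic *v)
{
    if (v->vm_latency_sensitive) {
        /* Queuing disabled: one interrupt (kernel call) per packet. */
        post_kernel_interrupt(v);
        return;
    }

    /* Queuing enabled: coalesce until the queue fills or the timer expires. */
    v->queued++;
    if (v->queued >= TX_QUEUE_DEPTH || queue_timer_expired(v)) {
        post_kernel_interrupt(v);   /* one interrupt covers all queued packets */
        v->queued = 0;
    }
}
```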
  • FIG. 4 is a conceptual diagram that illustrates the disabling of LRO in a VNIC of a highly latency sensitive virtual machine, according to embodiments.
  • LRO is a technique by which multiple incoming packets to a network interface (e.g., a physical NIC or a VNIC) are consolidated into a larger packet before being passed to higher layers of the networking stack. This has the effect of reducing the number of packets that require processing at the receiving end of a transmission.
  • LRO is typically performed at the transport layer (i.e., at the TCP layer in a TCP/IP-based network). That is, LRO entails the aggregation of smaller TCP packets into larger TCP packets before they are transmitted up the network stack. Since the receipt of a TCP packet gives rise to an acknowledgement by the recipient, the use of LRO entails fewer TCP acknowledgements than a scheme that does not use LRO.
  • VM 110 1 is not a highly latency sensitive virtual machine, while VM 110 2 is a highly latency sensitive virtual machine.
  • The latency sensitivity of each of the VMs in FIG. 4 is determined based upon the entries in latency sensitivity table 155 , depicted in FIG. 2 . Since VM 110 1 is not highly latency sensitive, in the embodiment shown, VNIC 125 1 (which corresponds to VM 110 1 ) performs LRO for received TCP packets.
  • TCP packet 401 1 is currently being transmitted from VNIC 125 1 to VM 110 1 . More specifically, TCP layer software in VNIC 125 1 communicates TCP packet 401 1 to transport layer software executing under control of the guest operating system of VM 110 1 . Further, TCP packet 401 2 is currently being assembled from received (smaller) TCP packets. Thus, when TCP packet 401 2 is fully formed, VNIC 125 1 will initiate transmission of this packet as well.
  • Because TCP is a reliable data delivery service, a TCP sender relies upon acknowledgements to determine whether a given TCP packet should be retransmitted.
  • acknowledgment 402 1 is sent from transport layer software in VM 110 1 to VNIC 125 1 , and on to kernel 136 .
  • kernel 136 transmits this acknowledgment toward the original sender of the packets consolidated in TCP packet 401 1 .
  • Once the acknowledgement is received, the original sender of the packets consolidated in TCP packet 401 1 initiates a next packet transmission.
  • the frequency of the acknowledgments 402 1 from VM 110 1 is less than the frequency of packet transmission to VNIC 125 1 . This is due to the consolidation of smaller TCP packets into larger TCP packets 401 at VNIC 125 1 .
  • VM 110 2 is a highly latency sensitive virtual machine (based upon the entry corresponding to VM 110 2 in latency sensitivity table 155 , depicted in FIG. 2 ).
  • the transport layer software of VNIC 125 2 (which corresponds to VM 110 2 ) determines that VM 110 2 is highly latency sensitive and, based on this, disables LRO processing in the VNIC.
  • TCP packets that arrive at VNIC 125 2 from kernel 136 are not consolidated by the VNIC into larger TCP packets. Rather, the received TCP packets are immediately forwarded to VM 110 2 , where transport layer software executing therein processes the packets.
  • the transport layer software of VM 110 2 sends acknowledgements 402 2 to VNIC 125 2 , and on to kernel 136 .
  • acknowledgements 402 2 are forwarded by kernel 136 to the original sender of the TCP packets.
  • the frequency of acknowledgements 402 2 is greater than that of acknowledgements 402 1 because the TCP packets received by VNIC 125 2 are not consolidated. Therefore, the original sender of TCP packets to VNIC 125 2 will receive a greater number of acknowledgements on a more frequent basis. This tends to reduce network latency, as more frequent acknowledgements give rise to more frequent transmission and, therefore, lower transmission delay.
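The effect of disabling LRO can be pictured as a switch on the VNIC receive path: either forward each TCP segment as it arrives, or keep merging segments into a larger packet before delivery. The sketch below is a simplified illustration under that assumption; real LRO implementations also check TCP sequence numbers, flags, and checksums before merging, which is omitted here, and the helper names are invented.

```c
#include <stdbool.h>
#include <stddef.h>

#define LRO_MAX_BYTES 65535   /* assumed upper bound on the merged packet */

struct tcp_segment { size_t len; };   /* payload length only; headers omitted */

struct lro_ctx {
    bool   lro_disabled;     /* true when the VM is highly latency sensitive */
    size_t aggregated_len;   /* bytes merged into the pending large packet */
};

/* Hypothetical helper: hands the pending packet up to the guest. */
void deliver_to_guest(struct lro_ctx *ctx);

void vnic_receive_segment(struct lro_ctx *ctx, const struct tcp_segment *seg)
{
    if (ctx->lro_disabled) {
        /* LRO off: forward each TCP segment immediately, which produces more
         * frequent acknowledgements and lower latency for the sender. */
        ctx->aggregated_len = seg->len;
        deliver_to_guest(ctx);
        ctx->aggregated_len = 0;
        return;
    }

    /* LRO on: keep merging small segments into one larger TCP packet. */
    if (ctx->aggregated_len + seg->len > LRO_MAX_BYTES) {
        deliver_to_guest(ctx);    /* flush the full packet first */
        ctx->aggregated_len = 0;
    }
    ctx->aggregated_len += seg->len;
}
```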
  • FIG. 5 is a conceptual diagram illustrating the adjustment of the interrupt rate in a multi-queue PNIC to accommodate a highly latency sensitive virtual machine, according to one or more embodiments.
  • PNICs are typically configured with one or more transmit and receive queues. When data packets are received at a PNIC, whether from the network that the PNIC connects to or from the operating system that manages the PNIC, the packets are placed into either the transmit queue (for outbound packets) or the receive queue (for inbound packets).
  • the PNIC is configured with a certain interrupt rate, whereby the PNIC generates interrupts to the host when it has packets that are ready to be received from the receive queue or transmitted from the transmit queue.
  • PNICs may queue data packets and have the packets transmitted (or received) once the queue length exceeds a threshold. At such time, an interrupt is generated and the packets are transmitted or received by the host, depending on the queue that the packets reside in.
  • Multi-queue PNICs are conceptually similar to single queue PNICs. Multi-queue PNICs have more than one transmit queue and more than one receive queue. This is advantageous because it increases the throughput of the PNIC, especially on multiprocessor computer hosts. Further, each transmit or receive queue may be dedicated to a single processor, thus dividing packet processing among processors and freeing certain other processors from the task of processing packets. Further, each transmit or receive queue in a multi-queue PNIC may be assigned to one or more VNICs. That is, multi-queue PNICs are often equipped with a routing module to direct packets destined for certain virtual machines into receive queues that correspond to the VNICs of those virtual machines. In similar fashion, the kernel directs network packets transmitted by certain virtual machines to transmit queues of the PNIC that correspond to those virtual machines.
  • The interrupt rate for a multi-queue PNIC is configurable on a per-queue level. That is, each transmit or receive queue may be configured with its own interrupt rate.
  • This scenario is illustrated in FIG. 5 , where VM 110 1 is not a highly latency sensitive virtual machine, while VM 110 2 is a highly latency sensitive virtual machine.
  • Networking software in kernel 136 determines the latency sensitivity of each virtual machine based on corresponding entries for the virtual machines in latency sensitivity table 155 , as depicted in FIG. 2 .
  • PNIC 142 is a multi-queue PNIC with two transmit/receive queues: queue 501 1 and queue 501 2 .
  • each of queues 501 1 and 501 2 is configured to transmit and receive data packets.
  • queue 501 1 has been allocated to transmit and receive data packets for VM 110 1 .
  • the interrupt rate for queue 501 1 is not increased. Therefore, as shown in the figure, packets are accumulated in queue 501 1 until an interrupt is generated.
  • an interrupt is generated for queue 501 1 when the number of packets stored in the queue exceeds a threshold value, or when the packets have been stored in the queue beyond a threshold amount of time.
  • kernel 136 determines that queue 501 2 , which is allocated to VM 110 2 , is allocated to a highly latency sensitive virtual machine. Therefore, in the embodiment depicted, kernel 136 increases the interrupt rate for queue 501 2 . This has the effect of suppressing the queuing of data packets in the queue. Thus, when a data packet is placed in the transmit queue of queue 501 2 , an interrupt is immediately generated, which causes the packet to be transmitted without any further delay (i.e., without waiting for other packets to be placed in the transmit queue of queue 501 2 ).
  • When a packet that is destined for VM 110 2 arrives at PNIC 142 , the packet is routed to the receive queue of queue 501 2 , whereupon an interrupt is immediately generated, which causes kernel 136 to transmit the received packet to VM 110 2 without waiting for additional packets to be placed in the receive queue of 501 2 .
  • network latency for VM 110 2 is reduced as compared with the network latency experienced by VM 110 1 .
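Conceptually, raising the interrupt rate for a queue means lowering its coalescing thresholds. The sketch below shows one way such a per-queue setting might be expressed; the structure, field names, the default values, and pnic_apply_moderation() are hypothetical, since real devices expose interrupt moderation through device-specific registers or driver coalescing parameters.

```c
#include <stdbool.h>

struct pnic_queue {
    int id;
    int pkts_per_interrupt;    /* packets allowed to accumulate per interrupt */
    int usecs_per_interrupt;   /* time packets may sit before an interrupt */
};

/* Hypothetical helper: programs the device with the queue's settings. */
void pnic_apply_moderation(struct pnic_queue *q);

/* Raise the interrupt rate (i.e., lower the coalescing thresholds) for a
 * queue allocated to a highly latency sensitive VM. */
void configure_queue_for_vm(struct pnic_queue *q, bool vm_latency_sensitive)
{
    if (vm_latency_sensitive) {
        q->pkts_per_interrupt  = 1;    /* interrupt for every packet */
        q->usecs_per_interrupt = 0;    /* no time-based batching */
    } else {
        q->pkts_per_interrupt  = 64;   /* default batching for throughput */
        q->usecs_per_interrupt = 50;
    }
    pnic_apply_moderation(q);
}
```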
  • FIG. 6 is a flow diagram that depicts an embodiment of a method 600 for transmitting data packets by a VNIC, where the mode of packet transmission is based on the latency sensitivity of the virtual machine to which the VNIC corresponds.
  • method 600 is carried out by software that executes as part of the VNIC.
  • Method 600 begins at step 610 , where the VNIC receives a data packet.
  • In some cases, the data packet is received from a transmitting application executing under control of the guest operating system of the virtual machine to which the VNIC corresponds.
  • In other cases, the data packet is received from the kernel, in which case the packet is to be delivered to an application executing in the virtual machine.
  • At step 620 , software that executes as part of the VNIC determines whether the virtual machine to which the VNIC corresponds (which is either the source or destination of the packet) is highly latency sensitive.
  • The VNIC determines the latency sensitivity of the virtual machine by inspecting a memory-based data structure, such as latency sensitivity data 143 , which itself is based on latency sensitivity table 155 . According to these embodiments, if an entry for the virtual machine in latency sensitivity data 143 stores a latency sensitivity indicator that is set to Y (or some other value that indicates that the virtual machine is latency sensitive), then the VNIC determines that the corresponding virtual machine is highly latency sensitive. If, however, the latency sensitive indicator is not set to Y, then the VNIC determines that the virtual machine is not highly latency sensitive.
  • If, at step 620 , it is determined that the virtual machine is not highly latency sensitive, then method 600 proceeds to step 650 , where the received packet is queued with other packets received by the VNIC, as described below. If, however, it is determined that the virtual machine is highly latency sensitive, then method 600 proceeds to step 630 .
  • At step 630 , the VNIC determines the rate at which packets are currently being transmitted and/or received by the VNIC. According to embodiments, when the packet rate is high, queuing of data packets is allowed to take place, even for highly latency sensitive virtual machines. The reason is that virtual machines that have high packet rates do not generally suffer when packets are delayed by queuing. For these virtual machines, the system-wide benefits of queuing (i.e., fewer context switches due to a decreased interrupt rate) outweigh the extra packet delay that packet queuing causes.
  • Thus, if the VNIC packet rate is determined to be high, method 600 proceeds to step 650 , where the received packet is queued with other packets received by the VNIC. If the VNIC packet rate is determined to be low (i.e., if, over a predetermined time period, a small number of packets are transmitted to the VNIC), then method 600 proceeds to step 640 .
  • At step 640 , the VNIC determines the CPU utilization of the corresponding virtual machine (i.e., the utilization of the virtual CPUs of the virtual machine).
  • When CPU utilization is low, the virtual machine is less likely to be compute-bound. That is, the virtual machine is less likely to be executing intensive computations (e.g., calculating prices of financial instruments in a high-speed trading system). Rather, the virtual machine is more likely to be I/O-bound. In other words, the virtual machine is most likely waiting for I/O operations to complete before engaging in computation. In such a scenario, it is important for the virtual machine to experience as little packet delay as possible.
  • For a compute-bound virtual machine, by contrast, packet delay is relatively unimportant in comparison to any delays in CPU processing, even for virtual machines that are determined to be highly latency sensitive.
  • At step 640 , if the VNIC determines that the corresponding virtual machine has low CPU utilization (i.e., that the virtual machine is not compute-bound), then method 600 proceeds to step 660 . Otherwise, if the VNIC determines that the virtual machine does not have low CPU utilization (i.e., that the virtual machine is in fact compute-bound), then method 600 proceeds to step 650 , where the received data packet is queued with other received data packets.
  • At step 660 , the VNIC immediately transmits the received data packet, thus minimizing packet delay (and eliminating any delay caused by packet queuing).
  • For example, VNIC 125 2 (which corresponds to VM 110 2 ) does not queue any data packets therein.
  • the interrupt rate for the VNIC is higher than it would be if packets had been queued at the VNIC.
  • Step 650 is executed when the data packet is received for a virtual machine that is not highly latency sensitive, or when the virtual machine is highly latency sensitive, but has a high packet rate or high CPU utilization.
  • the received data packet is queued with other data packets already received at the VNIC for later transmission.
  • data packets in the VNIC are queued in a transmit queue (for packets outbound from the corresponding virtual machine) or in a receive queue (for inbound packets).
  • the queuing of data packets is illustrated by VNIC 125 1 (which corresponds to non-highly latency sensitive virtual machine VM 110 1 ), depicted in FIG. 3 .
  • At step 670 , the VNIC determines whether a queuing threshold has been exceeded. For example, the VNIC may determine that either or both transmit and receive queues therein are full, or that the number of packets stored in the queues exceeds a predetermined value. In other embodiments, the VNIC determines that the packets have been stored in the queues for greater than some predetermined amount of time.
  • If, at step 670 , the VNIC determines that the queuing threshold has not been exceeded, then method 600 proceeds directly to step 690 . However, if the VNIC determines that the queuing threshold has been exceeded, then method 600 proceeds to step 680 .
  • At step 680 , the queued packets are transmitted by the VNIC. For example, if the queued data packets are to be received by an application executing in the virtual machine, then the VNIC posts a software interrupt to the virtual machine, indicating that the packets are ready to be received by the virtual machine.
  • Alternatively, if the queued data packets are outbound from the virtual machine, the VNIC posts a software interrupt to the hypervisor (or, in some embodiments, the VNIC makes a kernel call to the hypervisor), indicating that the data packets are ready to be transmitted.
  • At step 690 , the VNIC determines whether more data packets should be received.
  • In some embodiments, the VNIC polls the virtual machine or the hypervisor to determine whether additional packets are available. The polling takes place at a predetermined interval.
  • In other embodiments, the VNIC is enabled to receive a software interrupt from the virtual machine or the hypervisor indicating that additional data packets are ready to be received by the VNIC. If the VNIC determines that more data packets are to be received, then method 600 returns to step 610 to receive the data packet. Method 600 then cycles through the steps described above. If, however, the VNIC determines that there are no more data packets (or, alternatively, that the VNIC has been disabled for receiving data packets), then method 600 terminates.
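Putting the steps of method 600 together, the decision chain can be sketched as follows. The thresholds and helper functions are invented for illustration; the patent does not specify particular packet-rate or CPU-utilization values, only that queuing is bypassed when the VM is highly latency sensitive, its packet rate is low, and it is not compute-bound.

```c
#include <stdbool.h>

#define PKT_RATE_LOW_THRESHOLD 1000   /* packets/sec; assumed value */
#define CPU_UTIL_LOW_THRESHOLD 50     /* percent; assumed value */
#define QUEUE_FLUSH_THRESHOLD  32     /* queued packets before an interrupt */

struct vnic_state {
    bool latency_sensitive;   /* step 620: from latency sensitivity data 143 */
    int  pkt_rate;            /* step 630: recent packets per second */
    int  cpu_util_pct;        /* step 640: virtual CPU utilization */
    int  queued;              /* packets waiting in the VNIC queue */
};

/* Hypothetical helpers standing in for the VNIC's interrupt posting. */
void transmit_now(struct vnic_state *v);          /* step 660 */
void flush_queued_packets(struct vnic_state *v);  /* step 680 */

void vnic_handle_packet(struct vnic_state *v)
{
    /* Steps 620-640: bypass queuing only for a highly latency sensitive VM
     * that has a low packet rate and is not compute-bound. */
    if (v->latency_sensitive &&
        v->pkt_rate < PKT_RATE_LOW_THRESHOLD &&
        v->cpu_util_pct < CPU_UTIL_LOW_THRESHOLD) {
        transmit_now(v);
        return;
    }

    /* Step 650: queue the packet with other pending packets. */
    v->queued++;

    /* Steps 670-680: flush once the queuing threshold is exceeded. */
    if (v->queued >= QUEUE_FLUSH_THRESHOLD) {
        flush_queued_packets(v);
        v->queued = 0;
    }
}
```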
  • Certain embodiments as described above involve a hardware abstraction layer on top of a host computer.
  • the hardware abstraction layer allows multiple containers to share the hardware resource. These containers, isolated from each other, have at least a user application running therein.
  • the hardware abstraction layer thus provides benefits of resource isolation and allocation among the containers.
  • virtual machines are used as an example for the containers and hypervisors as an example for the hardware abstraction layer.
  • each virtual machine includes a guest operating system in which at least one application runs.
  • One example of such containers is OS-less containers (see, e.g., www.docker.com).
  • OS-less containers implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer.
  • the abstraction layer supports multiple OS-less containers, each including an application and its dependencies.
  • Each OS-less container runs as an isolated process in userspace on the host operating system and shares the kernel with other containers.
  • the OS-less container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces and to completely isolate the application's view of the operating environments.
  • By using OS-less containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces.
  • Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
  • one or more embodiments of the disclosure also relate to a device or an apparatus for performing these operations.
  • the apparatus may be specially constructed for specific required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer.
  • various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • One or more embodiments of the present disclosure may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media.
  • the term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer.
  • Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Disc) such as a CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices.
  • The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Debugging And Monitoring (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Stored Programmes (AREA)

Abstract

A host computer has a plurality of containers including a first container executing therein, where the host also includes a physical network interface controller (NIC). A packet handling interrupt is detected upon receipt of a first data packet associated with the first container. If the first container is latency sensitive, then the packet handling interrupt is processed. If the first container is not latency sensitive, then the first data packet is queued and processing of the packet handling interrupt is delayed.

Description

    CROSS REFERENCE TO OTHER APPLICATIONS
  • This application is a Continuation of U.S. patent application Ser. No. 14/468,181, filed Aug. 25, 2014, entitled “Networking Stack Of Virtualization Software Configured To Support Latency Sensitive Virtual Machines,” which claims priority to U.S. Provisional Patent Application No. 61/870,143, entitled “Techniques To Support Highly Latency Sensitive VMs,” filed Aug. 26, 2013, the contents of which are incorporated herein by reference. This application is related to: U.S. patent application Ser. No. 14/468,121, entitled “CPU Scheduler Configured to Support Latency Sensitive Virtual Machines” (Attorney Docket No. B487.01), filed Aug. 25, 2014, now U.S. Pat. No. 9,262,198, issued Feb. 16, 2016; U.S. patent application Ser. No. 14/468,122, entitled “Virtual Machine Monitor Configured to Support Latency Sensitive Virtual Machines” (Attorney Docket No. B487.02), filed Aug. 25, 2014, now U.S. Pat. No. 9,317,318, issued Apr. 19, 2016; and U.S. patent application Ser. No. 14/468,138, entitled “Pass-through Network Interface Controller Configured to Support Latency Sensitive Virtual Machines” (Attorney Docket No. B487.04), filed Aug. 25, 2014, now U.S. Pat. No. 9,552,216, issued Jan. 24, 2017, the entire contents of which are incorporated herein by reference.
  • BACKGROUND
  • Applications characterized as “latency sensitive” are, typically, highly susceptible to execution delays and jitter (i.e., unpredictability) introduced by the computing environment in which these applications run. Examples of latency sensitive applications include financial trading systems, which usually require split-second response time when performing functions such as pricing securities or executing and settling trades.
  • Execution delay and jitter are often present in networked virtualized computing environments. Such computing environments frequently include a number of virtual machines (VMs) that execute one or more applications that rely on network communications. These virtualized applications communicate over the network by transmitting data packets to other nodes on the network using a virtual network interface controller (or VNIC) of the VM, which is a software emulation of a physical network interface controller (or PNIC). The use of a VNIC for network communication results in latency and jitter for a number of reasons.
  • First, VNIC-based communication requires transmitted and received packets to be processed by layers of networking software not required for packets that are directly transmitted and received over a PNIC. For example, data packets that are transmitted by a virtualized application are often transmitted first to a VNIC. Then, from the VNIC, the packets are passed to software modules executing in a hypervisor. Once the packets are processed by the hypervisor, they are then transmitted from the hypervisor to the PNIC of the host computer for subsequent delivery over the network. A similar, although reverse, flow is employed for data packets that are to be received by the virtualized application. Each step in the flow entails processing of the data packets and, therefore, introduces latency.
  • Further, VNICs are often configured to queue (or coalesce interrupts corresponding to) data packets before passing the packets to the hypervisor. While packet queueing minimizes the number of kernel calls to the hypervisor to transmit the packets, latency sensitive virtualized applications that require almost instantaneous packet transmission (such as, for example, telecommunications applications) suffer from having packets queued at a VNIC.
  • VNICs are also configured to consolidate inbound data packets using a scheme known as large receive offload (or LRO). Using LRO, smaller Transmission Control Protocol (TCP) packets that are received at a VNIC are consolidated into larger TCP packets before being sent from the VNIC to the virtualized application. This results in fewer TCP acknowledgments being sent from the virtualized application to the transmitter of the TCP packets. Because the transmitter paces further transmissions on those acknowledgments, TCP packets can experience transmission delay.
  • Finally, a PNIC for a host computer may be configured to queue data packets that it receives. As is the case with the queuing of data packets at a VNIC, queuing data packets at a PNIC often introduces unacceptable delays for latency sensitive virtualized applications.
  • SUMMARY
  • A method of transmitting and receiving data packets to and from a container executing in a host computer is provided, the host computer having a plurality of containers executing therein, and where the host computer connects to a network through a physical NIC. The method comprises the steps of detecting a packet handling interrupt upon receiving a first data packet that is associated with the container, and determining whether the container is latency sensitive. The method further comprises the step of processing the packet handling interrupt if the container is latency sensitive. The method further comprises, if the container is not latency sensitive, then queueing the first data packet and delaying processing of the packet handling interrupt.
  • Further embodiments provide a non-transitory computer-readable medium that includes instructions that, when executed, enable a host computer to implement one or more aspects of the above method, as well as a computing system that includes a host computer, a physical NIC, and a virtual NIC that is configured to implement one or more aspects of the above method.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a conceptual diagram depicting a virtualized computing environment in which one or more embodiments may be implemented.
  • FIG. 2 is a block diagram that depicts a table for storing latency sensitivity information, according to embodiments.
  • FIG. 3 is a conceptual diagram that illustrates disabling of data packet queuing in a VNIC, according to embodiments.
  • FIG. 4 is a conceptual diagram that illustrates disabling of LRO in a VNIC, according to embodiments.
  • FIG. 5 is a conceptual diagram that depicts changing the interrupt rate of a multi-queue PNIC, according to embodiments.
  • FIG. 6 is a flow diagram that illustrates a method for transmitting data packets by a VNIC for a latency sensitive virtual machine, according to embodiments.
  • DETAILED DESCRIPTION
  • FIG. 1 depicts a virtualized computing environment in which one or more embodiments may be implemented. As shown, the computing environment includes a host computer 100 and a virtual machine (VM) management server 150. VM management server 150 communicates with host computer 100 over a local connection or, alternatively, over a remote network connection.
  • Host computer 100 is, in embodiments, a general-purpose computer that supports the execution of an operating system and one or more application programs therein. In order to execute the various components that comprise a virtualized computing platform, host computer 100 is typically a server class computer. However, host computer 100 may also be a desktop or laptop computer.
  • As shown in FIG. 1, host computer 100 is logically divided into three components. First, execution space 120 supports the execution of user-level (i.e., non-kernel level) programs. User-level programs are non-privileged, meaning that they cannot perform certain privileged functions, such as executing privileged instructions or accessing certain protected regions of system memory. Among the programs that execution space 120 supports are virtual machines.
  • Virtual machines are software implementations of physical computing devices and execute programs much like a physical computer. In embodiments, a virtual machine implements, in software, a computing platform that supports the execution of software applications under the control of a guest operating system (OS). As such, virtual machines typically emulate a particular computing architecture. In FIG. 1, execution space 120 includes VMs 110 1-110 N. Each VM 110 shown supports the execution of one or more applications 111, each of which executes under the control of a particular guest OS 112. Applications 111 are user-level (non-kernel) programs, such as, for example, word processors or spreadsheet programs. Each of the depicted guest OS' 112 may be one of the well-known commodity operating systems, such as any of the versions of the Windows® operating system from Microsoft Corp., the Linux® operating system, or MacOS® X from Apple, Inc. It should be noted that the applications and guest OS' may vary from one VM to another. Thus, applications 111 1 in VM 110 1 may include Microsoft's Word® and Excel® applications running under the control of Windows® 7 as guest OS 112 1. By contrast, applications 111 N in VM 110 N may include the Safari® web browser running under the control of MacOS® X as guest OS 112 N. As shown in FIG. 1, each of VMs 110 1-110 N communicates with a hypervisor component, referred to herein as hypervisor 130.
  • Hypervisor 130, as depicted in FIG. 1, provides the operating system platform for running processes on computer host 100. Hypervisor 130 controls all hardware devices within computer host 100 and manages system resources for all applications running therein. Among the core functions that hypervisor 130 provides are console services, file system services, device drivers, resource scheduling, and network data transmission. Further, hypervisor 130 implements software components that provide for the instantiation of one or more virtual machines on the host computer.
  • As depicted in the embodiment of FIG. 1, hypervisor 130 includes virtual machine monitors (VMMs) 131 1-131 N. Each VMM 131 corresponds to an executing VM 110. Thus, VMM 131 1 corresponds to VM 110 1, VMM 131 2 corresponds to VM 110 2, and so on. Each VMM 131 is a software layer that provides a virtual hardware platform to the guest OS for the corresponding virtual machine. It is through a particular VMM 131 that a corresponding VM accesses services provided by the kernel component of hypervisor 130 (shown in FIG. 1 as kernel 136). Among the functions carried out by kernel 136 are memory management, providing networking and storage stacks, and process scheduling.
  • Each VMM 131 in FIG. 1 implements a virtual hardware platform for the corresponding VM 110. Among the components of the implemented virtual hardware platform are one or more VNICs 125. Thus, VMM 131 1 implements VNIC 125 1, VMM 131 2 implements VNIC 125 2, and so on. Each VNIC 125 appears to be a physical network adapter (i.e., a physical network interface controller, or PNIC) from the standpoint of the applications 111 and the guest OS 112 that run in the corresponding VM 110. In this way, a virtualized guest operating system that runs within a virtual machine may transmit and receive data packets in the same way that an operating system that runs directly on a computer host (i.e., in a non-virtualized manner) transmits and receives data packets using PNICs. However, from the standpoint of hypervisor 130 (which, in typical embodiments, executes directly on computer host 100), each VNIC 125 is a source application from which it receives data packets that are to be transmitted over a network via one or more PNICs (which will be described in further detail below) of computer host 100, or a destination application for data packets that are received over the network via a PNIC of computer host 100. Alternatively, hypervisor 130 may transmit data packets between virtual machines that execute on computer host 100 without transmitting those data packets over the network (i.e., via any of the PNICs of computer host 100).
  • In one or more embodiments, kernel 136 serves as a liaison between VMs 110 and the physical hardware of computer host 100. Kernel 136 is a central operating system component, and executes directly on host 100. In embodiments, kernel 136 allocates memory, schedules access to physical CPUs, and manages access to physical hardware devices connected to computer host 100.
  • As shown in FIG. 1, kernel 136 implements a virtual switch 135. Virtual switch 135 enables virtual machines executing on computer host 100 to communicate with each other using the same protocols as physical switches. Virtual switch 135 emulates a physical network switch by allowing virtual machines to connect to one or more ports (via the corresponding VNIC of the virtual machines), accepting frames of data (i.e., typically Ethernet frames) from the VNICs, and forwarding the frames to other VNICs connected to other ports of the virtual switch, or, alternatively, to a PNIC of computer host 100. Thus, virtual switch 135 is a software emulation of a physical switch operating at the data-link layer.
  • As shown in FIG. 1, VNIC 125 1 and VNIC 125 N (which correspond to VMMs 131 1 and 131 N, respectively) connect to virtual switch 135. Further, virtual switch 135 connects to PNIC driver 138. According to embodiments, PNIC driver 138 is a device driver for a physical network adapter connected to computer host 100. PNIC driver 138 receives data from virtual switch 135 and transmits the received data over the network via a PNIC for which PNIC driver 138 serves as device driver. PNIC driver 138 also handles incoming data from the PNIC and, among other things, forwards the received data to virtual machines via virtual switch 135.
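  • The forwarding behavior described above can be pictured with a brief sketch. The following code is not taken from the patent; it is a minimal user-space illustration in C of data-link-layer forwarding by destination MAC with source-MAC learning, in which the port numbers, table size, and function names (e.g., vswitch_forward) are assumptions made for illustration.

```c
/* Illustrative sketch of data-link-layer forwarding in a virtual switch.
 * Names, sizes, and the fixed table are assumptions for illustration only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define UPLINK_PORT 7            /* port attached to the PNIC driver */
#define MAC_LEN     6
#define TABLE_SIZE  64

struct mac_entry {
    uint8_t mac[MAC_LEN];
    int     port;
    int     used;                /* 0 = free slot */
};

static struct mac_entry mac_table[TABLE_SIZE];

static int mac_lookup(const uint8_t *mac)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (mac_table[i].used && memcmp(mac_table[i].mac, mac, MAC_LEN) == 0)
            return mac_table[i].port;
    return -1;                   /* unknown destination */
}

static void mac_learn(const uint8_t *mac, int port)
{
    if (mac_lookup(mac) >= 0)
        return;
    for (int i = 0; i < TABLE_SIZE; i++)
        if (!mac_table[i].used) {
            memcpy(mac_table[i].mac, mac, MAC_LEN);
            mac_table[i].port = port;
            mac_table[i].used = 1;
            return;
        }
}

/* Forward one Ethernet frame that arrived on ingress_port. */
static int vswitch_forward(const uint8_t *frame, int ingress_port)
{
    const uint8_t *dst = frame;            /* bytes 0-5: destination MAC */
    const uint8_t *src = frame + MAC_LEN;  /* bytes 6-11: source MAC     */

    mac_learn(src, ingress_port);          /* remember where the sender lives  */
    int out = mac_lookup(dst);
    if (out < 0)
        out = UPLINK_PORT;                 /* unknown: hand to the PNIC driver */
    printf("frame from port %d forwarded to port %d\n", ingress_port, out);
    return out;
}

int main(void)
{
    uint8_t frame[14] = { 0xaa,0xaa,0xaa,0xaa,0xaa,0xaa,   /* dst MAC   */
                          0xbb,0xbb,0xbb,0xbb,0xbb,0xbb,   /* src MAC   */
                          0x08,0x00 };                     /* EtherType */
    vswitch_forward(frame, 2);   /* destination unknown: forwarded to uplink */
    return 0;
}
```
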
  • FIG. 1 also depicts hardware platform 140, which is another component of computer host 100. Hardware platform 140 includes all physical devices, channels, and adapters of computer host 100. Hardware platform 140 includes network adapters (i.e., PNICs) for network communication, as well as host bus adapters (HBAs) (not shown), which enable communication to external storage devices. In addition, hardware platform 140 includes the physical central processing units (CPUs) of computer host 100.
  • Hardware platform 140 also includes a random access memory (RAM) 141, which, among other things, stores programs currently in execution, as well as data required for such programs. Moreover, RAM 141 stores the various data structures needed to support network data communication. For instance, the various data components that comprise virtual switch 135 (i.e., virtual ports, routing tables, and the like) are stored in RAM 141.
  • Further, as shown in FIG. 1, hardware platform 140 also includes PNIC 142. PNIC 142 is a computer hardware component that enables computer host 100 to connect to a computer network. PNIC 142 implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard, such as Ethernet, Wi-Fi, or Token Ring. PNIC 142 (which is driven by PNIC driver 138) may use one or more techniques to indicate the availability of packets to transfer. For example, PNIC 142 may operate in a polling mode, where a CPU executes a program to examine the status of the PNIC. On the other hand, when PNIC 142 operates in an interrupt-driven mode, the PNIC alerts the CPU (via a generated interrupt) that it is ready to transfer data.
  • PNIC 142 is typically configured with one or more data queues. In some cases, the PNIC is configured with a single transmit queue (for transmitting outbound packets to the network) and a single receive queue (for receiving inbound packets from the network). Alternatively, PNIC 142 may be a multi-queue PNIC. A multi-queue PNIC has more than one transmit queue and more than one receive queue, where each transmit or receive queue can be allocated to a specific use. For example, a multi-queue PNIC 142 may be configured with two sets of transmit/receive queues. In this embodiment, a first transmit and a first receive queue may be connected to (i.e., driven by) a PNIC driver 138 connected to a first virtual switch, while a second transmit and a second receive queue are connected to a PNIC driver 138 connected to a second virtual switch. Thus, data packets transmitted by an external source for delivery to a virtual machine connected to the first virtual switch are placed (by PNIC 142) in the first receive queue. By contrast, data packets received by PNIC 142 that are destined for a virtual machine connected to the second virtual switch are placed by PNIC 142 in the second receive queue.
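  • A minimal sketch of this queue-selection idea follows. It is illustrative only, assuming a simple destination-MAC filter table; the MAC addresses, queue count, and function names are invented for the example, and real multi-queue NICs typically steer packets with hardware filters or receive-side scaling.

```c
/* Illustrative sketch of receive-queue selection in a multi-queue PNIC.
 * The MAC-to-queue filter table is an assumption for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAC_LEN 6

struct rx_filter {
    uint8_t dst_mac[MAC_LEN];    /* MAC of a VNIC reachable via one virtual switch */
    int     queue;               /* receive queue feeding that virtual switch      */
};

static const struct rx_filter filters[] = {
    { { 0x00,0x0c,0x29,0x01,0x01,0x01 }, 0 },   /* first virtual switch  */
    { { 0x00,0x0c,0x29,0x02,0x02,0x02 }, 1 },   /* second virtual switch */
};

static int select_rx_queue(const uint8_t *frame)
{
    for (size_t i = 0; i < sizeof(filters) / sizeof(filters[0]); i++)
        if (memcmp(frame, filters[i].dst_mac, MAC_LEN) == 0)
            return filters[i].queue;
    return 0;                    /* default queue for unmatched traffic */
}

int main(void)
{
    uint8_t frame[14] = { 0x00,0x0c,0x29,0x02,0x02,0x02,   /* dst MAC */
                          0x00,0x0c,0x29,0xaa,0xbb,0xcc,   /* src MAC */
                          0x08,0x00 };
    printf("packet placed in receive queue %d\n", select_rx_queue(frame));
    return 0;
}
```
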
  • In order to support the networking changes required for executing latency sensitive virtual machines, the embodiment depicted in FIG. 1 includes a VM management server 150. VM management server 150 is, in embodiments, a server application executing either within computer host 100, or (as shown in FIG. 1) remotely from computer host 100. Embodiments of VM management server 150 provide an interface (such as a graphical user interface (or GUI)) through which a system administrator may define, configure, and deploy virtual machines for execution on one or more host computers.
  • In addition, VM management server 150 provides for the configuration of virtual machines as highly latency sensitive virtual machines. According to one or more embodiments, VM management server 150 maintains a latency sensitivity table 155, which defines latency sensitivity characteristics of virtual machines. Latency sensitivity table 155 is described in further detail below.
  • As shown in FIG. 1, VM management server 150 communicates with computer host 100, either through a direct local connection or over a computer network. In order to facilitate such communication, VM management agent 134 executes on computer host 100. Although VM management agent 134 is not part of kernel 136, embodiments of the agent run at the hypervisor level within hypervisor 130. However, in other embodiments, VM management agent 134 may run as a user program within execution space 120.
  • VM management agent 134 receives instructions from VM management server 150 and carries out tasks on behalf of VM management server 150. Among the tasks performed by VM management agent 134 are the configuration and instantiation of virtual machines. One aspect of the configuration of a virtual machine is whether that virtual machine is highly latency sensitive. Thus, VM management agent 134 receives a copy of latency sensitivity table 155 and saves the underlying data within RAM 141 as latency sensitivity data 143. As shown in FIG. 1, once latency sensitivity data 143 is saved to RAM, software modules associated with the transmission of data packets to and from virtual machines access that information in order to determine which virtual machines are highly latency sensitive. Upon determining that one or more virtual machines are highly latency sensitive, networking software (residing in either the VNIC or in the kernel) regulates the transmission of data packets in support of virtual machines that are latency sensitive.
  • FIG. 2 is a block diagram that depicts one embodiment of latency sensitivity table 155. As shown in the figure, latency sensitivity table 155 stores multiple rows of data, where each row corresponds to a particular virtual machine within host 100. Each virtual machine is identified on the host by a unique VM ID 210. A VM ID 210 may be any unique binary or alphanumeric value that is associated with a virtual machine. As shown in FIG. 2, latency sensitivity table 155 has a plurality of entries, each of which corresponds to a virtual machine VM 110 depicted in FIG. 1.
  • For each VM ID 210, latency sensitivity table 155 stores a latency sensitivity indicator. This indicator may take on one of two distinct values (such as Y or N), indicating whether the corresponding virtual machine is highly latency sensitive. In other embodiments, the latency sensitivity indicator may take on more than two values (e.g., High, Medium, Low, or Normal), to provide for specifying different degrees of latency sensitivity for the corresponding virtual machine. In FIG. 2, VM ID 210 1 (corresponding to VM 110 1) identifies a virtual machine that is not highly latency sensitive because its latency sensitivity indicator is set to N. On the other hand, VM ID 210 2 (which corresponds to VM 110 2) identifies a virtual machine that is highly latency sensitive because its corresponding latency sensitivity indicator is set to Y. For example, VM 110 1 might be a virtual machine that runs a batch processing application (such as a monthly billing system), which typically does not require split-second response time and is generally unaffected by the jitter that may occur in a virtualized computing environment. On the other hand, VM 110 2 may run a real-time financial trading application, which is a representative latency sensitive application.
  • According to embodiments, a VM that is defined with a latency sensitivity indicator of Y (or some other positive indicator) is treated by the networking software as highly latency sensitive. That is, the networking software in the VNIC and kernel is configured to determine which virtual machines are highly latency sensitive (based on the aforementioned criteria), and to transmit and receive data packets for those virtual machines in such a way so as to minimize any transmission delay for the packets. Thus, the data packets transmitted and received by VM 110 2 (a highly latency sensitive virtual machine) are subjected to a minimal amount of delay (i.e., latency). By contrast, the data packets transmitted and received by VM 110 1 (which is not latency sensitive) are not transmitted in a way so as to minimize any delay in the delivery of the packets. Rather, the data packets of VM 110 1 are handled so as to improve the overall efficiency of execution of all virtual machines on computer host 100, which may nonetheless result in delays in packet delivery for the VM.
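  • The lookup performed by the networking software can be sketched as follows. This is a hypothetical illustration of consulting per-VM latency sensitivity data, not the patent's data layout; the structure fields, enum values, and the rule that unknown VMs default to normal handling are assumptions.

```c
/* Illustrative sketch of a latency sensitivity lookup; field and function
 * names are assumptions, not the patent's implementation. */
#include <stdbool.h>
#include <stdio.h>

enum latency_class { LAT_NORMAL, LAT_HIGH };   /* a multi-level scheme is also possible */

struct latency_entry {
    unsigned int       vm_id;          /* unique VM ID, as in FIG. 2 */
    enum latency_class sensitivity;    /* the Y/N indicator          */
};

/* In-memory copy of the latency sensitivity information pushed to the host. */
static const struct latency_entry latency_data[] = {
    { 1, LAT_NORMAL },   /* e.g., a batch-processing VM         */
    { 2, LAT_HIGH   },   /* e.g., a VM running a trading system */
};

static bool vm_is_latency_sensitive(unsigned int vm_id)
{
    for (size_t i = 0; i < sizeof(latency_data) / sizeof(latency_data[0]); i++)
        if (latency_data[i].vm_id == vm_id)
            return latency_data[i].sensitivity == LAT_HIGH;
    return false;        /* unknown VMs get normal (queued) handling */
}

int main(void)
{
    printf("VM 1 highly latency sensitive: %s\n", vm_is_latency_sensitive(1) ? "yes" : "no");
    printf("VM 2 highly latency sensitive: %s\n", vm_is_latency_sensitive(2) ? "yes" : "no");
    return 0;
}
```
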
  • FIG. 3 is a conceptual diagram that illustrates the disabling of data packet queuing in a VNIC of a highly latency sensitive virtual machine, according to one or more embodiments. Data packet queuing (also referred to as interrupt coalescing) entails a delay in transmission of an interrupt from a physical or virtual network interface (such as a VNIC) until a predetermined number of data packets have been received by the network interface. Thus, a VNIC that performs packet queuing does not immediately transmit an interrupt upon receiving a data packet, whether from the corresponding guest virtual machine or from the hypervisor. Rather, the VNIC delays the posting of the interrupt until several packets have been received and queued therein.
  • Conceptually, the packets may be viewed as being queued within the VNIC itself in either a transmit queue or a receive queue. The transmit queue for the VNIC queues packets that are transmitted by a process executing in the guest virtual machine that corresponds to the VNIC, and that are destined for another virtual machine executing on the same host, or, alternatively, to a network destination that is external to the host. The receive queue, on the other hand, queues packets that are transmitted by a process executing external to the virtual machine that corresponds to the VNIC, and that are destined for that virtual machine. It should be noted that packet queuing may occur, in embodiments, in the guest operating system (for packets to be transmitted from the virtual machine) and in the kernel (for packets to be received by the virtual machine).
  • Packet queuing reduces the interrupt rate at which the VNIC operates. That is, with packet queuing, the VNIC transmits fewer interrupts to the kernel for packets that are to be transmitted from the virtual machine. Such an interrupt comprises, in one or more embodiments, a kernel call that informs the kernel that the VNIC has a certain number of data packets that are ready to be transmitted. Similarly, with packet queuing, the VNIC transmits fewer interrupts to the guest virtual machine for packets that are to be received by the virtual machine. Such an interrupt comprises, in one or more embodiments, a software interrupt that the VNIC posts to an interrupt handler that executes in the guest virtual machine, where the software interrupt informs the interrupt handler that one or more packets have been received at the VNIC. Generating fewer interrupts when the VNIC queues data packets results in fewer context switches by the kernel and by the guest operating system. However, packet queuing can add jitter, and in some cases, may have a noticeable impact on average latency, especially with input/output (I/O) bound applications.
  • In FIG. 3, VM 110 1 is a virtual machine that is not highly latency sensitive, while VM 110 2 is highly latency sensitive. This is the case based on the corresponding entries for these virtual machines in latency sensitivity table 155, depicted in FIG. 2. Because VM 110 1 is not highly latency sensitive, VNIC 125 1 (i.e., the VNIC that corresponds to VM 110 1) performs packet queuing for the VM. This is depicted by queues 301 and 302, which are depicted as residing within VNIC 125 1. As shown, queue 301 stores packets that are transmitted from VM 110 1. These packets are queued (or, equivalently, interrupts are coalesced) in VNIC 125 1 until queue 301 becomes full, or, alternatively, when a timer (not shown) associated with queue 301 expires. Once VNIC 125 1 determines that the packets stored in queue 301 are to be transmitted, VNIC 125 1 generates a software interrupt (or, in embodiments, makes a kernel call) to kernel 136 to inform the kernel that the VNIC has a certain number of packets that are ready to be transmitted.
  • Similarly, queue 302 stores packets that are transmitted to VNIC 125 1 via kernel 136. In embodiments, a transmitter, such as another virtual machine or an application external to computer host 100, transmits data packets for delivery to VM 110 1. The packets are routed to computer host 100, after which they are forwarded, by software executing in kernel 136, to VNIC 125 1. VNIC 125 1 then queues the packets in queue 302. VNIC 125 1 then generates a software interrupt that is received by an interrupt handler executing under control of the guest operating system in VM 110 1. VNIC 125 1 generates the interrupt when, for example, the number of packets in queue 302 exceeds a threshold value or when the amount of time that packets are queued in queue 302 exceeds a threshold amount of time. It should be noted that, in the embodiment illustrated in FIG. 3, queue 302 resides within VNIC 125 1. However, in other embodiments, packets may be queued within one or more data buffers in kernel 136. In such embodiments, when kernel 136 determines that the number of queued packets exceeds some threshold, or that the packets have been queued for an amount of time that exceeds a threshold time, kernel 136 then posts a software interrupt to VNIC 125 1 indicating that kernel 136 has a certain number of packets that are ready to be transmitted to VNIC 125 1.
  • In contrast with VM 110 1, VM 110 2 is a highly latency sensitive virtual machine (based on the corresponding entry for VM 110 2 in latency sensitivity table 155, depicted in FIG. 2). As previously mentioned, the networking software in kernel 136 and VNIC 125 2 determines that VNIC 125 2 is associated with a highly latency sensitive virtual machine (i.e., VM 110 2) and dynamically disables packet queuing for VNIC 125 2. Thus, as shown in FIG. 3, packets that arrive at VNIC 125 2 from VM 110 2 are not queued at VNIC 125 2. Instead, as packets arrive from VM 110 2, they are immediately transmitted to kernel 136 for delivery, either to another virtual machine or to an external network destination. In one or more embodiments, VNIC 125 2 posts an interrupt (or makes a kernel call) to kernel 136 for each packet that arrives from VM 110 2. The interrupt or kernel call indicates that VNIC 125 2 has a packet that is ready for transmission.
  • As shown in FIG. 3, packets that are received by VNIC 125 2 from kernel 136 are also not queued at VNIC 125 2. Rather, these packets are immediately transmitted, without delay, to VM 110 2. For example, in one or more embodiments, VNIC 125 2 posts a software interrupt to an interrupt handler executing in VM 110 2 to indicate that VNIC 125 2 has a packet that is ready to be transmitted to the virtual machine. In other embodiments, kernel 136 posts an interrupt to VNIC 125 2, which then immediately receives and forwards the packet to VM 110 2. Thus, the interrupt rate for VNIC 125 2, for both transmitted and received packets, is higher than that of VNIC 125 1, which generally results in lower network latency for VNIC 125 2 as compared to VNIC 125 1.
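  • The two paths contrasted in FIG. 3 can be summarized in a short sketch. The code below is an illustrative user-space model, not the hypervisor's implementation: the count and age thresholds, the arrival-driven timer check, and names such as vnic_tx and post_interrupt are assumptions made for illustration.

```c
/* Illustrative sketch: a VNIC either coalesces packets until a count or age
 * threshold is reached, or posts one interrupt per packet when the owning VM
 * is highly latency sensitive. Thresholds and names are assumptions. */
#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define FLUSH_COUNT    16          /* coalesce up to 16 packets ...      */
#define FLUSH_AGE_NSEC 50000L      /* ... or for at most 50 microseconds */

struct vnic {
    bool latency_sensitive;        /* taken from latency sensitivity data */
    int  queued;                   /* packets currently coalesced         */
    long first_queued_nsec;        /* age of the oldest coalesced packet  */
};

static long now_nsec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000L + ts.tv_nsec;
}

/* Stand-in for the kernel call / software interrupt announcing ready packets. */
static void post_interrupt(int count)
{
    printf("interrupt posted for %d packet(s)\n", count);
}

static void vnic_tx(struct vnic *v)
{
    if (v->latency_sensitive) {    /* queuing disabled: interrupt per packet */
        post_interrupt(1);
        return;
    }
    if (v->queued == 0)
        v->first_queued_nsec = now_nsec();
    v->queued++;
    if (v->queued >= FLUSH_COUNT ||
        now_nsec() - v->first_queued_nsec >= FLUSH_AGE_NSEC) {
        post_interrupt(v->queued); /* flush the coalesced batch */
        v->queued = 0;
    }
}

int main(void)
{
    struct vnic normal    = { .latency_sensitive = false };
    struct vnic sensitive = { .latency_sensitive = true  };

    for (int i = 0; i < 16; i++) vnic_tx(&normal);     /* one interrupt for the batch */
    for (int i = 0; i < 3;  i++) vnic_tx(&sensitive);  /* three interrupts            */
    return 0;
}
```
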
  • FIG. 4 is a conceptual diagram that illustrates the disabling of LRO in a VNIC of a highly latency sensitive virtual machine, according to embodiments. LRO is a technique by which multiple incoming packets to a network interface (e.g., a physical NIC or a VNIC) are consolidated into a larger packet before being passed to higher layers of the networking stack. This has the effect of reducing the number of packets that require processing at the receiving end of a transmission. LRO is typically performed at the transport layer (i.e., at the TCP layer in a TCP/IP-based network). That is, LRO entails the aggregation of smaller TCP packets into larger TCP packets before being transmitted up the network stack. Since the receipt of a TCP packet gives rise to an acknowledgement by the recipient, the use of LRO entails fewer TCP acknowledgements than a scheme that does not use LRO.
  • As shown in FIG. 4, VM 110 1 is not a highly latency sensitive virtual machine, while VM 110 2 is a highly latency sensitive virtual machine. As was the case for VMs 110 1 and 110 2 in FIG. 3, the latency sensitivity of each of the VMs in FIG. 4 is determined based upon the entries in latency sensitivity table 155, depicted in FIG. 2. Because VM 110 1 is not highly latency sensitive, in the embodiment shown, VNIC 125 1 (which corresponds to VM 110 1) performs LRO for received TCP packets. That is, the transport layer software in kernel 136 forwards TCP packets to VNIC 125 1, where those packets are then consolidated into larger TCP packets before being transmitted to VM 110 1. Thus, as shown in the figure, TCP packet 401 1 is currently being transmitted from VNIC 125 1 to VM 110 1. More specifically, TCP layer software in VNIC 125 1 communicates TCP packet 401 1 to transport layer software executing under control of the guest operating system of VM 110 1. Further, TCP packet 401 2 is currently being assembled from received (smaller) TCP packets. Thus, when TCP packet 401 2 is fully formed, VNIC 125 1 will initiate transmission of this packet as well.
  • Since TCP is a reliable data delivery service, a TCP sender relies upon acknowledgements to determine whether a given TCP packet should be retransmitted. Thus, as shown in FIG. 4, upon receipt of TCP packet 401 1, an acknowledgment 402 1 is sent from transport layer software in VM 110 1 to VNIC 125 1, and on to kernel 136. It should be noted that kernel 136 then transmits this acknowledgment toward the original sender of the packets consolidated in TCP packet 401 1. When the acknowledgement is received, the original sender of the packets consolidated in TCP packet 401 1 initiates a next packet transmission. It should also be noted that the frequency of the acknowledgments 402 1 from VM 110 1 is less than the frequency of packet transmission to VNIC 125 1. This is due to the consolidation of smaller TCP packets into larger TCP packets 401 at VNIC 125 1.
  • In contrast with VM 110 1, VM 110 2 is a highly latency sensitive virtual machine (based upon the entry corresponding to VM 110 2 in latency sensitivity table 155, depicted in FIG. 2). The transport layer software of VNIC 125 2 (which corresponds to VM 110 2) determines that VM 110 2 is highly latency sensitive and, based on this, disables LRO processing in the VNIC. Thus, as shown in the figure, TCP packets that arrive at VNIC 125 2 from kernel 136 are not consolidated by the VNIC into larger TCP packets. Rather, the received TCP packets are immediately forwarded to VM 110 2, where transport layer software executing therein processes the packets. Further, as shown in the figure, the transport layer software of VM 110 2 sends acknowledgements 402 2 to VNIC 125 2, and on to kernel 136. As with the acknowledgements 402 1 transmitted by VM 110 1, acknowledgments 402 2 are forwarded by kernel 136 to the original sender of the TCP packets. However, the frequency of acknowledgements 402 2 is greater than that of acknowledgements 402 1 because the TCP packets received by VNIC 125 2 are not consolidated. Therefore, the original sender of TCP packets to VNIC 125 2 will receive a greater number of acknowledgements on a more frequent basis. This tends to reduce network latency, as more frequent acknowledgements give rise to more frequent transmission and, therefore, lower transmission delay.
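  • The LRO behavior contrasted above can be sketched as follows. This is an illustrative model only: it aggregates one flow into a fixed buffer and omits the sequence-number and flag checks a real LRO engine performs; the buffer size and all names are assumptions.

```c
/* Illustrative sketch of LRO: consecutive small TCP segments of one flow are
 * merged into a larger segment before delivery to the guest, unless LRO has
 * been disabled because the VM is highly latency sensitive. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LRO_MAX_BYTES 65536

struct lro_ctx {
    bool   lro_enabled;          /* false for a highly latency sensitive VM */
    char   buf[LRO_MAX_BYTES];   /* aggregation buffer for one TCP flow     */
    size_t len;                  /* bytes aggregated so far                 */
};

/* Stand-in for handing one (possibly merged) segment to the guest TCP stack. */
static void deliver_to_guest(const char *data, size_t len)
{
    (void)data;
    printf("delivered segment of %zu bytes to guest\n", len);
}

static void lro_flush(struct lro_ctx *c)
{
    if (c->len) {
        deliver_to_guest(c->buf, c->len);
        c->len = 0;
    }
}

static void lro_receive(struct lro_ctx *c, const char *seg, size_t seg_len)
{
    if (!c->lro_enabled) {       /* LRO disabled: pass the segment through at once */
        deliver_to_guest(seg, seg_len);
        return;
    }
    if (c->len + seg_len > LRO_MAX_BYTES)
        lro_flush(c);            /* aggregate is full: deliver it first */
    memcpy(c->buf + c->len, seg, seg_len);
    c->len += seg_len;
}

int main(void)
{
    static char seg[1460];                       /* one MSS-sized TCP segment       */
    static struct lro_ctx merged      = { .lro_enabled = true  };
    static struct lro_ctx passthrough = { .lro_enabled = false };

    for (int i = 0; i < 3; i++) lro_receive(&merged, seg, sizeof(seg));
    lro_flush(&merged);                          /* one 4380-byte delivery, one ACK */
    for (int i = 0; i < 3; i++) lro_receive(&passthrough, seg, sizeof(seg));
    return 0;                                    /* three 1460-byte deliveries      */
}
```
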
  • FIG. 5 is a conceptual diagram illustrating the adjustment of the interrupt rate in a multi-queue PNIC to accommodate a highly latency sensitive virtual machine, according to one or more embodiments. As mentioned earlier, PNICs are typically configured with one or more transmit and receive queues. When data packets are received at a PNIC, whether from the network that the PNIC connects to or from the operating system that manages the PNIC, the packets are placed into either the transmit queue (for outbound packets) or the receive queue (for inbound packets). The PNIC is configured with a certain interrupt rate, whereby the PNIC generates interrupts to the host when it has packets that are ready to be received from the receive queue or transmitted from the transmit queue. As was the case for VNICs, PNICs may queue data packets and have the packets transmitted (or received) once the queue length exceeds a threshold. At such time, an interrupt is generated and the packets are transmitted or received by the host, depending on the queue that the packets reside in.
  • Multi-queue PNICs are conceptually similar to single queue PNICs. Multi-queue PNICs have more than one transmit queue and more than one receive queue. This is advantageous because it increases the throughput of the PNIC, especially on multiprocessor computer hosts. Further, each transmit or receive queue may be dedicated to a single processor, thus dividing packet processing among processors and freeing certain other processors from the task of processing packets. Further, each transmit or receive queue in a multi-queue PNIC may be assigned to one or more VNICs. That is, multi-queue PNICs are often equipped with a routing module to direct packets destined for certain virtual machines into receive queues that correspond to the VNICs of those virtual machines. In similar fashion, the kernel directs network packets transmitted by certain virtual machines to transmit queues of the PNIC that correspond to those virtual machines.
  • Further, the interrupt rate for a multi-queue PNIC is configurable on a per-queue level. That is, each transmit or receive queue may be configured with its own interrupt rate. This scenario is illustrated in FIG. 5. As described earlier, VM 110 1 is not a highly latency sensitive virtual machine, while VM 110 2 is a highly latency sensitive virtual machine. Networking software in kernel 136 determines the latency sensitivity of each virtual machine based on corresponding entries for the virtual machines in latency sensitivity table 155, as depicted in FIG. 2. In FIG. 5, PNIC 142 is a multi-queue PNIC with two transmit/receive queues: queue 501 1 and queue 501 2. For purposes of illustration, each of queues 501 1 and 501 2 is configured to transmit and receive data packets. As shown, queue 501 1 has been allocated to transmit and receive data packets for VM 110 1. Because VM 110 1 is not highly latency sensitive, the interrupt rate for queue 501 1 is not increased. Therefore, as shown in the figure, packets are accumulated in queue 501 1 until an interrupt is generated. In one or more embodiments, an interrupt is generated for queue 501 1 when the number of packets stored in the queue exceeds a threshold value, or when the packets have been stored in the queue beyond a threshold amount of time.
  • By contrast, kernel 136 determines that queue 501 2, which is allocated to VM 110 2, is allocated to a highly latency sensitive virtual machine. Therefore, in the embodiment depicted, kernel 136 increases the interrupt rate for queue 501 2. This has the effect of suppressing the queuing of data packets in the queue. Thus, when a data packet is placed in the transmit queue of queue 501 2, an interrupt is immediately generated, which causes the packet to be transmitted without any further delay (i.e., without waiting for other packets to be placed in the transmit queue of queue 501 2). Further, if a packet arrives at PNIC 142 and is destined for VM 110 2, the packet is routed to the receive queue of queue 501 2, whereupon an interrupt is immediately generated, which causes kernel 136 to transmit the received packet to VM 110 2 without waiting for additional packets to be placed in the receive queue of queue 501 2. In this way, network latency for VM 110 2 is reduced as compared with the network latency experienced by VM 110 1.
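  • The per-queue interrupt moderation of FIG. 5 can be sketched briefly. The model below is illustrative, treating an increased interrupt rate as a smaller per-interrupt batch size; the batch sizes and names are assumptions rather than values from the patent.

```c
/* Illustrative sketch of per-queue interrupt moderation in a multi-queue PNIC:
 * the queue serving a highly latency sensitive VM interrupts per packet, while
 * other queues keep batching. Batch sizes and names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

struct pnic_queue {
    int id;
    int pkts_per_interrupt;      /* moderation: packets batched per interrupt */
    int pending;                 /* packets currently waiting in the queue    */
};

static void configure_queue(struct pnic_queue *q, bool vm_latency_sensitive)
{
    /* "Increasing the interrupt rate" amounts to shrinking the batch size. */
    q->pkts_per_interrupt = vm_latency_sensitive ? 1 : 8;
}

static void queue_receive(struct pnic_queue *q)
{
    q->pending++;
    if (q->pending >= q->pkts_per_interrupt) {
        printf("queue %d: interrupt raised for %d packet(s)\n", q->id, q->pending);
        q->pending = 0;
    }
}

int main(void)
{
    struct pnic_queue q1 = { .id = 1 };   /* serves VM 110-1 (not latency sensitive) */
    struct pnic_queue q2 = { .id = 2 };   /* serves VM 110-2 (latency sensitive)     */

    configure_queue(&q1, false);
    configure_queue(&q2, true);

    for (int i = 0; i < 8; i++) queue_receive(&q1);   /* one interrupt    */
    for (int i = 0; i < 3; i++) queue_receive(&q2);   /* three interrupts */
    return 0;
}
```
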
  • FIG. 6 is a flow diagram that depicts an embodiment of a method 600 for transmitting data packets by a VNIC, where the mode of packet transmission is based on the latency sensitivity of the virtual machine to which the VNIC corresponds. In embodiments, method 600 is carried out by software that executes as part of the VNIC. Method 600 begins at step 610, where the VNIC receives a data packet. The data packet may be received from a transmitting application executing under control of the guest operating system of the virtual machine to which the VNIC corresponds. Alternatively, the data packet is received from the kernel, where the packet is to be transmitted to an application executing in the virtual machine.
  • After receiving the data packet at step 610, method 600 proceeds to step 620. At step 620, software that executes as part of the VNIC determines whether the virtual machine to which the VNIC corresponds (which is either the source or destination of the packet) is highly latency sensitive. In one or more embodiments, the VNIC determines the latency sensitivity of the virtual machine by inspecting a memory-based data structure, such as latency sensitivity data 143, which itself is based on latency sensitivity table 155. According to these embodiments, if an entry for the virtual machine in the latency sensitivity data stores a latency sensitivity indicator that is set to Y (or some other value that indicates that the virtual machine is latency sensitive), then the VNIC determines that the corresponding virtual machine is highly latency sensitive. If, however, the latency sensitivity indicator is not set to Y, then the VNIC determines that the virtual machine is not highly latency sensitive.
  • If, at step 620, it is determined that the virtual machine is not highly latency sensitive, then method 600 proceeds to step 650, where the received packet is queued with other packets received by the VNIC, as described below. If, however, it is determined that the virtual machine is highly latency sensitive, then method 600 proceeds to step 630.
  • At step 630, the VNIC determines the rate at which packets are currently being transmitted and/or received by the VNIC. According to embodiments, when the packet rate is high, queuing of data packets is allowed to take place, even for highly latency sensitive virtual machines. The reason is that virtual machines that have high packet rates do not generally suffer when packets are delayed by queuing. For these virtual machines, the system-wide benefits of queuing (i.e., fewer context switches due to a decreased interrupt rate) outweigh the extra packet delay that packet queuing causes. If the VNIC packet rate is determined to be high (i.e., that, over a predetermined time period, a large number of packets are transmitted to the VNIC), then method 600 proceeds to step 650, where the received packet is queued with other packets received by the VNIC. If the VNIC packet rate is determined to be low (i.e., that, over a predetermined time period, a small number of packets are transmitted to the VNIC), then method 600 proceeds to step 640.
  • At step 640, the VNIC determines the CPU utilization of the corresponding virtual machine. According to embodiments, if the CPU utilization of a virtual machine (i.e., the utilization of the virtual CPUs of the virtual machine) is low, then such a virtual machine is often less likely to be compute-bound. That is, the virtual machine is less likely to be executing intensive computations (e.g., calculating prices of financial instruments in a high-speed trading system). Rather, the virtual machine is more likely to be I/O-bound. In other words, the virtual machine is most likely waiting for I/O operations to complete before engaging in computation. In such a scenario, it is important for the virtual machine to experience as little packet delay as possible. On the other hand, in the case of a compute-bound virtual machine, packet delay is relatively unimportant in comparison to any delays in CPU processing, even for virtual machines that are determined to be highly latency sensitive.
  • Therefore, at step 640, if the VNIC determines that the corresponding virtual machine has low CPU utilization (i.e., that the virtual machine is not compute-bound), then method 600 proceeds to step 660. Otherwise, if the VNIC determines that the virtual machine does not have low CPU utilization (i.e., that the virtual machine is in fact compute-bound), then method 600 proceeds to step 650, where the received data packet is queued with other received data packets.
  • At step 660, the VNIC immediately transmits the received data packet, thus minimizing packet delay (and eliminating any delay caused by packet queuing). This scenario is illustrated in FIG. 3, where VNIC 125 2 (which corresponds to VM 110 2) does not queue any data packets therein. Thus, the interrupt rate for the VNIC is higher than it would be if packets had been queued at the VNIC. After step 660, method 600 proceeds to step 690.
  • As shown in FIG. 6, step 650 is executed when the data packet is received for a virtual machine that is not highly latency sensitive, or when the virtual machine is highly latency sensitive, but has a high packet rate or high CPU utilization. At step 650, the received data packet is queued with other data packets already received at the VNIC for later transmission. According to embodiments, data packets in the VNIC are queued in a transmit queue (for packets outbound from the corresponding virtual machine) or in a receive queue (for inbound packets). The queuing of data packets is illustrated by VNIC 125 1 (which corresponds to non-highly latency sensitive virtual machine VM 110 1), depicted in FIG. 3.
  • After the received data packet is queued with other data packets for later transmission, method 600 then proceeds to step 670. At step 670, the VNIC determines whether a queuing threshold has been exceeded. For example, the VNIC may determine that either or both transmit and receive queues therein are full, or that the number of packets stored in the queues exceeds a predetermined value. In other embodiments, the VNIC determines that the packets have been stored in the queues for greater than some predetermined amount of time.
  • If, at step 670, the VNIC determines that the queuing threshold has not been exceeded, then method 600 proceeds directly to step 690. However, if the VNIC determines that the queuing threshold has been exceeded, then method 600 proceeds to step 680. At step 680, the queued packets are transmitted by the VNIC. For example, if the queued data packets are to be received by an application executing in the virtual machine, then the VNIC posts a software interrupt to the virtual machine, indicating that the packets are ready to be received by the virtual machine. On the other hand, if the packets are to be transmitted from the virtual machine to another virtual machine (via a virtual switch) or to a target application executing outside of the host computer (via a PNIC of the host computer), then the VNIC posts a software interrupt to the hypervisor (or, in some embodiments, the VNIC makes a kernel call to the hypervisor), indicating that the data packets are ready to be transmitted.
  • After transmitting the data packets at step 680, method 600 proceeds to step 690. At step 690, the VNIC determines whether more data packets should be received. In one or more embodiments, the VNIC polls the virtual machine or the hypervisor to determine whether additional packets are available. The polling takes place at a predetermined interval. In other embodiments, the VNIC is enabled to receive a software interrupt from the virtual machine or the hypervisor indicating that additional data packets are ready to be received by the VNIC. If the VNIC determines that more data packets are to be received, then method 600 returns to step 610 to receive the data packet. Method 600 then cycles through the steps described above. If, however, the VNIC determines that there are no more data packets (or, alternatively, that the VNIC has been disabled for receiving data packets), then method 600 terminates.
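  • Taken together, the steps of method 600 can be sketched as a single decision routine. The code below is a hypothetical illustration of the flow of FIG. 6; the numeric cut-offs for packet rate, CPU utilization, and queue flushing, as well as all names, are assumptions and not values given in the patent.

```c
/* Illustrative sketch of the FIG. 6 decision flow: a packet is handled
 * immediately only when the VM is highly latency sensitive, its recent packet
 * rate is low, and its virtual CPUs are not busy; otherwise it is queued and
 * flushed once a count threshold is crossed. Cut-offs are assumptions. */
#include <stdbool.h>
#include <stdio.h>

#define PKT_RATE_HIGH_PPS 100000   /* assumed "high packet rate" cut-off */
#define CPU_UTIL_HIGH_PCT 80       /* assumed "compute-bound" cut-off    */
#define QUEUE_FLUSH_COUNT 16

struct vnic_state {
    bool latency_sensitive;        /* from latency sensitivity data        */
    long packet_rate_pps;          /* recent transmit/receive packet rate  */
    int  vcpu_util_pct;            /* utilization of the VM's virtual CPUs */
    int  queued;                   /* packets held for later transmission  */
};

/* Stand-in for posting the interrupt that moves packets onward (steps 660/680). */
static void transmit_now(int count)
{
    printf("transmitting %d packet(s) immediately\n", count);
}

static void handle_packet(struct vnic_state *v)            /* method 600 */
{
    if (v->latency_sensitive &&                             /* step 620 */
        v->packet_rate_pps < PKT_RATE_HIGH_PPS &&           /* step 630 */
        v->vcpu_util_pct < CPU_UTIL_HIGH_PCT) {             /* step 640 */
        transmit_now(1);                                    /* step 660 */
        return;
    }
    v->queued++;                                            /* step 650 */
    if (v->queued >= QUEUE_FLUSH_COUNT) {                   /* step 670 */
        transmit_now(v->queued);                            /* step 680 */
        v->queued = 0;
    }
}

int main(void)
{
    struct vnic_state io_bound = { true,  20000, 15, 0 };   /* takes the fast path */
    struct vnic_state busy     = { true, 250000, 95, 0 };   /* still gets queued   */

    handle_packet(&io_bound);
    for (int i = 0; i < 16; i++) handle_packet(&busy);
    return 0;
}
```
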
  • Certain embodiments as described above involve a hardware abstraction layer on top of a host computer. The hardware abstraction layer allows multiple containers to share the hardware resource. These containers, isolated from each other, have at least a user application running therein. The hardware abstraction layer thus provides benefits of resource isolation and allocation among the containers. In the foregoing embodiments, virtual machines are used as an example for the containers and hypervisors as an example for the hardware abstraction layer. As described above, each virtual machine includes a guest operating system in which at least one application runs. It should be noted that these embodiments may also apply to other examples of containers, such as containers not including a guest operating system, referred to herein as “OS-less containers” (see, e.g., www.docker.com). OS-less containers implement operating system-level virtualization, wherein an abstraction layer is provided on top of the kernel of an operating system on a host computer. The abstraction layer supports multiple OS-less containers, each including an application and its dependencies. Each OS-less container runs as an isolated process in userspace on the host operating system and shares the kernel with other containers. The OS-less container relies on the kernel's functionality to make use of resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces and to completely isolate the application's view of the operating environments. By using OS-less containers, resources can be isolated, services restricted, and processes provisioned to have a private view of the operating system with their own process ID space, file system structure, and network interfaces. Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory and I/O.
  • Although one or more embodiments have been described herein in some detail for clarity of understanding, it should be recognized that certain changes and modifications may be made without departing from the spirit of the disclosure. The various embodiments described herein may employ various computer-implemented operations involving data stored in computer systems. For example, these operations may require physical manipulation of physical quantities—usually, though not necessarily, these quantities may take the form of electrical or magnetic signals, where they or representations of them are capable of being stored, transferred, combined, compared, or otherwise manipulated. Further, such manipulations are often referred to in terms, such as producing, yielding, identifying, determining, or comparing. Any operations described herein that form part of one or more embodiments of the disclosure may be useful machine operations. In addition, one or more embodiments of the disclosure also relate to a device or an apparatus for performing these operations. The apparatus may be specially constructed for specific required purposes, or it may be a general purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general purpose machines may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
  • One or more embodiments of the present disclosure may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system; computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, network attached storage (NAS), read-only memory, random-access memory (e.g., a flash memory device), a CD (Compact Disc), such as a CD-ROM, a CD-R, or a CD-RW, a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
  • Although one or more embodiments of the present disclosure have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein, but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
  • Many variations, modifications, additions, and improvements are possible. Plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the disclosure(s). In general, structures and functionality presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claim(s).

Claims (20)

We claim:
1. In a host computer having a plurality of containers including a first container executing therein, the host including a physical network interface controller (NIC), a method of transmitting and receiving data packets to and from the containers, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet that is associated with the first container;
determining whether the first container is latency sensitive by reading latency sensitivity configuration data of the first container;
if the first container is not latency sensitive, then queuing the first data packet for later transmission;
if the first container is latency sensitive, then transmitting the first data packet without queuing the first data packet.
2. The method of claim 1 wherein the first container is a first virtual machine, and wherein the first data packet is received at a virtual NIC.
3. The method of claim 2, wherein the first data packet is received from an application executing in the first virtual machine and the processing of the packet handling interrupt comprises transmitting the first data packet to a virtual switch to which a virtual NIC associated with the first virtual machine is connected.
4. The method of claim 2, wherein the first data packet is received from a hypervisor executing on the host computer and the processing of the packet handling interrupt comprises transmitting the first data packet to an application executing in the first virtual machine.
5. The method of claim 3, wherein determining whether the first virtual machine is latency sensitive comprises:
reading a latency sensitivity indicator for the first virtual machine; and
determining whether the latency sensitivity indicator is a predetermined value.
6. The method of claim 5, wherein queuing the first data packet comprises storing the first data packet in a data structure associated with the virtual NIC that is configured to store a plurality of data packets.
7. The method of claim 6, further comprising:
determining whether a packet rate for the virtual NIC is less than a predetermined threshold rate;
if the packet rate for the virtual NIC is less than the predetermined threshold rate, then queueing the first data packet and delaying processing of the packet handling interrupt; and
if the packet rate for the virtual NIC is not less than the predetermined threshold rate, then processing the packet handling interrupt.
8. The method of claim 6, further comprising:
determining whether a utilization value for one or more virtual processors of the first virtual machine is greater than a predetermined utilization value;
if the utilization value for the one or more virtual processors of the first virtual machine is greater than the predetermined utilization value, then queuing the first data packet and delaying processing of the packet handling interrupt; and
if the utilization value for the one or more virtual processors of the first virtual machine is not greater than the predetermined utilization value, then processing the packet handling interrupt.
9. The method of claim 6, wherein the queuing of the first data packet for later transmission comprises delaying transmission of the first data packet until the first data packet has been stored in the data structure greater than a predetermined amount of time.
10. The method of claim 6, wherein the queuing of the first data packet for later transmission further comprises:
delaying transmission of the first data packet until the data structure stores greater than a predetermined number of data packets.
11. The method of claim 3, wherein the first data packet is a first transmission control protocol (TCP) data packet, and queuing the first data packet comprises combining said first TCP data packet with one or more TCP packets previously received at the virtual NIC into a second TCP data packet.
12. The method of claim 2, wherein the physical NIC has a plurality of queues, each of which is associated with one or more of the plurality of virtual machines, and wherein the method further comprises:
responsive to determining that the first virtual machine is latency sensitive, increasing an interrupt rate for the one or more queues of the physical NIC that are associated with the first virtual machine.
13. A non-transitory computer-readable medium comprising instructions executable by a host computer, the host computer having a plurality of containers including a first container executing therein, and the host including a physical network interface controller (NIC), where the instructions, when executed, cause the host computer to perform a method of transmitting and receiving data packets to and from the first container, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet that is associated with the first container;
determining whether the first container is latency sensitive;
if the first container is latency sensitive, then transmitting the first data packet; and
if the first container is not latency sensitive, then queueing the first data packet for later transmission.
14. The computer-readable medium of claim 13, wherein the first container is a first virtual machine, and wherein the first data packet is received at a virtual NIC.
15. The computer-readable medium of claim 14, wherein the first data packet is received from an application executing in the first virtual machine and the processing of the packet handling interrupt comprises transmitting the first data packet to a virtual switch to which a virtual NIC that is associated with the first virtual machine is connected.
16. The computer-readable medium of claim 14, wherein the first data packet is received from a hypervisor executing on the host computer and the processing of the packet handling interrupt comprises transmitting the first data packet to an application executing in the first virtual machine.
17. The computer-readable medium of claim 15, wherein determining whether the first virtual machine is latency sensitive comprises:
reading a latency sensitivity indicator for the first virtual machine; and
determining whether the latency sensitivity indicator is a predetermined value.
18. The computer-readable medium of claim 17, wherein queuing the first data packet comprises storing the first data packet in a data structure associated with the virtual NIC that is configured to store a plurality of data packets.
19. The computer-readable medium of claim 18, further comprising:
determining whether a utilization value for one or more virtual processors of the first virtual machine is greater than a predetermined utilization value;
if the utilization value for the one or more virtual processors of the first virtual machine is greater than the predetermined utilization value, then queuing the first data packet and delaying processing of the packet handling interrupt; and
if the utilization value for the one or more virtual processors of the first virtual machine is not greater than the predetermined utilization value, then processing the packet handling interrupt.
20. A computing system, comprising:
a host computer, the host computer having a plurality of containers including a first container executing therein; and
a physical network interface controller (NIC), wherein the system is configured to perform a method of transmitting and receiving data packets to and from the first container, the method comprising:
detecting a packet handling interrupt upon receiving a first data packet associated with the first container;
determining whether the first container is latency sensitive;
if the first container is latency sensitive, then processing the packet handling interrupt;
if the first container is not latency sensitive, then:
queueing the first data packet; and
delaying processing of the packet handling interrupt.
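Claims 9 through 11 describe deferring transmission of a queued packet until either a time threshold or a queue-depth threshold is exceeded, and coalescing queued TCP packets into a single larger packet. The following minimal sketch models that behavior; every name in it (Packet, VnicTxQueue, the threshold values) is invented for illustration and is not taken from the patent or from any particular hypervisor.

```python
import time
from dataclasses import dataclass, field
from typing import List

@dataclass
class Packet:
    payload: bytes
    is_tcp: bool = True

@dataclass
class VnicTxQueue:
    """Illustrative transmit queue for a non-latency-sensitive virtual NIC."""
    max_delay_s: float = 0.004      # flush once the oldest packet has waited this long
    max_packets: int = 32           # ...or once more than this many packets are queued
    packets: List[Packet] = field(default_factory=list)
    first_enqueue_time: float = 0.0

    def enqueue(self, pkt: Packet) -> List[Packet]:
        if not self.packets:
            self.first_enqueue_time = time.monotonic()
        self.packets.append(pkt)
        return self.flush_if_ready()

    def flush_if_ready(self) -> List[Packet]:
        """Flush when the oldest packet has waited too long or the queue is over-full."""
        waited = time.monotonic() - self.first_enqueue_time
        if len(self.packets) > self.max_packets or waited > self.max_delay_s:
            return self.flush()
        return []

    def flush(self) -> List[Packet]:
        """Coalesce queued TCP payloads into one packet before handing them off."""
        tcp = [p for p in self.packets if p.is_tcp]
        other = [p for p in self.packets if not p.is_tcp]
        out = other
        if tcp:
            out = other + [Packet(payload=b"".join(p.payload for p in tcp))]
        self.packets = []
        return out
```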
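Claim 12 raises the interrupt rate on the physical NIC queues that serve a latency-sensitive virtual machine. A hypothetical sketch of that policy follows; the queue-to-VM map and the set_interrupt_rate_hz stub are assumptions standing in for whatever interrupt-moderation interface a real NIC driver exposes.

```python
# Hypothetical mapping of physical-NIC queue IDs to the VMs each queue serves.
QUEUE_TO_VMS = {0: {"vm-a"}, 1: {"vm-b", "vm-c"}}

DEFAULT_IRQ_RATE_HZ = 4_000        # heavier interrupt coalescing for ordinary VMs
LOW_LATENCY_IRQ_RATE_HZ = 16_000   # more frequent interrupts for latency-sensitive VMs

def set_interrupt_rate_hz(queue_id: int, rate_hz: int) -> None:
    # Stand-in for a driver/ioctl call that programs per-queue interrupt moderation.
    print(f"queue {queue_id}: interrupt rate set to {rate_hz} Hz")

def apply_latency_policy(vm_name: str, latency_sensitive: bool) -> None:
    """Raise the interrupt rate on every physical-NIC queue serving a
    latency-sensitive VM, so its packets see less coalescing delay."""
    for queue_id, vms in QUEUE_TO_VMS.items():
        if vm_name in vms:
            rate = LOW_LATENCY_IRQ_RATE_HZ if latency_sensitive else DEFAULT_IRQ_RATE_HZ
            set_interrupt_rate_hz(queue_id, rate)

apply_latency_policy("vm-a", latency_sensitive=True)
```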
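Claims 8 and 13 through 20 recite the core decision: when a packet handling interrupt fires, service it immediately for a latency-sensitive container, and otherwise queue the packet and defer the interrupt, optionally gating that deferral on virtual CPU utilization. The sketch below models that control flow; the LatencySensitivity enum, the 80% utilization threshold, and the handler names are illustrative assumptions, not values taken from the claims.

```python
from enum import Enum

class LatencySensitivity(Enum):
    NORMAL = 0
    HIGH = 1          # the "predetermined value" marking a latency-sensitive container

UTILIZATION_THRESHOLD = 0.80       # illustrative virtual CPU utilization cut-off

def process_interrupt(vm, packet) -> None:
    print(f"delivering {len(packet)}-byte packet for {vm['name']}")

def handle_packet_interrupt(vm, packet, deferred) -> None:
    """Decide whether to service a packet handling interrupt now or defer it."""
    if vm["latency_sensitivity"] is LatencySensitivity.HIGH:
        process_interrupt(vm, packet)       # latency sensitive: no batching or delay
    elif vm["vcpu_utilization"] > UTILIZATION_THRESHOLD:
        deferred.append(packet)             # busy vCPUs: queue the packet, defer the interrupt
    else:
        process_interrupt(vm, packet)       # idle enough: service the interrupt now

# Example: a latency-sensitive VM gets immediate delivery; an equally busy ordinary VM does not.
pending = []
handle_packet_interrupt(
    {"name": "rt-vm", "latency_sensitivity": LatencySensitivity.HIGH, "vcpu_utilization": 0.95},
    b"x" * 128, pending)
handle_packet_interrupt(
    {"name": "batch-vm", "latency_sensitivity": LatencySensitivity.NORMAL, "vcpu_utilization": 0.95},
    b"x" * 128, pending)
```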
US15/645,469 2013-08-26 2017-07-10 Networking stack of virtualization software configured to support latency sensitive virtual machines Active US10860356B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/645,469 US10860356B2 (en) 2013-08-26 2017-07-10 Networking stack of virtualization software configured to support latency sensitive virtual machines

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361870143P 2013-08-26 2013-08-26
US14/468,181 US9703589B2 (en) 2013-08-26 2014-08-25 Networking stack of virtualization software configured to support latency sensitive virtual machines
US15/645,469 US10860356B2 (en) 2013-08-26 2017-07-10 Networking stack of virtualization software configured to support latency sensitive virtual machines

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/468,181 Continuation US9703589B2 (en) 2013-08-26 2014-08-25 Networking stack of virtualization software configured to support latency sensitive virtual machines

Publications (2)

Publication Number Publication Date
US20170308394A1 true US20170308394A1 (en) 2017-10-26
US10860356B2 US10860356B2 (en) 2020-12-08

Family

ID=51535540

Family Applications (8)

Application Number Title Priority Date Filing Date
US14/468,181 Active 2035-02-18 US9703589B2 (en) 2013-08-26 2014-08-25 Networking stack of virtualization software configured to support latency sensitive virtual machines
US14/468,121 Active US9262198B2 (en) 2013-08-26 2014-08-25 CPU scheduler configured to support latency sensitive virtual machines
US14/468,122 Active 2034-09-19 US9317318B2 (en) 2013-08-26 2014-08-25 Virtual machine monitor configured to support latency sensitive virtual machines
US14/468,138 Active 2035-01-28 US9552216B2 (en) 2013-08-26 2014-08-25 Pass-through network interface controller configured to support latency sensitive virtual machines
US15/044,035 Active US9652280B2 (en) 2013-08-26 2016-02-15 CPU scheduler configured to support latency sensitive virtual machines
US15/097,035 Active US10073711B2 (en) 2013-08-26 2016-04-12 Virtual machine monitor configured to support latency sensitive virtual machines
US15/592,957 Active US10061610B2 (en) 2013-08-26 2017-05-11 CPU scheduler configured to support latency sensitive virtual machines
US15/645,469 Active US10860356B2 (en) 2013-08-26 2017-07-10 Networking stack of virtualization software configured to support latency sensitive virtual machines

Family Applications Before (7)

Application Number Title Priority Date Filing Date
US14/468,181 Active 2035-02-18 US9703589B2 (en) 2013-08-26 2014-08-25 Networking stack of virtualization software configured to support latency sensitive virtual machines
US14/468,121 Active US9262198B2 (en) 2013-08-26 2014-08-25 CPU scheduler configured to support latency sensitive virtual machines
US14/468,122 Active 2034-09-19 US9317318B2 (en) 2013-08-26 2014-08-25 Virtual machine monitor configured to support latency sensitive virtual machines
US14/468,138 Active 2035-01-28 US9552216B2 (en) 2013-08-26 2014-08-25 Pass-through network interface controller configured to support latency sensitive virtual machines
US15/044,035 Active US9652280B2 (en) 2013-08-26 2016-02-15 CPU scheduler configured to support latency sensitive virtual machines
US15/097,035 Active US10073711B2 (en) 2013-08-26 2016-04-12 Virtual machine monitor configured to support latency sensitive virtual machines
US15/592,957 Active US10061610B2 (en) 2013-08-26 2017-05-11 CPU scheduler configured to support latency sensitive virtual machines

Country Status (5)

Country Link
US (8) US9703589B2 (en)
EP (2) EP3039540B1 (en)
JP (2) JP6126312B2 (en)
AU (2) AU2014311463B2 (en)
WO (4) WO2015031277A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10904167B2 (en) * 2019-04-25 2021-01-26 Red Hat, Inc. Incoming packet processing for a computer system
WO2024172705A1 (en) * 2023-02-15 2024-08-22 Telefonaktiebolaget Lm Ericsson (Publ) Handling transmission of packets from virtualized environment hosted by tsn entity

Families Citing this family (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9703589B2 (en) 2013-08-26 2017-07-11 Vmware, Inc. Networking stack of virtualization software configured to support latency sensitive virtual machines
CN103473136B (en) * 2013-09-02 2017-06-13 华为技术有限公司 The resource allocation method and communication equipment of a kind of virtual machine
US9792152B2 (en) * 2013-12-20 2017-10-17 Red Hat Israel, Ltd. Hypervisor managed scheduling of virtual machines
US10289437B2 (en) 2014-01-07 2019-05-14 Red Hat Israel, Ltd. Idle processor management in virtualized systems via paravirtualization
US10365936B2 (en) * 2014-02-27 2019-07-30 Red Hat Israel, Ltd. Idle processor management by guest in virtualized systems
US9495192B2 (en) 2014-09-30 2016-11-15 Vmware, Inc. NUMA I/O aware network queue assignments
US9971620B2 (en) * 2014-10-15 2018-05-15 Keysight Technologies Singapore (Holdings) Pte Ltd Methods and systems for network packet impairment within virtual machine host systems
US9971619B2 (en) 2014-10-15 2018-05-15 Keysight Technologies Singapore (Holdings) Pte Ltd Methods and systems for forwarding network packets within virtual machine host systems
US10387178B2 (en) * 2014-10-29 2019-08-20 Red Hat Israel, Ltd. Idle based latency reduction for coalesced interrupts
TWI574158B (en) * 2014-12-01 2017-03-11 旺宏電子股份有限公司 Data processing method and system with application-level information awareness
US20160173600A1 (en) * 2014-12-15 2016-06-16 Cisco Technology, Inc. Programmable processing engine for a virtual interface controller
US10320921B2 (en) 2014-12-17 2019-06-11 Vmware, Inc. Specializing virtual network device processing to bypass forwarding elements for high packet rate applications
US9699060B2 (en) * 2014-12-17 2017-07-04 Vmware, Inc. Specializing virtual network device processing to avoid interrupt processing for high packet rate applications
US9778957B2 (en) * 2015-03-31 2017-10-03 Stitch Fix, Inc. Systems and methods for intelligently distributing tasks received from clients among a plurality of worker resources
JP6488910B2 (en) * 2015-06-24 2019-03-27 富士通株式会社 Control method, control program, and information processing apparatus
US9772792B1 (en) * 2015-06-26 2017-09-26 EMC IP Holding Company LLC Coordinated resource allocation between container groups and storage groups
US10002016B2 (en) * 2015-07-23 2018-06-19 Red Hat, Inc. Configuration of virtual machines in view of response time constraints
US9942131B2 (en) * 2015-07-29 2018-04-10 International Business Machines Corporation Multipathing using flow tunneling through bound overlay virtual machines
US9667725B1 (en) 2015-08-06 2017-05-30 EMC IP Holding Company LLC Provisioning isolated storage resource portions for respective containers in multi-tenant environments
US10356012B2 (en) 2015-08-20 2019-07-16 Intel Corporation Techniques for routing packets among virtual machines
US10146936B1 (en) 2015-11-12 2018-12-04 EMC IP Holding Company LLC Intrusion detection for storage resources provisioned to containers in multi-tenant environments
US10261782B2 (en) 2015-12-18 2019-04-16 Amazon Technologies, Inc. Software container registry service
US10032032B2 (en) 2015-12-18 2018-07-24 Amazon Technologies, Inc. Software container registry inspection
US10002247B2 (en) * 2015-12-18 2018-06-19 Amazon Technologies, Inc. Software container registry container image deployment
KR101809528B1 (en) * 2015-12-30 2017-12-15 무진기공주식회사 Reactor using for Bio-diesel manufacturing
US10713195B2 (en) 2016-01-15 2020-07-14 Intel Corporation Interrupts between virtual machines
US9569277B1 (en) 2016-01-29 2017-02-14 International Business Machines Corporation Rebalancing virtual resources for virtual machines based on multiple resource capacities
US9983909B1 (en) 2016-03-15 2018-05-29 EMC IP Holding Company LLC Converged infrastructure platform comprising middleware preconfigured to support containerized workloads
US10326744B1 (en) 2016-03-21 2019-06-18 EMC IP Holding Company LLC Security layer for containers in multi-tenant environments
US11221875B2 (en) * 2016-03-31 2022-01-11 Intel Corporation Cooperative scheduling of virtual machines
US10552205B2 (en) * 2016-04-02 2020-02-04 Intel Corporation Work conserving, load balancing, and scheduling
US10013213B2 (en) 2016-04-22 2018-07-03 EMC IP Holding Company LLC Container migration utilizing state storage of partitioned storage volume
CN106027643B (en) * 2016-05-18 2018-10-23 无锡华云数据技术服务有限公司 A kind of resource regulating method based on Kubernetes container cluster management systems
US20180004452A1 (en) * 2016-06-30 2018-01-04 Intel Corporation Technologies for providing dynamically managed quality of service in a distributed storage system
US10176007B2 (en) * 2016-08-30 2019-01-08 Red Hat Israel, Ltd. Guest code emulation by virtual machine function
JP6511025B2 (en) * 2016-09-02 2019-05-08 日本電信電話株式会社 Resource allocation apparatus, resource allocation method and resource allocation program
US20180088977A1 (en) * 2016-09-28 2018-03-29 Mark Gray Techniques to determine and mitigate latency in virtual environments
US10452572B2 (en) 2016-10-06 2019-10-22 Vmware, Inc. Automatic system service resource management for virtualizing low-latency workloads that are input/output intensive
US10733591B2 (en) * 2016-10-11 2020-08-04 International Business Machines Corporation Tiered model for event-based serverless computing
CN108023837B (en) * 2016-10-31 2020-11-20 鸿富锦精密电子(天津)有限公司 Virtual network switch system and establishing method thereof
US10284557B1 (en) 2016-11-17 2019-05-07 EMC IP Holding Company LLC Secure data proxy for cloud computing environments
US10417049B2 (en) 2016-11-28 2019-09-17 Amazon Technologies, Inc. Intra-code communication in a localized device coordinator
US10452439B2 (en) 2016-11-28 2019-10-22 Amazon Technologies, Inc. On-demand code execution in a localized device coordinator
US10637817B2 (en) 2016-11-28 2020-04-28 Amazon Technologies, Inc. Managing messaging protocol communications
EP3545414A1 (en) * 2016-11-28 2019-10-02 Amazon Technologies Inc. On-demand code execution in a localized device coordinator
US10783016B2 (en) 2016-11-28 2020-09-22 Amazon Technologies, Inc. Remote invocation of code execution in a localized device coordinator
US10608973B2 (en) 2016-11-28 2020-03-31 Amazon Technologies, Inc. Embedded codes in messaging protocol communications
US10216540B2 (en) 2016-11-28 2019-02-26 Amazon Technologies, Inc. Localized device coordinator with on-demand code execution capabilities
US10372486B2 (en) 2016-11-28 2019-08-06 Amazon Technologies, Inc. Localized device coordinator
US10055248B1 (en) 2017-02-22 2018-08-21 Red Hat, Inc. Virtual processor scheduling via memory monitoring
US10310887B2 (en) 2017-02-22 2019-06-04 Red Hat, Inc. CPU overcommit with guest idle polling
US11128437B1 (en) 2017-03-30 2021-09-21 EMC IP Holding Company LLC Distributed ledger for peer-to-peer cloud resource sharing
US10956193B2 (en) * 2017-03-31 2021-03-23 Microsoft Technology Licensing, Llc Hypervisor virtual processor execution with extra-hypervisor scheduling
US10402341B2 (en) 2017-05-10 2019-09-03 Red Hat Israel, Ltd. Kernel-assisted inter-process data transfer
US11055133B2 (en) 2017-05-26 2021-07-06 Red Hat, Inc. Node-local-unscheduler for scheduling remediation
US10437308B2 (en) 2017-06-05 2019-10-08 Red Hat, Inc. Predictive virtual machine halt
CN109144844B (en) 2017-06-27 2023-01-31 阿里巴巴集团控股有限公司 Tracking method, device, equipment and machine readable medium
US10645093B2 (en) * 2017-07-11 2020-05-05 Nicira, Inc. Reduction in secure protocol overhead when transferring packets between hosts
US10394603B2 (en) * 2017-07-28 2019-08-27 Genband Us Llc Virtual container processing on high performance computing processors
US11295382B2 (en) 2017-09-12 2022-04-05 Mark Gimple System and method for global trading exchange
US10474392B2 (en) * 2017-09-19 2019-11-12 Microsoft Technology Licensing, Llc Dynamic scheduling for virtual storage devices
CN109522101B (en) * 2017-09-20 2023-11-14 三星电子株式会社 Method, system and/or apparatus for scheduling multiple operating system tasks
US10810038B2 (en) 2017-09-22 2020-10-20 International Business Machines Corporation Accounting and enforcing non-process execution by container-based software receiving data over a network
US10545786B2 (en) 2017-09-22 2020-01-28 International Business Machines Corporation Accounting and enforcing non-process execution by container-based software transmitting data over a network
US10630642B2 (en) 2017-10-06 2020-04-21 Stealthpath, Inc. Methods for internet communication security
US10367811B2 (en) 2017-10-06 2019-07-30 Stealthpath, Inc. Methods for internet communication security
US10361859B2 (en) 2017-10-06 2019-07-23 Stealthpath, Inc. Methods for internet communication security
US10374803B2 (en) 2017-10-06 2019-08-06 Stealthpath, Inc. Methods for internet communication security
US10375019B2 (en) 2017-10-06 2019-08-06 Stealthpath, Inc. Methods for internet communication security
US10397186B2 (en) 2017-10-06 2019-08-27 Stealthpath, Inc. Methods for internet communication security
US11159627B1 (en) 2017-10-20 2021-10-26 Parallels International Gmbh Seamless remote network redirection
US10581636B1 (en) * 2017-10-20 2020-03-03 Parallels International Gmbh Network tunneling for virtual machines across a wide-area network
US10966073B2 (en) 2017-11-22 2021-03-30 Charter Communications Operating, Llc Apparatus and methods for premises device existence and capability determination
CN108196958B (en) * 2017-12-29 2020-09-29 北京泽塔云科技股份有限公司 Resource scheduling and distributing method, computer system and super-fusion architecture system
CN108279979B (en) * 2018-01-19 2021-02-19 聚好看科技股份有限公司 Method and device for binding CPU for application program container
US11063745B1 (en) 2018-02-13 2021-07-13 EMC IP Holding Company LLC Distributed ledger for multi-cloud service automation
CN108897622A (en) * 2018-06-29 2018-11-27 郑州云海信息技术有限公司 A kind of dispatching method and relevant apparatus of task run
CN109117265A (en) * 2018-07-12 2019-01-01 北京百度网讯科技有限公司 The method, apparatus, equipment and storage medium of schedule job in the cluster
US11036555B2 (en) 2018-07-25 2021-06-15 Vmware, Inc. Virtual processor allocation with execution guarantee
US10691495B2 (en) * 2018-07-25 2020-06-23 Vmware, Inc. Virtual processor allocation with execution guarantee
US10708082B1 (en) * 2018-08-31 2020-07-07 Juniper Networks, Inc. Unified control plane for nested clusters in a virtualized computing infrastructure
CN109343947A (en) * 2018-09-26 2019-02-15 郑州云海信息技术有限公司 A kind of resource regulating method and device
US11200331B1 (en) 2018-11-21 2021-12-14 Amazon Technologies, Inc. Management of protected data in a localized device coordinator
GB2570991B (en) * 2018-12-14 2020-04-22 Lendinvest Ltd Instruction allocation and processing system and method
JP7151530B2 (en) * 2019-02-13 2022-10-12 日本電信電話株式会社 Server infrastructure and physical CPU allocation program
US11129171B2 (en) 2019-02-27 2021-09-21 Charter Communications Operating, Llc Methods and apparatus for wireless signal maximization and management in a quasi-licensed wireless system
US11372654B1 (en) 2019-03-25 2022-06-28 Amazon Technologies, Inc. Remote filesystem permissions management for on-demand code execution
US11374779B2 (en) 2019-06-30 2022-06-28 Charter Communications Operating, Llc Wireless enabled distributed data apparatus and methods
US12026286B2 (en) 2019-07-10 2024-07-02 Hewlett-Packard Development Company, L.P. Executing containers during idle states
US11182222B2 (en) * 2019-07-26 2021-11-23 Charter Communications Operating, Llc Methods and apparatus for multi-processor device software development and operation
US11368552B2 (en) 2019-09-17 2022-06-21 Charter Communications Operating, Llc Methods and apparatus for supporting platform and application development and operation
CN112631744A (en) * 2019-09-24 2021-04-09 阿里巴巴集团控股有限公司 Process processing method and device, electronic equipment and computer readable storage medium
US11558423B2 (en) 2019-09-27 2023-01-17 Stealthpath, Inc. Methods for zero trust security with high quality of service
US11026205B2 (en) 2019-10-23 2021-06-01 Charter Communications Operating, Llc Methods and apparatus for device registration in a quasi-licensed wireless system
US11144419B2 (en) 2019-10-25 2021-10-12 Red Hat, Inc. Controlled use of a memory monitor instruction and memory wait instruction in a virtualized environment
US11457485B2 (en) 2019-11-06 2022-09-27 Charter Communications Operating, Llc Methods and apparatus for enhancing coverage in quasi-licensed wireless systems
US11347558B2 (en) 2019-12-09 2022-05-31 Nutanix, Inc. Security-aware scheduling of virtual machines in a multi-tenant infrastructure
CN111107100B (en) * 2019-12-30 2022-03-01 杭州迪普科技股份有限公司 Equipment for transmitting industrial protocol flow message
US11363466B2 (en) 2020-01-22 2022-06-14 Charter Communications Operating, Llc Methods and apparatus for antenna optimization in a quasi-licensed wireless system
US11074202B1 (en) 2020-02-26 2021-07-27 Red Hat, Inc. Efficient management of bus bandwidth for multiple drivers
CN111427669A (en) * 2020-04-27 2020-07-17 安谋科技(中国)有限公司 Method, apparatus, medium, and system for managing virtual machines on computer device
CN111769910B (en) * 2020-06-28 2023-04-18 网宿科技股份有限公司 Data transmission method and device
US12089240B2 (en) 2020-07-06 2024-09-10 Charter Communications Operating, Llc Methods and apparatus for access node selection and link optimization in quasi-licensed wireless systems
CN111831398A (en) * 2020-07-20 2020-10-27 平安科技(深圳)有限公司 Virtual machine creation and CPU resource allocation method, device and equipment
US12014197B2 (en) * 2020-07-21 2024-06-18 VMware LLC Offloading packet processing programs from virtual machines to a hypervisor and efficiently executing the offloaded packet processing programs
US11429424B2 (en) * 2020-07-22 2022-08-30 Vmware, Inc. Fine-grained application-aware latency optimization for virtual machines at runtime
US11656100B2 (en) 2020-10-08 2023-05-23 Pulse Innovation Labs, Inc. Angular displacement sensor
CN112667364B (en) * 2021-01-05 2022-07-01 烽火通信科技股份有限公司 Virtual mixed deployment method, device, equipment and storage medium for bound core and non-bound core
JP7485102B2 (en) * 2021-02-12 2024-05-16 日本電信電話株式会社 RESOURCE ALLOCATION UPDATE DEVICE, RESOURCE ALLOCATION UPDATE METHOD, PROGRAM, AND VIRTUAL MACHINE/CONTAINER CONTROL SYSTEM
US20220286915A1 (en) * 2021-03-05 2022-09-08 Vmware, Inc. Distributed ric
CN113282525B (en) * 2021-05-27 2023-03-28 杭州迪普科技股份有限公司 Message distribution method and device
US11595321B2 (en) 2021-07-06 2023-02-28 Vmware, Inc. Cluster capacity management for hyper converged infrastructure updates
DE112021008039T5 (en) * 2021-07-28 2024-05-23 Mitsubishi Electric Corporation INFORMATION PROCESSING DEVICE
EP4396690A1 (en) * 2021-09-03 2024-07-10 Groq, Inc. Scale computing in deterministic cloud environments
CN113553164B (en) * 2021-09-17 2022-02-25 统信软件技术有限公司 Process migration method, computing device and storage medium
US11716378B2 (en) 2021-09-28 2023-08-01 Red Hat, Inc. Optimized network device queue management for hybrid cloud networking workloads
CN114327814A (en) * 2021-12-09 2022-04-12 阿里巴巴(中国)有限公司 Task scheduling method, virtual machine, physical host and storage medium

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7065762B1 (en) 1999-03-22 2006-06-20 Cisco Technology, Inc. Method, apparatus and computer program product for borrowed-virtual-time scheduling
US7236459B1 (en) * 2002-05-06 2007-06-26 Packeteer, Inc. Method and apparatus for controlling data transmission volume using explicit rate control and queuing without data rate supervision
US7765543B1 (en) * 2003-12-17 2010-07-27 Vmware, Inc. Selective descheduling of idling guests running on a host computer system
US7626988B2 (en) * 2004-06-09 2009-12-01 Futurewei Technologies, Inc. Latency-based scheduling and dropping
US8005022B2 (en) 2006-07-20 2011-08-23 Oracle America, Inc. Host operating system bypass for packets destined for a virtual machine
US7826468B2 (en) * 2006-08-04 2010-11-02 Fujitsu Limited System and method for bypassing an output queue structure of a switch
US8458366B2 (en) * 2007-09-27 2013-06-04 Oracle America, Inc. Method and system for onloading network services
US20100106874A1 (en) 2008-10-28 2010-04-29 Charles Dominguez Packet Filter Optimization For Network Interfaces
JP2010122805A (en) 2008-11-18 2010-06-03 Hitachi Ltd Virtual server system, physical cpu and method for allocating physical memory
JP4871948B2 (en) * 2008-12-02 2012-02-08 株式会社日立製作所 Virtual computer system, hypervisor in virtual computer system, and scheduling method in virtual computer system
US8719823B2 (en) * 2009-03-04 2014-05-06 Vmware, Inc. Managing latency introduced by virtualization
US8478924B2 (en) * 2009-04-24 2013-07-02 Vmware, Inc. Interrupt coalescing for outstanding input/output completions
US8194670B2 (en) * 2009-06-30 2012-06-05 Oracle America, Inc. Upper layer based dynamic hardware transmit descriptor reclaiming
JP2011018136A (en) * 2009-07-07 2011-01-27 Fuji Xerox Co Ltd Image processing apparatus and program
US8245234B2 (en) * 2009-08-10 2012-08-14 Avaya Inc. Credit scheduler for ordering the execution of tasks
US8364997B2 (en) 2009-12-22 2013-01-29 Intel Corporation Virtual-CPU based frequency and voltage scaling
CN102667725B (en) * 2010-01-13 2015-09-16 马维尔以色列(M.I.S.L.)有限公司 For the hardware virtualization of media processing
US20110197004A1 (en) 2010-02-05 2011-08-11 Serebrin Benjamin C Processor Configured to Virtualize Guest Local Interrupt Controller
US8312463B2 (en) * 2010-03-30 2012-11-13 Microsoft Corporation Resource management in computing scenarios
JP5770721B2 (en) * 2010-05-24 2015-08-26 パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America Information processing system
US8533713B2 (en) 2011-03-29 2013-09-10 Intel Corporation Efficient migration of virtual functions to enable high availability and resource rebalance
WO2012151392A1 (en) * 2011-05-04 2012-11-08 Citrix Systems, Inc. Systems and methods for sr-iov pass-thru via an intermediary device
JP5624084B2 (en) * 2012-06-04 2014-11-12 株式会社日立製作所 Computer, virtualization mechanism, and scheduling method
US8943252B2 (en) 2012-08-16 2015-01-27 Microsoft Corporation Latency sensitive software interrupt and thread scheduling
US9317310B2 (en) 2013-01-31 2016-04-19 Broadcom Corporation Systems and methods for handling virtual machine packets
US9720717B2 (en) * 2013-03-14 2017-08-01 Sandisk Technologies Llc Virtualization support for storage devices
US9703589B2 (en) 2013-08-26 2017-07-11 Vmware, Inc. Networking stack of virtualization software configured to support latency sensitive virtual machines

Also Published As

Publication number Publication date
US20160224370A1 (en) 2016-08-04
US9262198B2 (en) 2016-02-16
WO2015031272A1 (en) 2015-03-05
US9552216B2 (en) 2017-01-24
JP2016529614A (en) 2016-09-23
WO2015031274A1 (en) 2015-03-05
US20150055499A1 (en) 2015-02-26
EP3039539B1 (en) 2019-02-20
US10073711B2 (en) 2018-09-11
US9317318B2 (en) 2016-04-19
AU2014311461A1 (en) 2016-02-18
EP3039539A1 (en) 2016-07-06
US20170249186A1 (en) 2017-08-31
EP3039540B1 (en) 2021-08-11
AU2014311461B2 (en) 2017-02-16
US9703589B2 (en) 2017-07-11
US9652280B2 (en) 2017-05-16
JP6126312B2 (en) 2017-05-10
US20160162336A1 (en) 2016-06-09
AU2014311463B2 (en) 2017-02-16
US20150058846A1 (en) 2015-02-26
JP6126311B2 (en) 2017-05-10
WO2015031277A1 (en) 2015-03-05
WO2015031279A1 (en) 2015-03-05
US10061610B2 (en) 2018-08-28
AU2014311463A1 (en) 2016-02-18
US20150058847A1 (en) 2015-02-26
US20150058861A1 (en) 2015-02-26
JP2016529613A (en) 2016-09-23
US10860356B2 (en) 2020-12-08
EP3039540A1 (en) 2016-07-06

Similar Documents

Publication Publication Date Title
US10860356B2 (en) Networking stack of virtualization software configured to support latency sensitive virtual machines
US9804904B2 (en) High-performance virtual machine networking
Ram et al. Hyper-Switch: A Scalable Software Virtual Switching Architecture
US8392623B2 (en) Guest/hypervisor interrupt coalescing for storage adapter virtual function in guest passthrough mode
Liu et al. High Performance VMM-Bypass I/O in Virtual Machines.
EP2122474B1 (en) Optimized interrupt delivery in a virtualized environment
US9871734B2 (en) Prioritized handling of incoming packets by a network interface controller
US8601496B2 (en) Method and system for protocol offload in paravirtualized systems
US9019826B2 (en) Hierarchical allocation of network bandwidth for quality of service
US20070168525A1 (en) Method for improved virtual adapter performance using multiple virtual interrupts
US8924501B2 (en) Application-driven shared device queue polling
US20190042151A1 (en) Hybrid framework of nvme-based storage system in cloud computing environment
Garzarella et al. Virtual device passthrough for high speed VM networking
CN114490499A (en) System, apparatus and method for streaming input/output data
Bourguiba et al. Packet aggregation based network I/O virtualization for cloud computing
Li et al. Prioritizing soft real-time network traffic in virtualized hosts based on xen
Chang et al. Virtualization technology for TCP/IP offload engine
Li et al. Virtualization-aware traffic control for soft real-time network traffic on Xen
Inoue et al. Low-latency and high bandwidth TCP/IP protocol processing through an integrated HW/SW approach
Dittia et al. DMA Mechanisms for High Performance Network Interfaces

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: VMWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHENG, HAOQIANG;SINGARAVELU, LENIN;AGARWAL, SHILPI;AND OTHERS;SIGNING DATES FROM 20140905 TO 20151228;REEL/FRAME:053512/0715

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VMWARE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:067103/0030

Effective date: 20231121

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4