US20200019429A1 - Hot-plugging of virtual functions in a virtualized environment - Google Patents

Hot-plugging of virtual functions in a virtualized environment

Info

Publication number
US20200019429A1
Authority
US
United States
Prior art keywords
virtual
logical network
virtual machine
computer system
host
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/579,519
Other versions
US11061712B2 (en)
Inventor
Alona Kaplan
Michael Kolesnik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Red Hat Israel Ltd
Original Assignee
Red Hat Israel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Red Hat Israel Ltd filed Critical Red Hat Israel Ltd
Priority to US16/579,519
Assigned to RED HAT ISRAEL, LTD. Assignment of assignors interest (see document for details). Assignors: KAPLAN, ALONA; KOLESNIK, MICHAEL
Publication of US20200019429A1
Application granted
Publication of US11061712B2
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Definitions

  • the disclosure is generally related to network devices, and is more specifically related to hot-plugging of virtual functions in a virtualized environment.
  • a virtual machine is a portion of software that, when executed on appropriate hardware, creates an environment allowing for the virtualization of an actual physical computer system (e.g., a server, a mainframe computer, etc.).
  • the actual physical computer system is typically referred to as a “host machine,” and the operating system (OS) of the host machine is typically referred to as the “host operating system.”
  • the operating system (OS) of the virtual machine is typically referred to as the “guest operating system.”
  • FIG. 1 depicts a block diagram of an example of networked system architecture for hot-plugging of virtual functions in a virtualized environment in accordance with one or more aspects of the disclosure.
  • FIG. 2 illustrates an example management computing system including a memory for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 3 depicts another view of the example networked system architecture of FIG. 1 in accordance with one or more aspects of the disclosure.
  • FIG. 4 depicts a flow diagram of a method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 5 depicts a flow diagram of another method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 6 depicts a block diagram of an illustrative computing device operating in accordance with the examples of the disclosure.
  • a virtual machine may comprise one or more “virtual devices,” each of which may be associated with a physical device (e.g., a network interface device, an I/O device such as a CD-ROM drive, a disk array, etc.) of a host machine.
  • a virtualization management system, or “virtualization manager,” can manage the allocation of the host resources to the VMs, monitor the status of the VMs, as well as the progress of commands and processes being executed by the VMs, and generally manage operations in the system.
  • a network interface controller is a network hardware device that connects the host machine to a network.
  • the NIC may include electronic circuitry used to communicate via a physical layer and data link layer standard, such as Ethernet, Fibre Channel, Wi-Fi, or Token Ring, to name a few examples.
  • the NIC may implement the Open Systems Interconnection (OSI) layer 1 (physical layer) and OSI layer 2 (data link layer) standards, thus providing physical access to a networking medium and a low-level addressing system using media access control (MAC) addresses, in order to allow computing devices to communicate over a wired or wireless network.
  • Single Root I/O Virtualization is a virtualization networking specification that allows a physical device, such as a NIC, to isolate access to its PCI Express (PCIe) hardware resources for manageability and performance reasons.
  • the SR-IOV allows different VMs in a virtual environment to share a single PCI Express hardware interface without interfering with each other.
  • These hardware resources may include physical functions and virtual functions.
  • Physical functions (PFs) are full-featured PCIe devices that include all configuration resources and SR-IOV capabilities for the device.
  • Virtual functions (VFs) are "lightweight" PCIe functions that contain the resources necessary for data movement, but have a minimized set of configuration resources.
  • a physical NIC on a host machine that is configured based on SR-IOV specifications can enable network traffic to flow directly between the VMs that share the same host and VFs of the SR-IOV-enabled NIC.
  • a SR-IOV-enabled NIC may be referred to as supporting a pass-through mode for assigning I/O devices to VMs.
  • a virtual device, such as a virtual NIC (vNIC), associated with a virtual machine can be connected directly (e.g., as opposed to being connected via the virtualization layer of the host machine) to a virtual function using the SR-IOV. This may reduce data transfer latency between the VM and the physical device, because the virtualization layer of the host machine is bypassed, and may lower the CPU utilization devoted to data packet transfers.
  • the virtualization manager may create a VM with SR-IOV virtual function capabilities (e.g., assign a virtual function to the vNIC of the VM).
  • the VM may be used to run client applications.
  • the virtualization manager may not be able to identify an available VF of a SR-IOV enabled NIC associated with a certain network for assigning to the VM.
  • all current VFs associated with a host system may be already attached to other VMs.
  • the virtualization manager may generate an error or other types of indicators that there are no available VFs on the system, which can block or otherwise adversely affect the client applications that intended to use the VM.
  • aspects of the present disclosure can provide the ability for a virtualization manager to hot-plug an SR-IOV virtual function for a VM when no VFs of a physical device are currently available. More particularly, a computer system may detect that a host system does not have any available VFs of SR-IOV NICs associated with a certain network and provide various alternative techniques to provide the capabilities of a VF on the host system. In accordance with the disclosure, the computer system may hot-plug the SR-IOV virtual function by identifying or otherwise creating a new VF to associate with the VM on the fly and without stopping the VM.
  • these alternative techniques may be selected by the system in response to the virtualization manager determining that there are no VFs currently available, based on, for example, system configuration data, user service level agreements (SLAs), or other types of system performance data, or the techniques may be provided for selection by a user, such as an administrator, via an interface of the virtualization manager.
  • the computer system of the disclosure may create a new VF by increasing the number of VFs associated with the host system.
  • the computer system may compare the number of currently configured VFs for a SR-IOV NIC associated with the host system to a threshold number of VFs that the SR-IOV NIC can support.
  • the SR-IOV NIC may be configured to support only a certain threshold number of a maximum possible number of VFs for the device to achieve a certain performance level. If this threshold number for the SR-IOV NIC is not met, then the computer system may increase the number of currently configured VFs for that SR-IOV NIC to support the new VF.
  • the VM that needs the SR-IOV virtual function may be migrated to another host machine that has an available VF on the specified network.
  • the computer system may scan the network to discover another host that has a free VF and is capable of meeting the running requirements, such as system memory capacity, processor speed, etc., for executing the VM. Thereupon, the computer system issues a command to "live" migrate the VM to the identified host.
  • the VM may be coupled to a virtual bridge if there is no available VF on the network or if there is no available logical device on that network.
  • the virtual bridge may provide an Open Systems Interconnection (OSI) model layer 2 (data link layer) connectivity between two or more network segments.
  • the virtual bridge may be used to provide network connectivity to the VM.
  • the computer system may identify an available VF on another logical device on the network (or another logical device on which a new VF may be created). Then, the computer system may couple the VM to the virtual bridge to provide connectivity to the other logical device, thereby providing the requisite network connectivity to the VM.
  • if a VF later becomes available on the host associated with the VM, the computer system decouples the VM from the bridge and associates it with the newly available VF on the host. Still further, other options for hot-plugging VFs are possible by utilizing the techniques disclosed herein.
  • FIG. 1 is an example of networked system architecture 100 in which implementations of the disclosure can be implemented.
  • other architectures for the network architecture 100 are possible for implementing the techniques of the disclosure and are not necessarily limited to the specific architecture depicted by FIG. 1 .
  • the network architecture 100 includes a host controller 110 coupled to one or more host servers, such as host server 120 , over a network 125 .
  • the network 125 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof.
  • Host controller 110 may be an independent machine that includes one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases, etc.), networks, software components, and/or hardware components.
  • the host controller 110 may be part of the host server 120 .
  • the host controller 110 may include a virtualization manager 115 to manage the allocation of resources associated with the one or more host servers.
  • host server 120 may be part of a virtualization system.
  • Virtualization may be viewed as abstraction of some physical components into logical objects in order to allow running various software modules, for example, multiple operating systems, concurrently and in isolation from other software modules, on one or more interconnected physical computer systems. Virtualization allows, for example, consolidating multiple physical servers into one physical server running multiple VMs in order to improve the hardware utilization rate.
  • virtualization manager 115 may manage provisioning of a new VM, connection protocols between clients and VMs, user sessions (e.g., user authentication and verification, etc.), backup and restore, image management, virtual machine migration, load balancing, and so on.
  • the virtualization manager 115 may add a VM, delete a VM, balance the load on the host cluster, provide directory service to the VMs, and/or perform other management functions.
  • Host server 120 may comprise server computers or any other computing devices capable of running one or more virtual machines (VM) 130 A through 130 N.
  • Each VM may be a software implementation of a machine that executes programs as though it were an actual physical machine.
  • Each VM runs a guest operating system (OS) (not pictured) that may be different from one virtual machine to another.
  • the guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc.
  • Each VM may be linked to one or more virtual disks (not shown). These virtual disks can be logical partitions of a physical disk managed by hypervisor 140 , can be cloud based storage devices, or can be some other type of virtual storage device.
  • virtual disks may form a whole or part of a logical data center.
  • the VMs ( 130 A-N) and virtual disks, together with host server 120 may be collectively referred to as entities in a virtual machine system.
  • the host server 120 may comprise a hypervisor 140 that emulates the underlying hardware platform for the VMs ( 130 A-N).
  • the hypervisor 140 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor.
  • Hypervisor 140 may take many forms. For example, hypervisor 140 may be part of or incorporated in a host operating system (not shown) of host server 120 , or hypervisor 140 may be running on top of the host operating system. Alternatively, hypervisor 140 may be a "bare metal" hypervisor that runs on hardware of host server 120 without an intervening operating system.
  • examples of hypervisors include the quick emulator (QEMU), the kernel-based virtual machine (KVM), a virtual machine monitor (VMM), etc.
  • the hypervisor 140 may execute an agent 145 or another type of process that monitors the status of devices running on host server 120 .
  • agent 145 can monitor a runtime status of VMs ( 130 A-N), hardware configuration of devices, network and storage connectivity on the host server 120 , and similar VM-related and host-related information.
  • the agent 145 may store this information for later use. For example, Agent 145 may save this information in a local memory space. Alternatively, agent 145 may save the information to a data store 129 accessible by the virtualization manager 115 .
  • the data store 129 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data.
  • agent 145 can send, receive and store information regarding the VMs ( 130 A-N) via an interface 119 of the virtualization manager 115 .
  • Agent 145 may additionally provide and store information upon request from virtualization manager 115 relating to one or more network devices associated with the network architecture 100 , which may or may not support virtual function capabilities, as further discussed below.
  • the networked system architecture 100 may comprise one or more devices, such as devices 150 A and 150 B (e.g., a network interface device, an I/O device such as a CD-ROM drive, a disk array, etc.), which are available on the host server 120 .
  • the hypervisor 140 may use the devices, such as device 150 A, to implement virtual networking connectivity for the VMs ( 130 A-N), which allows the VMs to connect with network 125 .
  • device 150 A may be a single-root I/O virtualization (SR-IOV)-enabled NIC that enables network traffic to flow directly between VMs ( 130 A-N) and the device 150 A.
  • Hypervisor 140 may support the SR-IOV specification, which allows two or more VMs ( 130 A-N) to share a single physical device (e.g., device 150 A).
  • Hypervisor 140 may include a SR-IOV component interface 149 that provides SR-IOV specification support.
  • Virtual networking with an SR-IOV-enabled NIC may be referred to as supporting a pass-through mode for assigning I/O devices, such as SR-IOV NIC 150 -A, to VMs 130 A- 130 N.
  • the SR-IOV specification may indicate certain physical functions (PFs) 155 and virtual functions (VFs) 158 to be utilized by the VMs.
  • PFs 155 are full-featured Peripheral Component Interconnect Express (PCIe) devices that may include all configuration resources and capabilities for the I/O device.
  • VFs 158 are “lightweight” PCIe functions that contain the resources for data movement, but may have a minimized set of configuration resources. Each VF is derived from a corresponding PF.
  • a plurality of VFs may be supported by a given logical device; the number of VFs (e.g., VFs 158 ) supported by a given device (e.g., device 150 A) may vary.
  • a single Ethernet port may be mapped to multiple VFs that can be shared by one or more of the VMs 130 A- 130 N.
  • An I/O device such as a virtual NIC device (vNIC) 135 , associated with one of the VMs 130 A- 130 N may be provided via a VF, thus bypassing the virtual networking on the host in order to reduce the latency between the VMs 130 A- 130 N and the underlying SR-IOV NIC (e.g., device 150 A).
  • the SR-IOV interface component 149 of hypervisor 140 is used to detect and initialize PFs and VFs correctly and appropriately.
  • hypervisor 140 may support a “pass-through” mode for assigning one or more VFs 158 to the VMs 130 A- 130 N, by utilizing SR-IOV interface component 149 .
  • the SR-IOV interface component 149 may be used to map the configuration space of the VFs 158 to the guest memory address range associated with the VMs 130 A- 130 N.
  • each VF may be assigned to a single one of the VMs 130 A- 130 N, as VFs utilize real hardware resources.
  • a VM 130 A- 130 N may have multiple VFs assigned to it.
  • a VF appears as a network card on the VM in the same way as a normal network card would appear to an operating system.
  • VFs may exhibit near-native performance and thus may provide better performance than para-virtualized drivers and emulated access. VFs may further provide data protection between VMs 130 A- 130 N on the same physical server, as the data is managed and controlled by the hardware.
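  • as a minimal illustration of this kind of VF pass-through assignment, the sketch below hot-plugs a VF into a running VM using the libvirt Python bindings; the domain name, PCI address, and VLAN tag are hypothetical placeholders.

```python
# Minimal sketch: hot-plugging an SR-IOV VF into a running VM as a
# pass-through vNIC via the libvirt Python bindings. The domain name,
# PCI address of the VF, and VLAN tag below are hypothetical.
import libvirt

VF_INTERFACE_XML = """
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x2'/>
  </source>
  <vlan>
    <tag id='100'/>
  </vlan>
</interface>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("vm-130a")         # hypothetical VM name
# Attach the VF to the live guest; it appears to the guest OS as a NIC.
dom.attachDeviceFlags(VF_INTERFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```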
  • the virtualization manager 115 can receive a request to create a virtual machine with SR-IOV virtual function capabilities of a device.
  • the user may use interface 119 to mark vNIC 135 associated with VM 130 A as an SR-IOV virtual function “pass-through” device.
  • the request may comprise an identifier for a selected logical network to be associated with the vNIC 135 and an SR-IOV virtual function capability.
  • the identifier for the selected logical network may be a unique identifier (e.g., virtual local area network (VLAN) tag, network alias, or the like) that identifies a particular logical network available to the virtualization manager.
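  • for illustration, such a request might carry information along the lines of the Python sketch below; the field names and values are hypothetical.

```python
# Illustrative sketch of a request to configure a vNIC with SR-IOV virtual
# function ("pass-through") capability on a specific logical network.
# All field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class VnicRequest:
    vm_name: str              # the VM whose vNIC is being configured
    vnic_name: str            # the virtual device to create or reconfigure
    network_id: str           # logical network identifier, e.g. a VLAN tag or alias
    passthrough: bool = True  # request an SR-IOV VF rather than a bridged vNIC

request = VnicRequest(vm_name="vm-130a", vnic_name="vnic-135", network_id="vlan-100")
```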
  • VLAN virtual local area network
  • the virtualization manager 115 may then determine whether there is an available SR-IOV virtual function associated with the requested logical network.
  • the virtualization manager 115 may include a VF management component 160 to communicate with hypervisors associated with each of the host servers 120 to determine whether there are any virtual functions available to be assigned. If there are no available virtual functions or they are otherwise unavailable, the VF management component 160 implements techniques of the disclosure to hot-plug VF capabilities into the networked system architecture 100 for use by the VM.
  • the functionality of the VF management component 160 can exist in a fewer or greater number of modules than what is shown, with such modules residing at one or more processing devices of networked system architecture 100 , which may be geographically dispersed.
  • the VF management component 160 may be operable in conjunction with virtualization manager 115 from which it may receive and determine relevant information for hot-plugging of VFs for use by the VMs 130 A-N as discussed in more detail below with respect to FIGS. 2 through 6 .
  • FIG. 2 illustrates an example management computing system 200 including a memory 201 for hot-plugging of virtual functions in a virtualized environment in accordance with one or more aspects of the disclosure.
  • the management computing system 200 includes a processing device 203 operatively coupled to the memory 201 .
  • the processing device 203 may be provided by one or more processors, such as a general purpose processor, for executing instructions.
  • the memory 201 may include a volatile or non-volatile memory device, other types of computer-readable media, or a combination thereof, capable of storing relevant data related to the virtual functions and instructions for carrying out the operations of the management computing system 200 .
  • the memory 201 and processing device 203 may correspond to a memory and processing device within system architecture 100 of FIG. 1 .
  • host controller 110 and/or host server 120 of the networked system architecture 100 may comprise the memory 201 and processing device 203 , or some combination thereof, for hot-plugging virtual functions as disclosed herein.
  • the processing device 203 may execute instructions stored in the memory for carrying out the operations of the modules as discussed herein.
  • management computing system 200 may include modules for hot-plugging virtual functions. These modules may include a VF identifier module 202 , a virtual function (VF) threshold comparison module 204 and a VF creator module 206 stored in memory 201 of the computing system 200 .
  • Virtualization manager 115 may use management computing system 200 to hot-plug a SR-IOV virtual function when no VFs are currently available on a logical network device (e.g., SR-IOV NIC) to associate with a VM.
  • the virtualization manager 115 may receive a client request to configure a VM with SR-IOV virtual function capabilities (e.g., assign a virtual function to a vNIC 235 of VM 230 ) for connecting to a network for use with some client applications.
  • the virtualization manager 115 may execute the VF identifier module 202 to determine whether there is a virtual function available to support connectivity to the network specified in the request.
  • the VF identifier module 202 may identify a plurality of hypervisors associated with the virtualization manager 115 .
  • the virtualization manager 115 may maintain a reference identifier to the hypervisors, in memory, in a configuration file, in a data store 129 or in any similar storage device, as hypervisors are added for management in the networked system architecture 100 .
  • the VF identifier module 202 may examine each hypervisor from the plurality of hypervisors and determine whether there are any virtual functions available to be assigned to the VM 230 that are associated with the requested network.
  • the module 202 may make the determination that all of the VFs are unavailable based on the availability status of the virtual functions, such as VFs 215 , for a logical network device (e.g., SR-IOV NIC 210 ) associated with each hypervisor of the plurality of hypervisors. For example, module 202 may send a request to an agent, such as agent 145 , executing on the hypervisors for the availability status of the virtual functions of the logical network device associated with the hypervisor.
  • the availability status may indicate whether the virtual function is available for assignment according to whether it has already been assigned to a virtual machine, whether it has been attached to some other type of device, or whether it is unavailable to be assigned for other reasons.
  • the availability status may be stored locally by the virtualization manager 115 in a data store 129 , in a memory space, or in any similar manner, and retrieved by the module 202 .
  • the management computing system 200 may provide a notification to the virtualization manager 115 of the availability status of the virtual function.
  • the virtualization manager 115 may assign the available virtual function to the vNIC 235 of VM 230 . Subsequently, the virtualization manager 115 may launch the VM 230 on the hypervisor associated with the available virtual function. If the VF identifier module 202 determines that there is no available virtual function or that the virtual functions to provide connectivity to the network are unavailable, the management computing system 200 may attempt to increase the number of available virtual functions for a logical network device that can provide support for the vNIC 235 of VM 230 .
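  • a condensed sketch of this availability scan is shown below; the hypervisor objects and their agent reporting interface are hypothetical stand-ins for the agents (e.g., agent 145 ) described above.

```python
# Sketch of the availability scan: walk the hypervisors known to the
# virtualization manager and look for a free VF on the requested logical
# network. The hypervisor/agent objects and report format are hypothetical.
from typing import Optional, Tuple

def find_available_vf(hypervisors, network_id: str) -> Optional[Tuple[object, str]]:
    """Return (hypervisor, vf_id) for a free VF on network_id, or None."""
    for hv in hypervisors:
        for vf in hv.agent.vf_report():        # availability status per VF
            if vf["network"] == network_id and vf["status"] == "available":
                return hv, vf["id"]
    return None  # no free VF: fall back to increasing VFs, bridging, or migration
```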
  • the VF identifier module 202 identifies a logical network device that can support the network.
  • a network device detection component 203 of the module 202 may detect a logical network device (e.g., SR-IOV NIC 210 ) being associated with a number of virtual functions (e.g., VFs 215 ) currently activated to support the network.
  • the network device detection component 203 examines the logical network devices associated with the hypervisors managed by the virtualization manager 115 to identify those devices that can support the network.
  • the management computing system 200 may determine a possible maximum number of VFs for that device. For example, the maximum number of VFs for the device 210 may be coded into the device, for example, by a device vendor. The maximum number is compared to the VFs currently configured for the logical network device. If the number of VFs currently configured for the logical network device is not at the maximum number of VFs for that device, the management computing system 200 determines whether the number of VFs currently configured for the logical network device is over a determined threshold.
  • VF threshold comparison module 204 may compare the number of the virtual functions currently configured for a logical network device to a threshold amount of virtual functions that may be coupled to that device. For example, the VF threshold comparison module 204 may compare the number of VFs 215 currently configured for device SR-IOV NIC 210 to a VF threshold 214 for that device.
  • the VF threshold 214 may be set in several ways. In some implementations, VF threshold 214 may be set by the virtualization manager 115 in view of network configuration settings provided by, for example, an administrative user. In this regard, the administrative user may set the VF threshold 214 via an interface, such as interface 119 , of the virtualization manager 115 .
  • the VF threshold 214 may be set to a value so that the total number of VFs configured for the logical network device does not adversely impact the throughput of network traffic transferred over the device.
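  • the comparison can be pictured with the short sketch below, which reads the standard Linux sysfs SR-IOV attributes; the interface name and threshold value are assumptions.

```python
# Sketch of the threshold check: compare the VFs currently configured on an
# SR-IOV NIC against the device maximum reported by the kernel and against an
# administrator-defined threshold (VF threshold 214 in the text). The interface
# name and threshold value are assumptions; the sysfs attributes are standard.
from pathlib import Path

def can_add_vf(ifname: str, vf_threshold: int) -> bool:
    dev = Path(f"/sys/class/net/{ifname}/device")
    current = int((dev / "sriov_numvfs").read_text())
    maximum = int((dev / "sriov_totalvfs").read_text())
    # Room for one more VF only if below both the hardware maximum and the
    # administrator-defined threshold.
    return current < maximum and current < vf_threshold

print(can_add_vf("enp3s0f0", vf_threshold=8))  # hypothetical NIC and threshold
```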
  • the VF creator module 206 may identify or create a new VF for the device. For example, the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210 . In order to increase the number of VFs 215 , the module 206 executes a VF hot-plugging component 208 to un-plug all of the VFs 215 from the vNICs using them.
  • the VF hot-plugging module 208 may send a request to the virtualization manager 115 so that the VFs 215 are released by the vNICs, such as vNIC 235 , assigned to the VMs, such as VM 230 .
  • the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210 .
  • the VF creator module 206 may adjust a value for a setting on the SR-IOV NIC 210 that indicates the number of VFs supported by this device.
  • the VF creator module 206 may adjust the value so that the SR-IOV NIC 210 can support one additional virtual function.
  • the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210 as needed by the VMs so long as the increase is still below the VF threshold 214 .
  • the VF creator module 206 may limit an amount of the increase based on a determination of the service impact on network throughput associated with SR-IOV NIC 210 in view of the increase. This determination ensures that the increase in VFs does not surpass a certain threshold level that could severely impact network traffic transmission and receiving rates at the device.
  • the VF hot-plugging component 208 then re-plugs all of the VFs 215 that it un-plugged.
  • the module 208 may send a request to the virtualization manager 115 to reassign the VFs of the device to vNICs that were previously released.
  • the virtualization manager 115 then assigns the newly available VF to the vNIC 235 of the newly created VM 230 .
  • the virtualization manager 115 may provide a notification of an availability of the new vNIC 235 associated with the VM 230 , for example, to a client via the interface 119 .
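  • the overall un-plug, increase, re-plug sequence could look roughly like the sketch below; the manager calls are hypothetical placeholders for the operations described above, while the sriov_numvfs writes use the standard Linux sysfs interface, which requires the count to be reset to 0 before a new value is written.

```python
# Rough sketch of the hot-plug flow: release the VFs in use, raise the VF
# count on the NIC, then reattach the released VFs; the extra VF created by
# the increase is left free for the new vNIC. The manager calls are
# hypothetical; the sysfs writes require root privileges.
from pathlib import Path

def grow_vf_pool(ifname: str, new_count: int, manager, vf_assignments):
    numvfs = Path(f"/sys/class/net/{ifname}/device/sriov_numvfs")

    # 1. Un-plug every VF of this NIC from the vNICs currently using it.
    for vm, vnic in vf_assignments:
        manager.unplug_vf(vm, vnic)

    # 2. Recreate the VF pool at the larger size (reset to 0 first).
    numvfs.write_text("0")
    numvfs.write_text(str(new_count))

    # 3. Re-plug the previously released VFs back into their vNICs.
    for vm, vnic in vf_assignments:
        manager.replug_vf(vm, vnic)
```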
  • FIG. 3 depicts another view 300 of the example networked system architecture 100 of FIG. 1 in accordance with one or more aspects of the disclosure.
  • the network architecture 100 includes a host controller 110 coupled to one or more host servers, such as host server 120 and host server 380 , over a network 125 .
  • the host controller 110 includes virtualization manager 115 to manage the allocation of resources of hypervisors 140 , 390 associated with each of the host servers 120 , 380 .
  • the hypervisors 140 and 390 may use an SR-IOV enabled NIC device to implement virtual networking connectivity for the VMs executed by the hypervisors, which allows the VMs to connect with network 125 .
  • hypervisor 140 may associate a virtual device, such as vNIC 335 , of VM 332 to certain virtual functions (VFs) and physical functions (PFs) of the device to be utilized by the VM 332 .
  • VF management component 170 of the virtualization manager may couple VM 332 to virtual bridge 345 until a VF is available.
  • the virtual bridge 345 may be used to provide network connectivity to VM 332 . Once a VF becomes available the VF management component 170 may decouple the VM 332 from virtual bridge 345 and recouple it to the newly available VF.
  • hypervisor 140 may implement the virtual bridge 345 .
  • the virtual bridge 345 is a component (e.g., computer-readable instructions implementing software) used to unite two or more network segments.
  • the virtual bridge 345 behaves similar to a virtual network switch, working transparently, such that virtual machines that are connected to the bridge 345 do not know about its existence. Both real and virtual devices of the VMs may be connected to the virtual bridge 345 .
  • the hypervisor 140 may provide the functionality of the virtual bridge 345 by connecting the VMs, such as VM 332 , that the hypervisor manages to the bridge.
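  • a minimal sketch of coupling a VM to such a bridge with the libvirt Python bindings follows; the bridge and domain names are hypothetical.

```python
# Minimal sketch: hot-plug a bridged vNIC into a running VM so it has
# layer-2 connectivity while no VF is free. Bridge and VM names are
# hypothetical; the XML is standard libvirt domain interface XML.
import libvirt

BRIDGE_INTERFACE_XML = """
<interface type='bridge'>
  <source bridge='virbr345'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vm-332")
dom.attachDeviceFlags(BRIDGE_INTERFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```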
  • the VF management component 170 may scan network 125 to identify another host, such as a second host server 380 , that has an available VF 355 on the network 125 .
  • the VF management component 170 may execute the network device detection component 203 of FIG. 2 to identify logical network device 350 that has the available VF 355 .
  • the VF identifier module 202 may determine whether there is a virtual function available to support connectivity to the network 125 by scanning the hypervisors being managed by the virtualization manager 115 .
  • the VF management component 170 may also determine whether a second hypervisor 390 on a second host machine 380 meets the running requirements to be able to execute the VM 332 .
  • the VF management component 170 may hot-plug an available VF 355 into a device 350 associated with the second hypervisor 390 using, for example, the management computing system 200 of FIG. 2 .
  • the VF management component 170 may couple VM 332 to the virtual bridge that is coupled to logical network device 350 in order to provide network connectivity to VM 332 .
  • the VF management component 170 monitors the host server 120 to determine if a virtual function becomes available.
  • the VF management component 170 may monitor the availability status of the virtual functions associated with the host server 120 .
  • the VF management component 170 may receive the availability status for each virtual function from the virtualization manager 115 .
  • the availability status may be stored locally by the virtualization manager 115 in a data store 129 , in a memory space, or in any similar storage device.
  • the management component 170 monitors the VMs that operate on hypervisor 140 for utilization changes associated with the virtual functions. For example, the management component 170 may determine whether the hypervisor 140 has stopped one or more of the VMs, migrated the VMs to a new hypervisor, unplugged a NIC from the VMs, etc. In response to the monitoring, the management component 170 may detect a newly available virtual function associated with the first host machine. In such a case, the management component 170 decouples VM 332 from the virtual bridge 345 by unplugging vNIC 335 from the bridge. Thereafter, the management component 170 associates or otherwise assigns the vNIC 335 of the VM 332 to the newly available virtual function of host server 120 .
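  • the monitor-and-switch step can be summarized with the sketch below; all of the manager calls are hypothetical stand-ins for the operations of the VF management component.

```python
# Sketch of the monitor-and-switch step: poll the host for a VF that has
# become free (a VM stopped, migrated, or had its NIC unplugged), then swap
# the VM's bridged vNIC for the newly available VF. Manager calls are
# hypothetical.
import time

def switch_to_vf_when_available(manager, vm, vnic, network_id, poll_seconds=5):
    while True:
        vf = manager.find_free_vf(host=vm.host, network=network_id)
        if vf is not None:
            manager.detach_bridge_interface(vm, vnic)  # decouple from the bridge
            manager.attach_vf(vm, vnic, vf)            # associate with the new VF
            return vf
        time.sleep(poll_seconds)                       # keep monitoring
```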
  • the VF management component 170 may issue a request to the virtualization manager 115 to "live" migrate the VM 332 from host server 120 to the second host server 380 with the available VF.
  • Live migration herein refers to the process of moving a running virtual machine from an origin host computer system to a destination host computer system without disrupting the guest operating system and/or the applications executed by the virtual machine.
  • the virtualization manager 115 may copy a portion of an execution state of the virtual machine being migrated from host server 120 to destination host server 380 while the virtual machine is still running at the origin host. Upon completing the copy, the virtualization manager 115 stops virtual machine 332 at the first hypervisor 140 and re-starts the virtual machine 332 at the second hypervisor 390 . Thereupon, the virtualization manager 115 assigns an available VF 355 of the second hypervisor 390 to vNIC 335 and sends a notification of an availability of vNIC 335 , which indicates the availability status of the available VF 355 for accessing network 125 .
  • FIG. 4 depicts a flow diagram of a method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • the processing device 203 of FIG. 2 , as directed by the VF management component 160 of FIG. 1 , may implement method 400 .
  • the method 400 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (e.g., software executed by a general purpose computer system or a dedicated machine), or a combination of both.
  • some or all of the method 400 may be performed by other components of a shared storage system.
  • the blocks depicted in FIG. 4 can be performed simultaneously or in a different order than that depicted.
  • Method 400 begins at block 410 where a request to configure a virtual device of a virtual machine associated with a host machine with access to a specified network is received.
  • a determination is made that virtual functions to the specified network are unavailable on the host machine.
  • In block 430 , an available virtual function associated with a logical network device over the specified network is identified.
  • the virtual machine is coupled to a virtual bridge coupled to the second logical network device.
  • the virtual device of the virtual machine is associated with the available virtual function.
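  • a condensed sketch of this flow is given below; every helper on the manager object is a hypothetical stand-in for the operations performed by the VF management component.

```python
# Condensed sketch of method 400. The manager helpers are hypothetical.
def method_400(manager, request):
    # Block 410: receive a request to configure a virtual device of a VM with
    # access to a specified network.
    vm, vnic, network = request.vm, request.vnic, request.network_id

    # Block 420: determine that VFs for the specified network are unavailable
    # on the VM's own host machine.
    assert not manager.has_free_vf(vm.host, network)

    # Block 430: identify an available VF associated with another logical
    # network device over the specified network.
    device, vf = manager.find_vf_on_other_device(network)

    # Couple the VM to a virtual bridge coupled to that logical network device,
    # then associate the virtual device of the VM with the available VF.
    manager.attach_to_bridge(vm, vnic, device.bridge)
    manager.attach_vf(vm, vnic, vf)
```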
  • FIG. 5 depicts a flow diagram of another method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • the processing device 203 of FIG. 2 , as directed by the VF management component 160 of FIG. 1 , may implement method 500 .
  • the method 500 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (e.g., software executed by a general purpose computer system or a dedicated machine), or a combination of both.
  • some or all of the method 500 may be performed by other components of a shared storage system.
  • the blocks depicted in FIG. 5 can be performed simultaneously or in a different order than that depicted.
  • Method 500 begins at block 510 where a determination is made that virtual functions are unavailable for a virtual machine running on a first host computer system.
  • the virtual functions are associated with a specified network.
  • an available virtual function associated with the specified network is identified on a second host computer system.
  • the virtual machine is migrated from the first host computer system to the second host computer system.
  • an instruction to associate a virtual device of the virtual machine with the available virtual function associated with the specified network is issued in block 540 .
  • FIG. 6 depicts a block diagram of a computer system operating in accordance with one or more aspects of the disclosure.
  • computer system 600 may correspond to a processing device within system 200 of FIG. 2 .
  • the computer system may be included within a data center that supports virtualization. Virtualization within a data center results in a physical system being virtualized using virtual machines to consolidate the data center infrastructure and increase operational efficiencies.
  • a virtual machine (VM) may be a program-based emulation of computer hardware.
  • the VM may operate based on computer architecture and functions of computer hardware resources associated with hard disks or other such memory.
  • the VM may emulate a physical computing environment, but requests for a hard disk or memory may be managed by a virtualization layer of a host machine/host device to translate these requests to the underlying physical computing hardware resources. This type of virtualization results in multiple VMs sharing physical resources.
  • computer system 600 may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems.
  • Computer system 600 may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment.
  • Computer system 600 may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.
  • the term "computer" shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein.
  • the computer system 600 may include a processing device 602 , a volatile memory 604 (e.g., random access memory (RAM)), a non-volatile memory 606 (e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage domain 616 , which may communicate with each other via a bus 608 .
  • Processing device 602 may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
  • Computer system 600 may further include a network interface device 622 .
  • Computer system 600 also may include a video display unit 610 (e.g., an LCD), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), and a signal generation device 620 .
  • Data storage domain 616 may include a non-transitory computer-readable storage medium 624 on which may be stored instructions 626 encoding any one or more of the methods or functions described herein, including instructions for implementing method 400 of FIG. 4 or method 500 of FIG. 5 for hot-plugging of virtual functions.
  • Instructions 626 may also reside, completely or partially, within volatile memory 604 and/or within processing device 602 during execution thereof by computer system 600 , hence, volatile memory 604 and processing device 602 may also constitute machine-readable storage media.
  • while non-transitory computer-readable storage medium 624 is shown in the illustrative examples as a single medium, the term "computer-readable storage medium" shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions.
  • the term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein.
  • the term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • the methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICs, FPGAs, DSPs or similar devices.
  • firmware modules or functional circuitry within hardware devices may implement the methods, components, and features of the disclosure.
  • the methods, components, and features may be implemented in any combination of hardware devices and computer program components, or in computer programs.
  • terms such as "selecting," "determining," "adjusting," "comparing," "identifying," "associating," "monitoring," "migrating," "issuing," "plugging," "un-plugging" or the like refer to actions and processes performed or implemented by computer systems that manipulate and transform data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation.
  • Examples described herein also relate to an apparatus for performing the methods described herein.
  • This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system.
  • a computer program may be stored in a computer-readable tangible storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Stored Programmes (AREA)

Abstract

Implementations of the disclosure provide for hot-plugging of virtual functions in a virtualized environment. In one implementation, a computer system determines that virtual functions associated with a logical network for a virtual machine hosted on a first host system are unavailable on the first host system, identifies a logical network device on a second host system that is communicably accessible from the first host system, and determines that the logical network device on the second host system has a number of available virtual functions associated with the logical network. The computer system then migrates the virtual machine from the first host computer system to the second host computer system to allow the virtual machine to access the number of available virtual functions associated with the logical network on the second host system and associates a virtual device of the virtual machine with the number of available virtual functions.

Description

    RELATED APPLICATIONS
  • This application is a continuation of co-pending U.S. patent application Ser. No. 15/239,172, filed on Aug. 17, 2016, the entire contents of which are hereby incorporated by reference herein.
  • TECHNICAL FIELD
  • The disclosure is generally related to network devices, and is more specifically related to hot-plugging of virtual functions in a virtualized environment.
  • BACKGROUND
  • A virtual machine (VM) is a portion of software that, when executed on appropriate hardware, creates an environment allowing for the virtualization of an actual physical computer system (e.g., a server, a mainframe computer, etc.). The actual physical computer system is typically referred to as a “host machine,” and the operating system (OS) of the host machine is typically referred to as the “host operating system.” The operating system (OS) of the virtual machine is typically referred to as the “guest operating system.”
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosure is illustrated by way of examples, and not by way of limitation, and may be more fully understood with references to the following detailed description when considered in connection with the figures, in which:
  • FIG. 1 depicts a block diagram of an example of networked system architecture for hot-plugging of virtual functions in a virtualized environment in accordance with one or more aspects of the disclosure.
  • FIG. 2 illustrates an example management computing system including a memory for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 3 depicts another view of the example networked system architecture of FIG. 1 in accordance with one or more aspects of the disclosure.
  • FIG. 4 depicts a flow diagram of a method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 5 depicts a flow diagram of another method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure.
  • FIG. 6 depicts a block diagram of an illustrative computing device operating in accordance with the examples of the disclosure.
  • DETAILED DESCRIPTION
  • The disclosure provides techniques for hot-plugging of virtual functions in a virtual environment. In a virtualized environment, a virtual machine (VM) may comprise one or more “virtual devices,” each of which may be associated with a physical device (e.g., a network interface device, an I/O device such as a CD-ROM drive, a disk array, etc.) of a host machine. A virtualization management system, or “virtualization manager,” can manage the allocation of the host resources to the VMs, monitor the status of the VMs, as well as the progress of commands and processes being executed by the VMs, and generally manage operations in the system.
  • A network interface controller (NIC) is a network hardware device that connects the host machine to a network. The NIC may include electronic circuitry used to communicate via a physical layer and data link layer standard, such as Ethernet, Fibre Channel, Wi-Fi, or Token Ring, to name a few examples. The NIC may implement the Open Systems Interconnection (OSI) layer 1 (physical layer) and OSI layer 2 (data link layer) standards, thus providing physical access to a networking medium and a low-level addressing system using media access control (MAC) addresses, in order to allow computing devices to communicate over a wired or wireless network.
  • Single Root I/O Virtualization (SR-IOV) is a virtualization networking specification that allows a physical device, such as a NIC, to isolate access to its PCI Express (PCIe) hardware resources for manageability and performance reasons. For example, the SR-IOV allows different VMs in a virtual environment to share a single PCI Express hardware interface without interfering with each other. These hardware resources may include physical functions and virtual functions. Physical functions (PFs) are full-featured PCIe devices that include all configuration resources and SR-IOV capabilities for the device. Virtual functions (VFs) are “lightweight” PCIe functions that contain the resources necessary for data movement, but have a minimized set of configuration resources.
  • A physical NIC on a host machine that is configured based on SR-IOV specifications (also referred to herein as a logical network device) can enable network traffic to flow directly between the VMs that share the same host and VFs of the SR-IOV-enabled NIC. For example, a SR-IOV-enabled NIC may be referred to as supporting a pass-through mode for assigning I/O devices to VMs. In some implementations, a virtual device, such as a virtual NIC (vNIC), associated with a virtual machine can be connected directly (e.g., as opposed to being connected via the virtualization layer of the host machine) to a virtual function using the SR-IOV. This may reduce data transfer latency between the VM and the physical device, because the virtualization layer of the host machine is bypassed, and may lower the CPU utilization devoted to data packet transfers.
  • In some situations, the virtualization manager may create a VM with SR-IOV virtual function capabilities (e.g., assign a virtual function to the vNIC of the VM). For example, the VM may be used to run client applications. In some implementations, the virtualization manager may not be able to identify an available VF of a SR-IOV enabled NIC associated with a certain network for assigning to the VM. For example, all current VFs associated with a host system may be already attached to other VMs. In such situations, the virtualization manager may generate an error or other types of indicators that there are no available VFs on the system, which can block or otherwise adversely affect the client applications that intended to use the VM.
  • Aspects of the present disclosure can provide the ability for a virtualization manager to hot-plug an SR-IOV virtual function for a VM when no VFs of a physical device are currently available. More particularly, a computer system may detect that a host system does not have any available VFs of SR-IOV NICs associated with a certain network and provide various alternative techniques to provide the capabilities of a VF on the host system. In accordance with the disclosure, the computer system may hot-plug the SR-IOV virtual function by identifying or otherwise creating a new VF to associate with the VM on the fly and without stopping the VM. In some implementations, these alternative techniques may be selected by the system in response to the virtualization manager determining that there are no VFs currently available, based on, for example, system configuration data, user service level agreements (SLAs), or other types of system performance data, or the techniques may be provided for selection by a user, such as an administrator, via an interface of the virtualization manager.
  • In one such technique for the virtualization manager to provide a VF associated with a certain network, the computer system of the disclosure may create a new VF by increasing the number of VFs associated with the host system. In some implementations, the computer system may compare the number of currently configured VFs for a SR-IOV NIC associated with the host system to a threshold number of VFs that the SR-IOV NIC can support. For example, the SR-IOV NIC may be configured to support only a certain threshold number of a maximum possible number of VFs for the device to achieve a certain performance level. If this threshold number for the SR-IOV NIC has not been reached, then the computer system may increase the number of currently configured VFs for that SR-IOV NIC to support the new VF.
  • In another such technique, when all VFs are allocated and new VFs cannot be created, the VM that needs the SR-IOV virtual function may be migrated to another host machine that has an available VF on the specified network. For example, the computer system may scan the network to discover another host that has a free VF and is capable of meeting the running requirements, such as system memory capacity, processor speed, etc., for executing the VM. Thereupon, the computer system issues a command to "live" migrate the VM to the identified host.
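  • A minimal sketch of this live-migration fallback using the libvirt Python bindings is shown below; the host URIs, domain name, and VF interface XML are hypothetical placeholders.

```python
# Minimal sketch: live-migrate the running VM to a host that still has a free
# VF, then hot-plug that VF. Host URIs, the domain name, and the VF PCI
# address are hypothetical.
import libvirt

VF_INTERFACE_XML = """
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x81' slot='0x10' function='0x0'/>
  </source>
</interface>
"""

src = libvirt.open("qemu+ssh://host-120/system")
dst = libvirt.open("qemu+ssh://host-380/system")

dom = src.lookupByName("vm-332")
# Live migration moves the running guest without disrupting its OS or apps.
migrated = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

# Once running on the destination, attach the free VF as a pass-through vNIC.
migrated.attachDeviceFlags(VF_INTERFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
src.close()
dst.close()
```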
  • In yet another technique for the virtualization manager to provide a VF associated with a certain network, the VM may be coupled to a virtual bridge if there is no available VF on the network or if there is no available logical device on that network. The virtual bridge may provide Open Systems Interconnection (OSI) model layer 2 (data link layer) connectivity between two or more network segments. For example, the virtual bridge may be used to provide network connectivity to the VM. In some implementations, the computer system may identify an available VF on another logical device on the network (or another logical device on which a new VF may be created). Then, the computer system may couple the VM to the virtual bridge to provide connectivity to the other logical device, thereby providing the requisite network connectivity to the VM. If a VF becomes available on the host associated with the VM, the computer system decouples the VM from the bridge and associates it with the newly available VF on the host. Still further, other options for hot-plugging VFs are possible by utilizing the techniques disclosed herein.
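  • One way to picture the bridge fallback is the sketch below; the couple/decouple/assign operations are assumed placeholders for whatever hypervisor calls actually perform these steps.

    def attach_with_bridge_fallback(vnic, bridge, host, network_id):
        """Give the vNIC connectivity through a bridge until a VF frees up on the host."""
        bridge.couple(vnic)                      # layer-2 connectivity via the virtual bridge
        vf = host.wait_for_free_vf(network_id)   # blocks (or polls) until a VF becomes available
        bridge.decouple(vnic)                    # drop the temporary bridge attachment
        vf.assign(vnic)                          # pass the newly available VF through to the vNIC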
  • FIG. 1 is an example of a networked system architecture 100 in which implementations of the disclosure can operate. Other architectures for the networked system architecture 100 are possible, and the techniques of the disclosure are not necessarily limited to the specific architecture depicted by FIG. 1.
  • As shown in FIG. 1, the network architecture 100 includes a host controller 110 coupled to one or more host servers, such as host server 120, over a network 125. The network 125 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), a wired network (e.g., Ethernet network), a wireless network (e.g., an 802.11 network or a Wi-Fi network), a cellular network (e.g., a Long Term Evolution (LTE) network), routers, hubs, switches, server computers, and/or a combination thereof. Host controller 110 may be an independent machine that includes one or more computing devices (such as a rackmount server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, etc.), data stores (e.g., hard disks, memories, databases, etc.), networks, software components, and/or hardware components. Alternatively, the host controller 110 may be part of the host server 120.
  • In some implementations, the host controller 110 may include a virtualization manager 115 to manage the allocation of resources associated with the one or more host servers. In one implementation, host server 120 may be part of a virtualization system. Virtualization may be viewed as abstraction of some physical components into logical objects in order to allow running various software modules, for example, multiple operating systems, concurrently and in isolation from other software modules, on one or more interconnected physical computer systems. Virtualization allows, for example, consolidating multiple physical servers into one physical server running multiple VMs in order to improve the hardware utilization rate. In some implementations, virtualization manager 115 may manage provisioning of a new VM, connection protocols between clients and VMs, user sessions (e.g., user authentication and verification, etc.), backup and restore, image management, virtual machine migration, load balancing, and so on. The virtualization manager 115 may add a VM, delete a VM, balance the load on the host cluster, provide directory service to the VMs, and/or perform other management functions.
  • Host server 120 may comprise server computers or any other computing devices capable of running one or more virtual machines (VM) 130A through 130N. Each VM may be a software implementation of a machine that executes programs as though it were an actual physical machine. Each VM runs a guest operating system (OS) (not pictured) that may be different from one virtual machine to another. The guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc. Each VM may be linked to one or more virtual disks (not shown). These virtual disks can be logical partitions of a physical disk managed by hypervisor 140, can be cloud based storage devices, or can be some other type of virtual storage device. In one embodiment, virtual disks may form a whole or part of a logical data center. In one embodiment, the VMs (130A-N) and virtual disks, together with host server 120, may be collectively referred to as entities in a virtual machine system.
  • The host server 120 may comprise a hypervisor 140 that emulates the underlying hardware platform for the VMs (130A-N). The hypervisor 140 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor. Hypervisor 140 may take many forms. For example, hypervisor 140 may be part of or incorporated in a host operating system (not shown) of host server 120, or hypervisor 140 may be running on top of the host operating system. Alternatively, hypervisor 140 may be a "bare metal" hypervisor that runs on hardware of host server 120 without an intervening operating system. Some examples of hypervisors include the quick emulator (QEMU), the kernel-based virtual machine (KVM), etc.
  • The hypervisor 140 may execute an agent 145 or another type of process that monitors the status of devices running on host server 120. For example, agent 145 can monitor a runtime status of the VMs (130A-N), hardware configuration of devices, network and storage connectivity on the host server 120, and similar VM-related and host-related information. As this information is collected, the agent 145 may store it for later use. For example, agent 145 may save this information in a local memory space. Alternatively, agent 145 may save the information to a data store 129 accessible by the virtualization manager 115.
  • In some implementations, the data store 129 may be a memory (e.g., random access memory), a cache, a drive (e.g., a hard drive), a flash drive, a database system, or another type of component or device capable of storing data. In one implementation, agent 145 can send, receive, and store information regarding the VMs (130A-N) via an interface 119 of the virtualization manager 115. Agent 145 may additionally provide and store information, upon request from virtualization manager 115, relating to one or more network devices associated with the network architecture 100, which may or may not support virtual function capabilities, as further discussed below.
  • In some implementations, the networked system architecture 100 may comprise one or more devices, such as devices 150A and 150B (e.g., a network interface device, an I/O device such as a CD-ROM drive, a disk array, etc.), which are available on the host server 120. In some implementations, the hypervisor 140 may use the devices, such as device 150A, to implement virtual networking connectivity for the VMs (130A-N), which allows the VMs to connect with network 125. For example, device 150A may be a single-root I/O virtualization (SR-IOV)-enabled NIC that enables network traffic to flow directly between the VMs (130A-N) and the device 150A. Hypervisor 140 may support the SR-IOV specification, which allows two or more VMs (130A-N) to share a single physical device (e.g., device 150A). Hypervisor 140 may include an SR-IOV interface component 149 that provides SR-IOV specification support. Virtual networking with an SR-IOV-enabled NIC may be referred to as supporting a pass-through mode for assigning I/O devices, such as SR-IOV NIC 150A, to VMs 130A-130N.
  • When the hypervisor 140 associates SR-IOV NIC 150A with the VMs 130A-130N, the SR-IOV specification may indicate certain physical functions (PFs) 155 and virtual functions (VFs) 158 to be utilized by the VMs. PFs 155 are full-featured Peripheral Component Interconnect Express (PCIe) devices that may include all configuration resources and capabilities for the I/O device. VFs 158 are "lightweight" PCIe functions that contain the resources for data movement but may have a minimized set of configuration resources. Each VF is derived from a corresponding PF. Although for simplicity only one physical function 155 and one virtual function 158 are depicted, in other implementations, multiple physical functions 155 and virtual functions 158 may be present.
  • A plurality of VFs may be supported by a given logical device. In some implementations, the number of VFs (e.g., VFs 158) that may be supported by a given device (e.g., device 150A) may be limited by the underlying hardware of the device. In an illustrative example, a single Ethernet port may be mapped to multiple VFs that can be shared by one or more of the VMs 130A-130N. An I/O device, such as a virtual NIC device (vNIC) 135, associated with one of the VMs 130A-130N may be provided via a VF, thus bypassing the virtual networking on the host in order to reduce the latency between the VMs 130A-130N and the underlying SR-IOV NIC (e.g., device 150A). The SR-IOV interface component 149 of hypervisor 140 is used to detect and initialize PFs and VFs correctly and appropriately.
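  • On Linux hosts, the currently configured and maximum VF counts for an SR-IOV-capable NIC are commonly exposed through sysfs; the sketch below reads those attributes, assuming the conventional /sys/class/net/<device>/device paths (paths and availability may vary by driver and platform).

    from pathlib import Path

    def sriov_capacity(netdev: str) -> tuple:
        """Return (configured VFs, hardware maximum VFs) for a NIC such as 'ens1f0'."""
        device = Path("/sys/class/net") / netdev / "device"
        current = int((device / "sriov_numvfs").read_text())    # VFs currently configured
        maximum = int((device / "sriov_totalvfs").read_text())  # hardware maximum for the device
        return current, maximum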
  • In some implementations, hypervisor 140 may support a “pass-through” mode for assigning one or more VFs 158 to the VMs 130A-130N, by utilizing SR-IOV interface component 149. For example, the SR-IOV interface component 149 may be used to map the configuration space of the VFs 158 to the guest memory address range associated with the VMs 130A-130N. In one implementation, each VF may be assigned to a single one of the VMs 130A-130N, as VFs utilize real hardware resources. In some cases, a VM 130A-130N may have multiple VFs assigned to it. A VF appears as a network card on the VM in the same way as a normal network card would appear to an operating system. VFs may exhibit a near-native performance and thus may provide better performance than para-virtualized drivers and emulated access. VFs may further provide data protection between VMs 130A-130N on the same physical server as the data is managed and controlled by the hardware.
  • In implementations of the disclosure, the virtualization manager 115 can receive a request to create a virtual machine with SR-IOV virtual function capabilities of a device. For example, the user may use interface 119 to mark vNIC 135 associated with VM 130A as an SR-IOV virtual function “pass-through” device. The request may comprise an identifier for a selected logical network to be associated with the vNIC 135 and an SR-IOV virtual function capability. The identifier for the selected logical network may be a unique identifier (e.g., virtual local area network (VLAN) tag, network alias, or the like) that identifies a particular logical network available to the virtualization manager.
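  • A request of this kind might carry fields along the lines of the hypothetical structure below; the field names are illustrative only and do not reflect a defined API of the virtualization manager.

    from dataclasses import dataclass

    @dataclass
    class VnicPassthroughRequest:
        vm_id: str            # virtual machine whose vNIC should be backed by a VF
        vnic_id: str          # the vNIC to mark as an SR-IOV "pass-through" device
        network_id: str       # identifier of the selected logical network (e.g., a VLAN tag or alias)
        passthrough: bool = True

    request = VnicPassthroughRequest(vm_id="vm-130a", vnic_id="vnic-135", network_id="vlan-100")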
  • The virtualization manager 115 may then determine whether there is an available SR-IOV virtual function associated with the requested logical network. For example, the virtualization manager 115 may include a VF management component 160 to communicate with the hypervisors associated with each of the host servers 120 to determine whether there are any virtual functions available to be assigned. If no virtual functions are available, or the existing virtual functions are otherwise unavailable, the VF management component 160 implements techniques of the disclosure to hot-plug VF capabilities into the networked system architecture 100 for use by the VM. The functionality of the VF management component 160 can exist in a fewer or greater number of modules than what is shown, with such modules residing at one or more processing devices of networked system architecture 100, which may be geographically dispersed. The VF management component 160 may be operable in conjunction with virtualization manager 115, from which it may receive and determine relevant information for hot-plugging of VFs for use by the VMs 130A-N, as discussed in more detail below with respect to FIGS. 2 through 6.
  • FIG. 2 illustrates an example of a management computing system 200 including a memory 201 for hot-plugging of virtual functions in a virtualized environment in accordance with one or more aspects of the disclosure. In this example, the management computing system 200 includes a processing device 203 operatively coupled to the memory 201. In some implementations, the processing device 203 may be provided by one or more processors, such as a general purpose processor, for executing instructions. The memory 201 may include a volatile or non-volatile memory device, other types of computer-readable medium, or a combination thereof that is capable of storing relevant data related to the virtual functions and instructions for carrying out the operations of the management computing system 200.
  • In some implementations, the memory 201 and processing device 203 may correspond to a memory and processing device within system architecture 100 of FIG. 1. For example, host controller 110 and/or host server 120 of the networked system architecture 100 may comprise the memory 201 and processing device 203, or some combination thereof, for hot-plugging virtual functions as disclosed herein. The processing device 203 may execute instructions stored in the memory for carrying out the operations of the modules as discussed herein.
  • As shown in FIG. 2, management computing system 200 may include modules for hot-plugging virtual functions. These modules may include a VF identifier module 202, a virtual function (VF) threshold comparison module 204, and a VF creator module 206 stored in memory 201 of the computing system 200. Virtualization manager 115 may use management computing system 200 to hot-plug a SR-IOV virtual function when no VFs are currently available on a logical network device (e.g., SR-IOV NIC) to associate with a VM. In some implementations, the virtualization manager 115 may receive a client request to configure a VM with SR-IOV virtual function capabilities (e.g., assign a virtual function to a vNIC 235 of VM 230) for connecting to a network for use with some client applications. In response to the request, the virtualization manager 115 may execute the VF identifier module 202 to determine whether there is a virtual function available to support connectivity to the network specified in the request.
  • To determine whether there are any virtual functions available for running the vNIC 235, the VF identifier module 202 may identify a plurality of hypervisors associated with the virtualization manager 115. For example, the virtualization manager 115 may maintain a reference identifier to the hypervisors, in memory, in a configuration file, in a data store 129, or in any similar storage device, as hypervisors are added for management in the networked system architecture 100. The VF identifier module 202 may examine each hypervisor from the plurality of hypervisors and determine whether there are any virtual functions available to be assigned to the VM 230 that are associated with the requested network.
  • In some implementations, the module 202 may make the determination that all of the VFs are unavailable based on the availability status of the virtual functions, such as VFs 215, for a logical network device (e.g., SR-IOV NIC 210) associated with each hypervisor of the plurality of hypervisors. For example, module 202 may send a request to an agent, such as agent 145, executing on the hypervisors for the availability status of the virtual functions of the logical network device associated with the hypervisor. The availability status may indicate whether the virtual function is available for assignment according to whether it has already been assigned to a virtual machine, whether it has been attached to some other type of device, or whether it is unavailable to be assigned for other reasons. The availability status may be stored locally by the virtualization manager 115 in a data store 129, in a memory space, or in any similar manner, and retrieved by the module 202.
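  • The availability check across hypervisors can be summarized by the sketch below; the agent query and the status fields (assigned_to_vm, attached_to_device) are assumptions standing in for whatever the agents actually report.

    def find_available_vf(hypervisors, network_id):
        """Ask each hypervisor's agent for VF status and return the first free VF, if any."""
        for hv in hypervisors:
            for vf in hv.agent.report_vf_status(network_id):
                if not vf.assigned_to_vm and not vf.attached_to_device:
                    return vf                      # free VF on the requested logical network
        return None                                # all VFs are unavailable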
  • If the VF identifier module 202 determines that there is an available virtual function, the management computing system 200 may provide a notification to the virtualization manager 115 of the availability status of the virtual function. The virtualization manager 115 may assign the available virtual function to the vNIC 235 of VM 230. Subsequently, the virtualization manager 115 may launch the VM 230 on the hypervisor associated with the available virtual function. If the VF identifier module 202 determines that there is no available virtual function, or that the virtual functions to provide connectivity to the network are unavailable, the management computing system 200 may attempt to increase the number of available virtual functions for a logical network device that can provide support for the vNIC 235 of VM 230.
  • To increase the number of available virtual functions, the VF identifier module 202 identifies a logical network device that can support the network. For example, a network device detection component 203 of the module 202 may detect a logical network device (e.g., SR-IOV NIC 210) that is associated with a number of virtual functions (e.g., VFs 215) currently activated to support the network. In some implementations, the network device detection component 203 examines the logical network devices associated with the hypervisors managed by the virtualization manager 115 to identify those devices that can support the network.
  • For each such logical network device discovered by the network device detection component 203, the management computing system 200 may determine a possible maximum number of VFs for that device. For example, the maximum number of VFs for the device 210 may be coded into the device, for example, by a device vendor. The maximum number is compared to the number of VFs currently configured for the logical network device. If the number of VFs currently configured for the logical network device is not at the maximum number of VFs for that device, the management computing system 200 determines whether the number of VFs currently configured for the logical network device has reached a determined threshold.
  • VF threshold comparison module 204 may compare the number of the virtual functions currently configured for a logical network device to a threshold amount of virtual functions that may be coupled to that device. For example, the VF threshold comparison module 204 may compare the number of VFs 215 currently configured for SR-IOV NIC 210 to a VF threshold 214 for that device. The VF threshold 214 may be set in several ways. In some implementations, VF threshold 214 may be set by the virtualization manager 115 in view of network configuration settings provided by, for example, an administrative user. In this regard, the administrative user may set the VF threshold 214 via an interface, such as interface 119, of the virtualization manager 115. The VF threshold 214 may be set to a value so that the total number of VFs configured for the logical network device does not adversely impact the throughput of network traffic transferred over the device.
  • If the total number of VFs configured for the logical network device does not meet the VF threshold 214 for that device in view of the comparison, the VF creator module 206 may identify or create a new VF for the device. For example, the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210. In order to increase the number of VFs 215, the module 206 executes a VF hot-plugging component 208 to un-plug all of the VFs 215 from the vNICs using them. For example, the VF hot-plugging component 208 may send a request to the virtualization manager 115 so that the VFs 215 are released by the vNICs, such as vNIC 235, assigned to the VMs, such as VM 230.
  • After the VFs 215 are unplugged, the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210. For example, the VF creator module 206 may adjust a value for a setting on the SR-IOV NIC 210 that indicates the number of VFs supported by the device. In one implementation, the VF creator module 206 may adjust the value so that the SR-IOV NIC 210 can support one additional virtual function. In some implementations, the VF creator module 206 may increase the number of VFs 215 associated with SR-IOV NIC 210 as needed by the VMs, so long as the increase remains below the VF threshold 214. In some cases, the VF creator module 206 may limit the amount of the increase based on a determination of the service impact on network throughput associated with SR-IOV NIC 210 in view of the increase. This determination ensures that the increase in VFs does not surpass a certain threshold level that could severely impact network traffic transmission and receiving rates at the device.
  • Once the number of VFs 215 is increased to support a new VF, the VF hot-plugging component 208 re-plugs all of the VFs 215 that it previously un-plugged. For example, the component 208 may send a request to the virtualization manager 115 to reassign the VFs of the device to the vNICs that were previously released. The virtualization manager 115 then assigns the newly available VF to the vNIC 235 of the newly created VM 230. Thereafter, the virtualization manager 115 may provide a notification of an availability of the new vNIC 235 associated with the VM 230, for example, to a client via the interface 119.
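  • The un-plug, reconfigure, and re-plug sequence described in the preceding paragraphs can be sketched as follows; the unplug/plug/set_num_vfs/free_vfs calls are hypothetical stand-ins for the hypervisor- and driver-specific operations.

    def grow_vf_pool(nic, assignments, new_total: int):
        """Temporarily unplug existing VFs, raise the VF count, then re-plug them."""
        for vnic, vf in assignments:
            vnic.unplug(vf)                  # release every VF currently attached to a vNIC
        nic.set_num_vfs(new_total)           # reconfigure the NIC with the additional VF(s)
        for vnic, vf in assignments:
            vnic.plug(vf)                    # restore the previous VF assignments
        return nic.free_vfs()[0]             # a newly created VF, ready for the new vNIC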
  • FIG. 3 depicts another view 300 of the example networked system architecture 100 of FIG. 1 in accordance with one or more aspects of the disclosure. As discussed above, the network architecture 100 includes a host controller 110 coupled to one or more host servers, such as host server 120 and host server 380, over a network 125. In this regard, the host controller 110 includes virtualization manager 115 to manage the allocation of resources of the hypervisors 140, 390 associated with each of the host servers 120, 380. In some implementations, the hypervisors 140 and 390 may use an SR-IOV-enabled NIC device to implement virtual networking connectivity for the VMs executed by the hypervisors, which allows the VMs to connect with network 125. For example, hypervisor 140 may associate a virtual device, such as vNIC 335, of VM 332 with certain virtual functions (VFs) and physical functions (PFs) of the device to be utilized by the VM 332.
  • In some situations, there may not be any free VFs to assign to VM 332 because they are all currently in use by other devices, or there may not be a logical device on host server 120 associated with VM 332. In such situations, aspects of the present disclosure can provide the ability for the virtualization manager 115 to hot-plug a SR-IOV virtual function for use by the VM 332. In one example, the VF management component 170 of the virtualization manager may couple VM 332 to virtual bridge 345 until a VF is available. The virtual bridge 345 may be used to provide network connectivity to VM 332. Once a VF becomes available, the VF management component 170 may decouple the VM 332 from virtual bridge 345 and recouple it to the newly available VF.
  • In some implementations, hypervisor 140 may implement the virtual bridge 345. The virtual bridge 345 is a component (e.g., computer-readable instructions implementing software) used to unite two or more network segments. The virtual bridge 345 behaves similarly to a virtual network switch, working transparently, such that virtual machines connected to the bridge 345 are not aware of its existence. Both real and virtual devices of the VMs may be connected to the virtual bridge 345. In some implementations, the hypervisor 140 may provide the functionality of the virtual bridge 345 by connecting the VMs, such as VM 332, that the hypervisor manages to the bridge.
  • In some implementations, the VF management component 170 may scan network 125 to identify another host, such as a second host server 380, that has an available VF 355 on the network 125. For example, the VF management component 170 may execute the network device detection component 203 of FIG. 2 to identify logical network device 350 that has the available VF 355. As noted above, the VF identifier module 202 may determine whether there is a virtual function available to support connectivity to the network 125 by scanning the hypervisors being managed by the virtualization manager 115. In some implementations, the VF management component 170 may also determine whether a second hypervisor 390 on a second host machine 380 meets the running requirements to be able to execute the VM 332. In one implementation, the VF management component 170 may hot-plug an available VF 355 into a device 350 associated with the second hypervisor 390 using, for example, the management computing system 200 of FIG. 2.
  • Once the logical network device 350 is identified, the VF management component 170 couples VM 332 to the virtual bridge that is coupled to logical network device 350 in order to provide network connectivity to VM 332. In some implementations, after the VM 332 is coupled to virtual bridge 345, the VF management component 170 monitors the host server 120 to determine whether a virtual function becomes available. In an illustrative example, the VF management component 170 may monitor the availability status of the virtual functions associated with the host server 120. In some implementations, the VF management component 170 may receive the availability status for each virtual function from the virtualization manager 115. As noted above, the availability status may be stored locally by the virtualization manager 115 in a data store 129, in a memory space, or in any similar storage device.
  • In other implementations, the management component 170 monitors the VMs that operate on hypervisor 140 for utilization changes associated with the virtual functions. For example, the management component 170 may determine whether the hypervisor 140 has stopped one or more of the VMs, migrated a VM to a new hypervisor, unplugged a NIC from one of the VMs, etc. In response to the monitoring, the management component 170 may detect a newly available virtual function associated with the first host machine. In such a case, the management component 170 decouples VM 332 from the virtual bridge 345 by unplugging vNIC 335 from the bridge. Thereafter, the management component 170 associates or otherwise assigns the vNIC 335 of the VM 332 to the newly available virtual function of host server 120.
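  • A simple polling version of this monitoring is sketched below; the poll interval and the host/bridge/VF methods are assumptions made for illustration only.

    import time

    def swap_bridge_for_vf(vnic, bridge, host, network_id, poll_seconds: float = 5.0):
        """Poll the host until a VF on the network frees up, then move the vNIC onto it."""
        vf = host.find_free_vf(network_id)       # freed when a VM stops, migrates, or unplugs a NIC
        while vf is None:
            time.sleep(poll_seconds)
            vf = host.find_free_vf(network_id)
        bridge.decouple(vnic)                    # leave the temporary bridge
        vf.assign(vnic)                          # attach the vNIC to the newly available VF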
  • In some implementations, the VF management component 170 may issue a request to the virtualization manager 115 to "live" migrate the VM 332 from host server 120 to the second host server 380 with the available VF. Live migration herein refers to the process of moving a running virtual machine from an origin host computer system to a destination host computer system without disrupting the guest operating system and/or the applications executed by the virtual machine.
  • During the live migration, the virtualization manager 115 may copy a portion of an execution state of the virtual machine being migrated from host server 120 to destination host server 380 while the virtual machine is still running at the origin host. Upon completing the copy, the virtualization manager 115 stops virtual machine 332 at the first hypervisor 140 and re-starts the virtual machine 332 at the second hypervisor 390. Thereupon, the virtualization manager 115 assigns an available VF 355 of the second hypervisor 390 to vNIC 335 and sends a notification of an availability of vNIC 335, which indicates the availability status of the available VF 355 for accessing network 125.
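  • The live-migration flow, condensed, might look like the sketch below; the copy/stop/restart/find_free_vf methods are hypothetical placeholders rather than the virtualization manager's actual API.

    def live_migrate_for_vf(vm, src_host, dst_host, network_id):
        """Move a running VM to a host with a free VF, then attach that VF to its vNIC."""
        while not vm.state_copy_converged(dst_host):
            vm.copy_state_increment(dst_host)     # copy execution state while the VM keeps running
        src_host.stop(vm)                         # brief pause at the origin host
        dst_host.restart(vm)                      # resume from the copied state at the destination
        vf = dst_host.find_free_vf(network_id)
        vf.assign(vm.vnic)                        # hot-plug the available VF into the migrated VM
        return vf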
  • FIG. 4 depicts a flow diagram of a method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure. In one implementation, the processing device 203 of FIG. 2, as directed by the VF management component 160 of FIG. 1, may implement method 400. The method 400 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (e.g., software executed by a general purpose computer system or a dedicated machine), or a combination of both. In alternative implementations, some or all of the method 400 may be performed by other components of a shared storage system. In some implementations, the blocks depicted in FIG. 4 can be performed simultaneously or in a different order than that depicted.
  • Method 400 begins at block 410, where a request to configure a virtual device of a virtual machine associated with a host machine with access to a specified network is received. In block 420, a determination is made that virtual functions to the specified network are unavailable on the host machine. In block 430, an available virtual function associated with a second logical network device on the specified network is identified. In block 440, the virtual machine is coupled to a virtual bridge coupled to the second logical network device. In block 450, the virtual device of the virtual machine is associated with the available virtual function.
  • FIG. 5 depicts a flow diagram of another method for hot-plugging of virtual functions in accordance with one or more aspects of the disclosure. In one implementation, the processing device 203 of FIG. 2, as directed by the VF management component 160 of FIG. 1, may implement method 500. The method 500 may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (e.g., software executed by a general purpose computer system or a dedicated machine), or a combination of both. In alternative implementations, some or all of the method 500 may be performed by other components of a shared storage system. In some implementations, the blocks depicted in FIG. 5 can be performed simultaneously or in a different order than that depicted.
  • Method 500 begins at block 510, where a determination is made that virtual functions are unavailable for a virtual machine running on a first host computer system. The virtual functions are associated with a specified network. In block 520, an available virtual function associated with the specified network is identified on a second host computer system. In block 530, the virtual machine is migrated from the first host computer system to the second host computer system. Thereupon, in block 540, an instruction is issued to associate a virtual device of the virtual machine with the available virtual function associated with the specified network.
  • FIG. 6 depicts a block diagram of a computer system operating in accordance with one or more aspects of the disclosure. In various illustrative examples, computer system 600 may correspond to a processing device within system 200 of FIG. 2. The computer system may be included within a data center that supports virtualization. Virtualization within a data center results in a physical system being virtualized using virtual machines to consolidate the data center infrastructure and increase operational efficiencies. A virtual machine (VM) may be a program-based emulation of computer hardware. For example, the VM may operate based on computer architecture and functions of computer hardware resources associated with hard disks or other such memory. The VM may emulate a physical computing environment, but requests for a hard disk or memory may be managed by a virtualization layer of a host machine/host device to translate these requests to the underlying physical computing hardware resources. This type of virtualization results in multiple VMs sharing physical resources.
  • In certain implementations, computer system 600 may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems. Computer system 600 may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system 600 may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term "computer" shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein for hot-plugging of virtual functions in a virtualized environment.
  • In a further aspect, the computer system 600 may include a processing device 602, a volatile memory 604 (e.g., random access memory (RAM)), a non-volatile memory 606 (e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a data storage domain 616, which may communicate with each other via a bus 608.
  • Processing device 602 may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
  • Computer system 600 may further include a network interface device 622. Computer system 600 also may include a video display unit 610 (e.g., an LCD), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), and a signal generation device 620.
  • Data storage domain 616 may include a non-transitory computer-readable storage medium 624, which may store instructions 626 encoding any one or more of the methods or functions described herein, including instructions for implementing method 400 of FIG. 4 or method 500 of FIG. 5 for hot-plugging of virtual functions.
  • Instructions 626 may also reside, completely or partially, within volatile memory 604 and/or within processing device 602 during execution thereof by computer system 600; hence, volatile memory 604 and processing device 602 may also constitute machine-readable storage media.
  • While non-transitory computer-readable storage medium 624 is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • The methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICs, FPGAs, DSPs or similar devices. In addition, firmware modules or functional circuitry within hardware devices may implement the methods, components, and features of the disclosure. Further, the methods, components, and features may be implemented in any combination of hardware devices and computer program components, or in computer programs.
  • Unless specifically stated otherwise, terms such as "selecting," "determining," "adjusting," "comparing," "identifying," "associating," "monitoring," "migrating," "issuing," "plugging," "un-plugging" or the like, refer to actions and processes performed or implemented by computer systems that manipulate and transform data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not have an ordinal meaning according to their numerical designation.
  • Examples described herein also relate to an apparatus for performing the methods described herein. This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program may be stored in a computer-readable tangible storage medium.
  • The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform methods 400 and 500 and/or each of their individual functions, routines, subroutines, or operations. Examples of the structure for a variety of these systems are set forth in the description above.
  • The above description is intended to be illustrative, and not restrictive. Although the disclosure has been described with references to specific illustrative examples and implementations, it should be recognized that the disclosure is not limited to the examples and implementations described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled. CLAIMS

Claims (20)

What is claimed is:
1. A computer system, comprising:
a memory; and
a processing device, operatively coupled to the memory, to:
determine that virtual functions associated with a logical network for a virtual machine hosted on a first host system are unavailable on the first host system;
identify a logical network device on a second host system that is communicably accessible from the first host system;
determine that the logical network device on the second host system has a number of available virtual functions associated with the logical network;
migrate the virtual machine from the first host computer system to the second host computer system to allow the virtual machine to access the number of available virtual functions associated with the logical network on the second host system; and
associate a virtual device of the virtual machine with the number of available virtual functions.
2. The computer system of claim 1, wherein to identify the logical network device, the processing device is further to:
identify a hypervisor associated with the logical network that is capable of supporting the virtual functions; and
determine availability of at least one virtual function associated with the identified hypervisor.
3. The computer system of claim 1, wherein the processing device is further to:
notify a client application of availability of the virtual device associated with the virtual machine.
4. The computer system of claim 1, wherein the processing device is further to:
couple the virtual device of the virtual machine to a virtual bridge;
identify an available virtual function associated with a second logical network device on the logical network; and
couple the virtual bridge to the second logical network device.
5. The computer system of claim 1, wherein the processing device is further to:
associate the virtual device with the number of available virtual functions without stopping the virtual machine.
6. The computer system of claim 1, wherein to migrate the virtual machine from the first host computer system to the second host computer system, the processing device is further to:
copy at least a portion of an execution state of the virtual machine while the virtual machine is still running on the first host computer system;
stop the virtual machine on the first host computer system; and
restart the virtual machine on the second host computer system using the portion of the execution state.
7. The computer system of claim 1, wherein the processing device is further to:
scan the logical network to discover the second host system having the number of available virtual functions.
8. A method comprising:
determining that virtual functions associated with a logical network for a virtual machine hosted on a first host system are unavailable on the first host system;
identifying a logical network device on a second host system that is communicably accessible from the first host system;
determining that the logical network device on the second host system has a number of available virtual functions associated with the logical network;
migrating the virtual machine from the first host computer system to the second host computer system to allow the virtual machine to access the number of available virtual functions associated with the logical network on the second host system; and
associating a virtual device of the virtual machine with the number of available virtual functions.
9. The method of claim 8, wherein identifying the logical network device comprises:
identifying a hypervisor associated with the logical network that is capable of supporting the virtual functions; and
determining availability of at least one virtual function associated with the identified hypervisor.
10. The method of claim 8, further comprising:
notifying a client application of availability of the virtual device associated with the virtual machine.
11. The method of claim 8, further comprising:
coupling the virtual device of the virtual machine to a virtual bridge;
identifying an available virtual function associated with a second logical network device on the logical network; and
coupling the virtual bridge to the second logical network device.
12. The method of claim 8, further comprising:
associating the virtual device with the number of available virtual functions without stopping the virtual machine.
13. The method of claim 8, wherein migrating the virtual machine from the first host computer system to the second host computer system comprises:
copying at least a portion of an execution state of the virtual machine while the virtual machine is still running on the first host computer system;
stopping the virtual machine on the first host computer system; and
restarting the virtual machine on the second host computer system using the portion of the execution state.
14. The method of claim 8, further comprising:
scanning the logical network to discover the second host system having the number of available virtual functions.
15. A non-transitory computer readable storage medium, having instructions stored therein, which when executed by a processing device of a management system, cause the processing device to:
determine that virtual functions associated with a logical network for a virtual machine hosted on a first host system are unavailable on the first host system;
identify a logical network device on a second host system that is communicably accessible from the first host system;
determine that the logical network device on the second host system has a number of available virtual functions associated with the logical network;
migrate the virtual machine from the first host computer system to the second host computer system to allow the virtual machine to access the number of available virtual functions associated with the logical network on the second host system; and
associate a virtual device of the virtual machine with the number of available virtual functions.
16. The non-transitory computer readable storage medium of claim 15, wherein to identify the logical network device, the instructions cause the processing device to:
identify a hypervisor associated with the logical network that is capable of supporting the virtual functions; and
determine availability of at least one virtual function associated with the identified hypervisor.
17. The non-transitory computer readable storage medium of claim 15, wherein the instructions further cause the processing device to:
notify a client application of availability of the virtual device associated with the virtual machine.
18. The non-transitory computer readable storage medium of claim 15, wherein the instructions further cause the processing device to:
couple the virtual device of the virtual machine to a virtual bridge;
identify an available virtual function associated with a second logical network device on the logical network; and
couple the virtual bridge to the second logical network device.
19. The non-transitory computer readable storage medium of claim 15, wherein the instructions further cause the processing device to:
associate the virtual device with the number of available virtual functions without stopping the virtual machine.
20. The non-transitory computer readable storage medium of claim 15, wherein to migrate the virtual machine from the first host computer system to the second host computer system, the instructions cause the processing device to:
copy at least a portion of an execution state of the virtual machine while the virtual machine is still running on the first host computer system;
stop the virtual machine on the first host computer system; and
restart the virtual machine on the second host computer system using the portion of the execution state.
US16/579,519 2016-08-17 2019-09-23 Hot-plugging of virtual functions in a virtualized environment Active US11061712B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/579,519 US11061712B2 (en) 2016-08-17 2019-09-23 Hot-plugging of virtual functions in a virtualized environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/239,172 US10423437B2 (en) 2016-08-17 2016-08-17 Hot-plugging of virtual functions in a virtualized environment
US16/579,519 US11061712B2 (en) 2016-08-17 2019-09-23 Hot-plugging of virtual functions in a virtualized environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/239,172 Continuation US10423437B2 (en) 2016-08-17 2016-08-17 Hot-plugging of virtual functions in a virtualized environment

Publications (2)

Publication Number Publication Date
US20200019429A1 true US20200019429A1 (en) 2020-01-16
US11061712B2 US11061712B2 (en) 2021-07-13

Family

ID=61191685

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/239,172 Active 2037-05-13 US10423437B2 (en) 2016-08-17 2016-08-17 Hot-plugging of virtual functions in a virtualized environment
US16/579,519 Active US11061712B2 (en) 2016-08-17 2019-09-23 Hot-plugging of virtual functions in a virtualized environment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/239,172 Active 2037-05-13 US10423437B2 (en) 2016-08-17 2016-08-17 Hot-plugging of virtual functions in a virtualized environment

Country Status (1)

Country Link
US (2) US10423437B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11409619B2 (en) 2020-04-29 2022-08-09 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3376376B1 (en) 2017-01-20 2020-03-11 Huawei Technologies Co., Ltd. Method, network card, host device and computer system for forwarding data packages
MX2019011257A (en) 2017-03-28 2019-11-01 Cloudjumper Corp Methods and systems for providing wake-on-demand access to session servers.
US11055125B2 (en) 2017-11-10 2021-07-06 Microsoft Technology Licensing, Llc Virtual machine client-side virtual network change
WO2019124259A1 (en) * 2017-12-20 2019-06-27 日本電気株式会社 Configuration management device, configuration management system, configuration management method, and configuration management program
US11329955B2 (en) * 2018-01-24 2022-05-10 Vmware, Inc. Remote session based micro-segmentation
US10698709B2 (en) 2018-03-07 2020-06-30 Microsoft Technology Licensing, Llc Prediction of virtual machine demand
GB2573283B (en) * 2018-04-26 2020-04-29 Metaswitch Networks Ltd Improvements relating to network functions virtualization
US11822946B2 (en) * 2018-06-28 2023-11-21 Cable Television Laboratories, Inc. Systems and methods for secure network management of virtual network functions
US11563677B1 (en) 2018-06-28 2023-01-24 Cable Television Laboratories, Inc. Systems and methods for secure network management of virtual network function
US11372580B2 (en) * 2018-08-07 2022-06-28 Marvell Asia Pte, Ltd. Enabling virtual functions on storage media
US11573870B2 (en) 2018-08-22 2023-02-07 Intel Corporation Zero copy host interface in a scalable input/output (I/O) virtualization (S-IOV) architecture
US10831532B2 (en) * 2018-10-19 2020-11-10 International Business Machines Corporation Updating a nested virtualization manager using live migration of virtual machines
US10819589B2 (en) * 2018-10-24 2020-10-27 Cognizant Technology Solutions India Pvt. Ltd. System and a method for optimized server-less service virtualization
US10880370B2 (en) 2018-11-27 2020-12-29 At&T Intellectual Property I, L.P. Virtual network manager system
US11010084B2 (en) * 2019-05-03 2021-05-18 Dell Products L.P. Virtual machine migration system
CN113535370A (en) * 2020-04-09 2021-10-22 深圳致星科技有限公司 Method and equipment for realizing multiple RDMA network card virtualization of load balancing
US20210382737A1 (en) * 2020-06-03 2021-12-09 Baidu Usa Llc Data protection with dynamic resource isolation for data processing accelerators
US11822964B2 (en) 2020-06-03 2023-11-21 Baidu Usa Llc Data protection with static resource partition for data processing accelerators
US20220052904A1 (en) * 2020-08-11 2022-02-17 F5 Networks, Inc. Managing network ports in a virtualization environment
US20220197679A1 (en) * 2020-12-18 2022-06-23 Advanced Micro Devices (Shanghai) Co., Ltd. Modifying device status in single virtual function mode
CN114662162B (en) * 2022-05-25 2022-09-20 广州万协通信息技术有限公司 Multi-algorithm-core high-performance SR-IOV encryption and decryption system and method for realizing dynamic VF distribution

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
PT3130396T (en) * 2009-03-27 2021-05-12 Bend Res Inc Spray-drying process
US9210140B2 (en) * 2009-08-19 2015-12-08 Solarflare Communications, Inc. Remote functionality selection
FR2954847B1 (en) * 2009-12-30 2012-10-26 Thales Sa SYSTEM AND METHOD FOR CENTRALIZED NAVIGATION INFORMATION MANAGEMENT
EP2609539B1 (en) 2010-08-27 2016-10-05 Hewlett-Packard Development Company, L.P. Virtual hotplug techniques
US8683478B2 (en) * 2010-12-21 2014-03-25 International Business Machines Corporation Best fit mapping of self-virtualizing input/output device virtual functions for mobile logical partitions
US8418166B2 (en) 2011-01-11 2013-04-09 International Business Machines Corporation Transparent update of adapter firmware for self-virtualizing input/output device
US8533713B2 (en) 2011-03-29 2013-09-10 Intel Corporation Efficent migration of virtual functions to enable high availability and resource rebalance
US9218195B2 (en) * 2011-05-17 2015-12-22 International Business Machines Corporation Vendor-independent resource configuration interface for self-virtualizing input/output device
US8634393B2 (en) * 2011-08-05 2014-01-21 Cisco Technology, Inc. Channel scanning in a network having one or more access points
US8954704B2 (en) 2011-08-12 2015-02-10 International Business Machines Corporation Dynamic network adapter memory resizing and bounding for virtual function translation entry storage
CN104272288B (en) 2012-06-08 2018-01-30 英特尔公司 For realizing the method and system of virtual machine VM Platform communication loopbacks
US9990221B2 (en) * 2013-03-15 2018-06-05 Oracle International Corporation System and method for providing an infiniband SR-IOV vSwitch architecture for a high performance cloud computing environment
US9081599B2 (en) * 2013-05-28 2015-07-14 Red Hat Israel, Ltd. Adjusting transfer rate of virtual machine state in virtual machine migration
WO2015130837A1 (en) 2014-02-25 2015-09-03 Dynavisor, Inc. Dynamic information virtualization
US9294567B2 (en) 2014-05-02 2016-03-22 Cavium, Inc. Systems and methods for enabling access to extensible storage devices over a network as local storage via NVME controller
US9723008B2 (en) 2014-09-09 2017-08-01 Oracle International Corporation System and method for providing an integrated firewall for secure network communication in a multi-tenant environment
US9594649B2 (en) * 2014-10-13 2017-03-14 At&T Intellectual Property I, L.P. Network virtualization policy management system
KR102364712B1 (en) * 2015-04-03 2022-02-18 한국전자통신연구원 A system and method for integrated service orchestration in distributed cloud envronment
US10387180B2 (en) * 2015-07-07 2019-08-20 International Business Machines Corporation Hypervisor controlled redundancy for I/O paths using virtualized I/O adapters
US9996382B2 (en) * 2016-04-01 2018-06-12 International Business Machines Corporation Implementing dynamic cost calculation for SRIOV virtual function (VF) in cloud environments

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11409619B2 (en) 2020-04-29 2022-08-09 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration
US11983079B2 (en) 2020-04-29 2024-05-14 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration

Also Published As

Publication number Publication date
US10423437B2 (en) 2019-09-24
US11061712B2 (en) 2021-07-13
US20180052701A1 (en) 2018-02-22

Similar Documents

Publication Publication Date Title
US11061712B2 (en) Hot-plugging of virtual functions in a virtualized environment
US11716383B2 (en) Accessing multiple external storages to present an emulated local storage through a NIC
US10778521B2 (en) Reconfiguring a server including a reconfigurable adapter device
US11636053B2 (en) Emulating a local storage by accessing an external storage through a shared port of a NIC
US11068355B2 (en) Systems and methods for maintaining virtual component checkpoints on an offload device
JP6771650B2 (en) Methods, devices, and systems for virtual machines to access physical servers in cloud computing systems
US8533713B2 (en) Efficent migration of virtual functions to enable high availability and resource rebalance
US9674103B2 (en) Management of addresses in virtual machines
US8830870B2 (en) Network adapter hardware state migration discovery in a stateful environment
US9031081B2 (en) Method and system for switching in a virtualized platform
JP5222651B2 (en) Virtual computer system and control method of virtual computer system
JP4972670B2 (en) Virtual computer system, access control method thereof, and communication apparatus
EP4127892A1 (en) Distributed storage services supported by a nic
US10367688B2 (en) Discovering changes of network interface controller names
US10560535B2 (en) System and method for live migration of remote desktop session host sessions without data loss
US10795727B2 (en) Flexible automated provisioning of single-root input/output virtualization (SR-IOV) devices
US20140289198A1 (en) Tracking and maintaining affinity of machines migrating across hosts or clouds
US11805102B2 (en) Remote management of software on private networks
US11360824B2 (en) Customized partitioning of compute instances
WO2017046830A1 (en) Method and system for managing instances in computer system including virtualized computing environment
US10754676B2 (en) Sharing ownership of an input/output device using a device driver partition
US20230017676A1 (en) Virtual accelerators in a virtualized computing system
US20170206091A1 (en) Sharing ownership of an input/output device with an existing partition
Partitions ESXi Install

Legal Events

Date Code Title Description
AS Assignment

Owner name: RED HAT ISRAEL, LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KAPLAN, ALONA;KOLESNIK, MICHAEL;REEL/FRAME:050466/0102

Effective date: 20160817

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE