WO2014114072A1 - Method and apparatus for adjusting I/O channels under a virtualization platform - Google Patents

Method and apparatus for adjusting I/O channels under a virtualization platform

Info

Publication number
WO2014114072A1
WO2014114072A1 PCT/CN2013/080837 CN2013080837W WO2014114072A1 WO 2014114072 A1 WO2014114072 A1 WO 2014114072A1 CN 2013080837 W CN2013080837 W CN 2013080837W WO 2014114072 A1 WO2014114072 A1 WO 2014114072A1
Authority
WO
WIPO (PCT)
Prior art keywords
vms
end devices
throughput
host
average
Prior art date
Application number
PCT/CN2013/080837
Other languages
English (en)
French (fr)
Inventor
张洁 (Zhang Jie)
金鑫 (Jin Xin)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to KR1020137034172A priority Critical patent/KR101559097B1/ko
Priority to EP13802520.0A priority patent/EP2772854B1/en
Priority to JP2014557993A priority patent/JP5923627B2/ja
Priority to RU2013158942/08A priority patent/RU2573733C1/ru
Priority to AU2013273688A priority patent/AU2013273688B2/en
Priority to US14/108,804 priority patent/US8819685B2/en
Publication of WO2014114072A1 publication Critical patent/WO2014114072A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage

Definitions

  • the present invention relates to the field of virtualization technologies, and in particular, to a method and an apparatus for adjusting an I/O channel under a virtualization platform.

Background art
  • Virtualization is the abstraction and transformation of computer physical resources such as servers, networks, memory and storage.
  • after virtualization, resources are presented in a new virtual form that is not limited by the installation mode, geography, or physical configuration of the existing resources.
  • the physical host HOST runs multiple virtual machines (Virtual Machines, VMs for short).
  • HOST manages all physical hardware devices and resources, and virtualizes one exclusive device into multiple virtual devices for multiple user threads to use simultaneously.
  • the devices that users can see are virtual devices, and the physical hardware devices are transparent to users.
  • the VM does not directly access the hardware device.
  • HOST provides the VM with a data path connecting the hardware devices, that is, an I/O channel.
  • the I/O channel includes a data channel between the front-end device (Front Device) of the VM and the back-end device (Back Device) of the VM, and a data channel between the back-end device of the VM and the native device (Native Device) of the HOST.
  • the front-end device of the VM is the device seen inside the virtual machine; it is actually a device that the HOST emulates for the VM.
  • the back-end device of the VM is the software emulation device in the HOST operating system that is connected to the front-end device of the VM; the native device of the HOST is the physical device of the HOST.
  • FIG. 1 illustrates a simple multi-I/O-channel technique under a virtualization platform in the prior art, taking two virtual machines VM1 and VM2 as an example.
  • between the front-end device and the back-end device of each VM there are multiple I/O channels (two I/O channels in FIG. 1 as an example).
  • the data processing module is a bridge between the front-end device and the back-end device of the VM, used for data copying, data filtering, or other data processing services; it contains multiple worker threads (two worker threads in FIG. 1 as an example), the number of worker threads is the same as the number of I/O channels between the front-end and back-end devices of the VM, and each I/O channel corresponds to one worker thread.
  • the channel between the back-end device of the VM and the bridge, and the channel between the bridge and the native device (Native Device), are single channels; the back-end device of the VM transmits data with the Native Device through this single channel.
  • the inventor has found that the above prior art has at least the following technical problems:
  • the number of I/O channels between the front-end device and the back-end device of a VM is determined when the VM is created and cannot be changed during the entire life cycle of the VM, so the channel resources occupied by these I/O channels cannot be changed.
  • when the I/O throughput between the front-end device and the back-end device of the VM changes, the I/O channel resources cannot be adjusted: when the I/O throughput drops, idle I/O channel resources cannot be released, which wastes them; when the I/O throughput increases, I/O channel resources cannot be added, so the data transmission capability of the I/O channel cannot be improved and system performance degrades.

Summary of the invention
  • Embodiments of the present invention provide an adjustment method and a HOST adjustment apparatus for an I/O channel under a virtualization platform, so as to dynamically adjust the allocation of I/O channel resources between the front-end devices and back-end devices of multiple VMs, thereby improving system performance.
  • the present invention provides a method for adjusting an I/O channel under a virtualization platform, including: a host HOST counts the average I/O throughput, at the current time, of a plurality of virtual machine VMs running on the HOST; when the average I/O throughput at the current time is greater than a first threshold, the HOST adds, between the front-end devices and the back-end devices of the plurality of VMs, a worker thread for processing the VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is added is less than the first threshold; or, when the average I/O throughput at the current time is less than a second threshold, the HOST reduces, between the front-end devices and the back-end devices of the plurality of VMs, a worker thread for processing the VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is reduced is greater than the second threshold; wherein the first threshold is greater than the second threshold; and the HOST adjusts, according to the added or reduced worker thread, the correspondence between the queues in the front-end and back-end devices of the plurality of VMs and the worker threads for processing the VMs.
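  • the threshold logic in this claim can be sketched as follows (a minimal illustration with hypothetical names; the patent does not prescribe an implementation):

```python
def adjust_worker_threads(total_io_throughput, num_vms, num_threads,
                          first_threshold, second_threshold):
    """Return the new worker-thread count for the data processing module.

    first_threshold > second_threshold; both bound the *average* per-VM
    I/O throughput, as in the claim."""
    assert first_threshold > second_threshold
    avg = total_io_throughput / num_vms      # HOST counts the average
    if avg > first_threshold:
        return num_threads + 1               # add a worker thread / channel
    if avg < second_threshold:
        return max(1, num_threads - 1)       # release an idle worker thread
    return num_threads                       # within bounds: no change
```

In practice the add/remove step would repeat until the average falls back inside the [second_threshold, first_threshold] band.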
  • before the HOST adds, between the front-end devices and the back-end devices of the plurality of VMs, a worker thread for processing the VMs, the method further includes:
  • the HOST compares the increase in CPU utilization with the increase in I/O throughput that adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would bring; if the increase in I/O throughput is greater than the increase in CPU utilization, the HOST performs the step of adding the worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs.
  • before the HOST reduces, between the front-end devices and the back-end devices of the plurality of VMs, a worker thread for processing the VMs, the method further includes:
  • the HOST judges whether the decrease in CPU utilization caused by reducing the worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; if not, the HOST performs the step of reducing the worker thread.
  • the HOST adjusts, according to the added or reduced worker thread for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads, specifically as follows:
  • when the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, the HOST makes each worker thread correspond to one queue in the front-end device and one queue in the back-end device of each VM;
  • or, when the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, an exclusive worker thread corresponds to the queues in the front-end and back-end devices of one VM, and a shared worker thread corresponds to the queues in the front-end and back-end devices of at least two VMs that have no exclusive worker thread; the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
  • the method further includes: the HOST adjusts the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device) in the HOST, so that a plurality of data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
  • the present invention further provides an apparatus for adjusting an I/O channel under a virtualization platform, applied to a HOST and including:
  • a statistics module, configured to count the average I/O throughput, at the current time, of a plurality of virtual machine VMs running on the host HOST;
  • a processing module, connected to the statistics module and configured to: when the average I/O throughput at the current time counted by the statistics module is greater than a first threshold, add a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is added is less than the first threshold; or, when the average I/O throughput at the current time counted by the statistics module is less than a second threshold, reduce a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is reduced is greater than the second threshold; wherein the first threshold is greater than the second threshold;
  • a first adjustment module, connected to the processing module and configured to adjust, according to the worker thread added or reduced by the processing module, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads, so that a plurality of data transmission channels are formed between the front-end devices and the back-end devices of the plurality of VMs.
  • the adjusting apparatus further includes: a determining module, configured to, when the average I/O throughput at the current time counted by the statistics module is greater than the first threshold, compare the increase in CPU utilization with the increase in I/O throughput that adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would bring; the processing module is configured to, if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is added is less than the first threshold.
  • the adjusting apparatus further includes: a determining module, configured to, when the average I/O throughput at the current time counted by the statistics module is less than the second threshold, judge whether the decrease in CPU utilization caused by reducing a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs;
  • the processing module is further configured to, if the decrease in CPU utilization caused by reducing the worker thread does not make it impossible to respond to the throughput of the plurality of VMs, reduce the worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is reduced is greater than the second threshold.
  • the first adjustment module is specifically configured to:
  • if the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, make each worker thread correspond to one queue in the front-end device and one queue in the back-end device of each VM; or, if the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, make an exclusive worker thread correspond to the queues in the front-end and back-end devices of one VM, and make a shared worker thread correspond to the queues in the front-end and back-end devices of at least two VMs that have no exclusive worker thread; the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
  • the adjusting apparatus further includes:
  • a second adjustment module, configured to adjust the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device) in the HOST, so that a plurality of data transmission channels are formed between the back-end devices and the Native Device.
  • the present invention further provides a host HOST, where the HOST includes: a native device (Native Device), front-end devices and back-end devices of a plurality of virtual machine VMs running on the HOST, and a data processing module between the front-end devices and the back-end devices of the plurality of VMs, wherein:
  • the data processing module is configured to: count the average I/O throughput, at the current time, of the plurality of VMs; add a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs when that average I/O throughput is greater than a first threshold, or reduce such a worker thread when it is less than a second threshold; and adjust the correspondence between the queues in the front-end and back-end devices of the plurality of VMs and the worker threads accordingly;
  • the data processing module is further configured to: compare the increase in CPU utilization with the increase in I/O throughput brought by adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs; if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker thread, so that the average I/O throughput of the plurality of VMs after the worker thread is added is less than the first threshold;
  • the data processing module is further configured to: judge whether the decrease in CPU utilization caused by reducing a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; if not, reduce the worker thread, so that the average I/O throughput of the plurality of VMs after the worker thread is reduced is greater than the second threshold.
  • in the embodiments of the present invention, the HOST determines, according to the average I/O throughput of the plurality of VMs at the current time, whether to add or reduce a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs: when the average I/O throughput of the plurality of VMs at the current time is greater than the first threshold, a worker thread is added, that is, I/O channel resources are increased and the data transmission capability of the I/O channel is improved; when the average I/O throughput of the plurality of VMs is less than the second threshold, a worker thread is reduced, that is, I/O channel resources are decreased, avoiding waste of I/O channel resources.
  • further, by adjusting the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device) in the host HOST, the HOST forms a plurality of data transmission channels between the back-end devices of the plurality of VMs and the Native Device, thereby realizing multiple I/O channels between the front-end devices of the plurality of VMs and the Native Device in the HOST and improving the data transmission capability of the plurality of VMs.
  • FIG. 1 is a structural diagram of a simple multi-I/O-channel technique under a virtualization platform in the prior art;
  • FIG. 2 is a flowchart of a method for adjusting an I/O channel under a virtualization platform according to an embodiment of the present invention;
  • FIG. 3 is an architecture diagram in which the I/O working mode between the front-end devices and the back-end devices of multiple VMs under the virtualization platform provided by the present invention is the shared mode;
  • FIG. 4 is an architecture diagram in which the I/O working mode between the front-end devices and the back-end devices of multiple VMs under the virtualization platform provided by the present invention is the hybrid mode;
  • FIG. 5 is a schematic structural diagram of an apparatus for adjusting an I/O channel under a virtualization platform according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an apparatus for adjusting an I/O channel under a virtualization platform according to another embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of a host HOST according to an embodiment of the present invention.

Detailed description
  • Host HOST: used as a management layer to manage and allocate hardware resources, and to provide virtual hardware resources for virtual machines, such as virtual processors (e.g. VCPUs), virtual memory, virtual disks, virtual network cards, and the like.
  • the virtual disk can correspond to a file of HOST or a logical block device.
  • the virtual machine runs on the virtual hardware platform that HOST prepares for it, and one or more virtual machines are running on the HOST.
  • Virtual Machine VM Virtual machine software can simulate one or more virtual computers on a single physical computer. These virtual machines work like real computers, and operating systems and applications can be installed on virtual machines. Virtual machines also have access to network resources. For an application running in a virtual machine, the virtual machine is like working on a real computer.
  • in the embodiments of the present invention, a data processing module is introduced between the front-end device and the back-end device of the VM; the data processing module is configured to process data transmission between the front-end device and the back-end device of the VM, and the processing is performed by worker threads.
  • the data processing module is generally implemented in software, that is, by a processor reading special-function software code instructions.
  • the Native Device may include various hardware.
  • the Native Device of a computing node may include a processor (such as a CPU) and memory, and may also include high-speed/low-speed input/output (I/O) devices such as a network card and storage devices.
  • Bridge: a network device or software between the back-end device of the VM and the native device (Native Device) of the host HOST, which implements network interconnection between them and forwards data frames.
  • the method for adjusting an I/O channel under a virtualization platform provided by the embodiment of the present invention specifically includes:
  • the host HOST counts the average I/O throughput of the plurality of virtual machine VMs running on the HOST at the current time.
  • the HOST may first calculate the total I/O throughput of the plurality of VMs running on the HOST, and divide it by the number of virtual machines running on the HOST to obtain the average I/O throughput of the VMs at the current time.
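  • this statistics step is just a mean over the running VMs; as a sketch (the function name is hypothetical):

```python
def average_io_throughput(per_vm_throughput):
    """Total I/O throughput of all VMs on the HOST divided by the number
    of running VMs (per_vm_throughput: one measurement per VM)."""
    return sum(per_vm_throughput) / len(per_vm_throughput)
```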
  • when the average I/O throughput at the current time is greater than the first threshold, the HOST adds a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is added is less than the first threshold; or, when the average I/O throughput at the current time is less than the second threshold, the HOST reduces a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker thread is reduced is greater than the second threshold.
  • the first threshold is greater than the second threshold; the first threshold indicates the upper limit of the average I/O throughput of the plurality of VMs and reflects the maximum I/O throughput that a single VM can bear, while the second threshold indicates the lower limit of the average I/O throughput of the plurality of VMs and reflects the minimum I/O throughput that a single VM should bear.
  • optionally, before adding the worker thread, the method further includes:
  • the HOST compares the increase in CPU utilization with the increase in I/O throughput that adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs would bring;
  • if the increase in I/O throughput is greater than the increase in CPU utilization, the HOST performs the step of adding a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs.
  • the increase in CPU utilization refers to the CPU utilization with the added worker thread relative to the CPU utilization without it, and may be expressed as an absolute increase in CPU utilization and/or a growth rate of CPU utilization; the increase in I/O throughput likewise refers to the I/O throughput with the added worker thread relative to the I/O throughput without it.
  • the present invention does not limit how the increase in CPU utilization and the increase in I/O throughput are compared.
  • two methods are given as examples: if the absolute increase in I/O throughput is greater than the absolute increase in CPU utilization, or if the growth rate of I/O throughput is greater than the growth rate of CPU utilization, it is determined that the worker thread for processing the VMs should be added.
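  • the two example criteria can be sketched like this (names are hypothetical; comparing an absolute throughput gain with a CPU-utilization gain presumes both are expressed on comparable scales, which the text leaves open):

```python
def should_add_worker_thread(cpu_before, cpu_after, io_before, io_after):
    """Criterion 1: the absolute increase in I/O throughput exceeds the
    absolute increase in CPU utilization. Criterion 2: the growth *rate*
    of I/O throughput exceeds the growth rate of CPU utilization."""
    cpu_gain, io_gain = cpu_after - cpu_before, io_after - io_before
    absolute_ok = io_gain > cpu_gain
    rate_ok = io_gain / io_before > cpu_gain / cpu_before
    return absolute_ok or rate_ok
```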
  • optionally, the method further includes:
  • the plurality of VMs may also be prioritized, so that a high-priority VM keeps exclusive use of its worker thread and enjoys a dedicated I/O channel; regardless of the overall I/O load of the host HOST, the exclusive I/O channel resources of high-priority VMs are not affected. VMs of the same priority are processed according to the above method of adding or reducing worker threads.
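  • priority handling can be sketched as a filter over the VM set (the 'high'/'normal' encoding is our assumption, not the patent's):

```python
def vms_subject_to_adjustment(vm_priorities):
    """High-priority VMs keep their exclusive worker threads and dedicated
    I/O channels regardless of overall load; only the remaining VMs take
    part in the add/reduce adjustment described above."""
    return [vm for vm, prio in vm_priorities.items() if prio != 'high']
```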
  • the I/O channel resource includes a worker thread for processing the VM, and a queue in the front-end device and the back-end device of the VM.
  • the HOST adjusts, according to the added or reduced worker thread for processing the VMs, the correspondence between the queues in the front-end and back-end devices of the plurality of VMs and the worker threads for processing the VMs.
  • the foregoing correspondence includes: the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads.
  • the HOST adjusts these correspondences so that a plurality of data transmission channels are formed between the front-end devices and the back-end devices of the plurality of VMs.
  • specifically, the HOST adjusts these correspondences as follows:
  • if the number of worker threads for processing the VMs after the increase or decrease is smaller than the number of VMs running on the HOST, the HOST makes each worker thread correspond to one queue in the front-end device and one queue in the back-end device of each VM;
  • or, if the number of worker threads for processing the VMs after the increase or decrease is greater than or equal to the number of VMs running on the HOST, the HOST makes an exclusive worker thread correspond to the queues in the front-end and back-end devices of one VM, and makes a shared worker thread correspond to the queues in the front-end and back-end devices of at least two VMs that have no exclusive worker thread; the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
  • it should be noted that the above two adjustment modes correspond to the shared mode and the hybrid mode respectively, and are described in detail with reference to FIG. 3 and FIG. 4.
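  • the two mappings can be sketched as follows (an illustrative layout only, not the patent's exact assignment rule; in the hybrid branch we simply give the first VMs exclusive threads and share any remaining threads across all VMs, which matches the FIG. 4 layout):

```python
def map_threads_to_vms(num_threads, vm_names):
    """Return {thread_id: [VMs whose queues that thread serves]}.

    Shared mode (threads < VMs): every worker thread serves one queue in
    each VM's front-end and back-end device.
    Hybrid mode (threads >= VMs): thread i (i < len(vm_names)) exclusively
    serves VM i; any remaining threads are shared across all VMs."""
    if num_threads < len(vm_names):                              # shared mode
        return {t: list(vm_names) for t in range(num_threads)}
    mapping = {t: [vm_names[t]] for t in range(len(vm_names))}   # exclusive
    for t in range(len(vm_names), num_threads):                  # shared
        mapping[t] = list(vm_names)
    return mapping
```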
  • optionally, the method for adjusting the I/O channel under the virtualization platform further includes:
  • the HOST adjusts the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device) in the HOST, so that a plurality of data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
  • specifically, the Native Device may have multiple queues, and data in the back-end device of the VM may undergo queue selection when entering the Native Device so as to be transmitted through different queues; this can be implemented by the hardware driver in the Native Device.
  • in the other direction, when the Native Device transmits data to the back-end device of the VM through the bridge, it can likewise select among multiple queues in the back-end device of the VM, thereby implementing multiple data transmission channels between the back-end device of the VM and the Native Device. The correspondence between the queues is therefore, in effect, how the Native Device selects a queue in the back-end device of the VM when sending data to it through the bridge.
  • the queue selection can keep the channel consistent, so that data between a queue in the back-end device of the VM and a queue in the Native Device always travels over the same channel; alternatively, the Native Device can re-select different queues in the back-end device of the VM based on attributes of the data stream (such as whether packets come from the same source, or other attributes).
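  • flow-attribute-based queue selection is essentially a hash over the stream's identifying attributes, analogous to receive-side scaling in network drivers; a sketch (the hash function and key format are our choices, not the patent's):

```python
import zlib

def select_backend_queue(src, dst, num_queues):
    """Pick a queue in the VM back-end device from attributes of the data
    stream, so that packets of the same flow consistently use the same
    data transmission channel."""
    flow_key = f'{src}->{dst}'.encode()
    return zlib.crc32(flow_key) % num_queues
```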
  • in this way, by adjusting the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device) in the HOST, a plurality of data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device, multiple I/O channels between the front-end devices of the plurality of VMs and the Native Device of the HOST are realized, and the data transmission capability between the plurality of VMs and the Native Device of the HOST can be improved.
  • in summary, the HOST determines, according to the average I/O throughput of the plurality of VMs at the current time, whether to add or reduce a worker thread for processing the VMs between the front-end devices and the back-end devices of the plurality of VMs: when the average I/O throughput is greater than the first threshold, a worker thread is added, that is, I/O channel resources are increased and the data transmission capability of the I/O channel is improved; when the average I/O throughput is less than the second threshold, a worker thread is reduced, that is, I/O channel resources are decreased, avoiding waste of I/O channel resources.
  • the present invention provides two I/O working modes between the front-end devices and the back-end devices of the plurality of VMs: the shared mode and the hybrid mode. The two I/O working modes can be switched to each other: when a certain condition is met, one working mode is switched to the other.
  • when the number of worker threads after the increase or decrease is smaller than the number of VMs running on the HOST, the HOST adjusts the I/O working mode between the front-end devices and the back-end devices of the plurality of VMs to the shared mode, that is, the worker threads on the data processing module process the data of the queues in the front-end and back-end devices of the plurality of VMs in the shared manner: each worker thread corresponds to one queue in the front-end device and one queue in the back-end device of each VM running on the HOST.
  • FIG. 3 is a schematic diagram in which the I/O working mode between the front-end devices and the back-end devices of the VMs is the shared mode.
  • the VMs running on the host HOST are VM1, VM2, and VM3, and the worker threads on the data processing module are worker thread 1 and worker thread 2: worker thread 1 processes queue 1 in the front-end device and the back-end device of each of VM1, VM2, and VM3, and worker thread 2 processes queue 2 in the front-end device and the back-end device of each of VM1, VM2, and VM3.
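  • the shared-mode layout of FIG. 3 can be written out directly (names follow the figure):

```python
vms = ['VM1', 'VM2', 'VM3']
# In shared mode, worker thread t serves queue t in the front-end and
# back-end device of every VM (t = 1, 2 in FIG. 3).
shared_mode = {f'worker thread {t}': [(vm, f'queue {t}') for vm in vms]
               for t in (1, 2)}
```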
  • when the number of worker threads after the increase or decrease is greater than or equal to the number of VMs running on the HOST, the HOST adjusts the I/O working mode between the front-end devices and the back-end devices of the plurality of VMs to the hybrid mode, that is, the worker threads on the data processing module process the data of the queues in the front-end and back-end devices of the VMs in the hybrid manner.
  • in the hybrid mode, the worker threads on the data processing module are divided into exclusive worker threads and shared worker threads: an exclusive worker thread processes the data of the queues in the front-end and back-end devices of a single VM, while a shared worker thread processes the data of the queues of at least two VMs that have no exclusive worker thread.
  • FIG. 4 is a schematic diagram in which the I/O working mode between the front-end devices and the back-end devices of the VMs is the hybrid mode.
  • the VMs running on the host HOST are VM1, VM2, and VM3; the worker threads on the data processing module include exclusive worker threads and a shared worker thread, and the shared worker thread processes the data of queue 2 in the front-end device and the back-end device of each of VM1, VM2, and VM3.
  • FIG. 4 only illustrates the case where the shared worker thread processes the data of one queue in the front-end and back-end devices of each of the at least two VMs; in other cases the shared worker thread can also process the data of multiple queues, which is not limited in the present invention.
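  • FIG. 4's hybrid layout, under our reading of the figure (the exclusive thread on VM1's queue 1 is an assumption; the text only fixes the shared thread on queue 2 of all three VMs):

```python
# Hybrid mode: an exclusive worker thread serves one VM's queue, plus a
# shared worker thread covering queue 2 of VM1, VM2 and VM3.
hybrid_mode = {
    'worker thread 1 (exclusive)': [('VM1', 'queue 1')],
    'shared worker thread': [('VM1', 'queue 2'), ('VM2', 'queue 2'),
                             ('VM3', 'queue 2')],
}
```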
  • in addition to forming a plurality of I/O channels between the front-end devices and the back-end devices of the plurality of VMs, the host HOST also adjusts the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the native device (Native Device), so that a plurality of data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device; this realizes multiple I/O channels between the front-end devices of the plurality of VMs and the Native Device of the HOST, and improves the data transmission capability between the front-end devices of the plurality of VMs and the Native Device in the HOST.
  • FIG. 5 describes the structure of an adjusting apparatus 500 for I/O channels under a virtualization platform according to an embodiment of the present invention. The adjusting apparatus 500 specifically includes:
  • The statistics module 501 is configured to collect the average I/O throughput, at the current moment, of the plurality of virtual machines (VMs) running on the host HOST.
  • The processing module 502, connected to the statistics module 501, is configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold; or, when the average I/O throughput at the current moment collected by the statistics module 501 is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold.
  • The first adjusting module 503, connected to the processing module 502, is configured to adjust, according to the worker threads for processing the VMs added or reduced by the processing module 502, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
  • Optionally, the adjusting apparatus 500 further includes a judging module 504, configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is greater than the first threshold, compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. The processing module 502 is further configured to: if the judging module 504 determines that the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold.
  • Optionally, the adjusting apparatus 500 further includes a judging module 504, configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is less than the second threshold, judge whether the reduction in CPU utilization caused by reducing worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs. The processing module 502 is further configured to: if the judging module 504 determines that the reduction in CPU utilization caused by reducing the worker threads would not make it impossible to respond to the throughput of the plurality of VMs, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold.
  • The first adjusting module 503 is specifically configured to: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, make each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, make each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and make each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
  • Further, the adjusting apparatus 500 further includes a second adjusting module 505, configured to adjust the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device. Because the second adjusting module 505 forms multiple data channels between the back-end devices of the plurality of VMs and the local device Native Device of the host HOST, multiple I/O channels are implemented between the front-end devices of the plurality of VMs and the Native Device of the HOST, which improves the data transmission capability between the plurality of VMs and the Native Device of the HOST.
  • As can be seen from the above embodiment, the adjusting apparatus for I/O channels under the virtualization platform determines, according to the average I/O throughput of the plurality of VMs at the current moment, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.
  • FIG. 6 illustrates the structure of an adjusting apparatus 600 for I/O channels under a virtualization platform according to another embodiment of the present invention. The adjusting apparatus 600 includes at least one processor 601 (for example, a CPU), at least one network interface 604 or other user interface 603, a memory 605, and at least one communication bus 602, where the communication bus 602 is used to implement connection and communication between these components.
  • The apparatus 600 optionally includes the user interface 603, which includes a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch display).
  • The memory 605 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least one disk memory. The memory 605 may optionally include at least one storage device located remotely from the processor 601. In some implementations, the memory 605 may also include an operating system 606, which includes various programs for implementing various basic services and for processing hardware-based tasks.
  • Specifically, the processor 601 is configured to: collect the average I/O throughput, at the current moment, of the plurality of virtual machine VMs running on the host HOST; when the average I/O throughput at the current moment is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and adjust, according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
  • Further, if the average I/O throughput at the current moment is greater than the first threshold, before the step of adding the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the processor 601 is further configured to: compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, perform the step of adding the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.
  • If the average I/O throughput at the current moment is less than the second threshold, before the step of reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the processor 601 is further configured to: judge whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and if the reduction in CPU utilization caused by reducing the worker threads would not make it impossible to respond to the throughput of the plurality of VMs, perform the step of reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.
  • The processor 601 is configured to adjust, according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, by: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, making each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, making each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and making each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
  • The processor 601 is further configured to adjust the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
  • As can be seen from the above embodiment, the adjusting apparatus for I/O channels under the virtualization platform determines, according to the average I/O throughput of the plurality of VMs at the current moment, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.
  • The processor 601 may be an integrated circuit chip with the capability to execute instructions and data and with signal processing capabilities. In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The above processor may be a general-purpose processor (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present invention may be directly executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor. The software module may be located in a conventional storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 605, and the processor reads the information in the memory 605 and completes the steps of the above method in combination with its hardware.
  • FIG. 7 illustrates the structure of a host HOST 700 according to an embodiment of the present invention. The HOST 700 includes a local device Native Device 705, front-end devices and back-end devices of a plurality of virtual machine VMs running on the Native Device 705, a data processing module 702 located between the front-end devices and back-end devices of the VMs, and a bridge Bridge 704 located between the back-end devices of the plurality of VMs and the Native Device 705. The front-end devices of the plurality of VMs include a VM1 front-end device 7011 and a VM2 front-end device 7012; the back-end devices of the plurality of VMs include a VM1 back-end device 7031 and a VM2 back-end device 7032. The bridge Bridge 704 is a network device or software located between the back-end devices of the VMs and the local device of the host HOST; it implements network interconnection between the back-end devices of the VMs and the local device of the host HOST, and forwards data frames.
  • The local device Native Device 705 is the hardware platform on which the virtualized environment runs. The Native Device may include a variety of hardware; for example, the Native Device of a computing node may include a processor (such as a CPU) and memory, and may also include high-speed/low-speed input/output (I/O) devices such as a network card and a storage.
  • The data processing module 702 is configured to: collect the average I/O throughput of the plurality of VMs at the current moment; when the average I/O throughput at the current moment is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and, according to the added or reduced worker threads for processing the VMs, adjust the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
  • The data processing module 702 is further configured to: compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold.
  • The data processing module 702 is further configured to: judge whether the reduction in CPU utilization caused by reducing worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and if not, reduce the worker threads, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold.
  • The data processing module 702 may perform the method disclosed in the foregoing embodiments, which is not described here again and is not to be construed as limiting that method. In addition, the data processing module 702 is generally implemented by software, that is, by the processor reading software code instructions with special functions; implementing the data processing module 702 by software is only a preferred implementation of the present invention, and those skilled in the art may also implement the functions of the data processing module 702 by hardware logic such as a DSP, which is not limited by the present invention.
  • As can be seen from the above embodiment, the host HOST determines, according to the average I/O throughput of the plurality of VMs at the current moment, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.


Abstract

The present invention provides a method and an apparatus for adjusting I/O channels under a virtualization platform, applied to the field of virtualization technologies. The method for adjusting I/O channels includes: a host HOST collects the average I/O throughput, at the current moment, of a plurality of virtual machines (VMs) running on the HOST; the HOST adds or reduces, according to the average I/O throughput at the current moment, worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and the HOST adjusts the correspondence between the queues in the front-end devices and back-end devices of the plurality of VMs and the worker threads for processing the VMs. By applying the present invention, the I/O channel resources occupied by the I/O channels between the front-end devices and back-end devices of a plurality of VMs running on a HOST are dynamically adjusted according to the I/O throughput of the plurality of VMs: when the I/O throughput decreases, idle I/O channel resources are released, avoiding waste of I/O channel resources; when the I/O throughput increases, I/O channel resources are added, improving the data transmission capability of the I/O channels.

Description

Method and Apparatus for Adjusting I/O Channels Under a Virtualization Platform

This application claims priority to Chinese Patent Application No. 201310027312.7, filed with the Chinese Patent Office on January 24, 2013 and entitled "Method and Apparatus for Adjusting I/O Channels Under a Virtualization Platform", which is incorporated herein by reference in its entirety.

TECHNICAL FIELD
The present invention relates to the field of virtualization technologies, and in particular, to a method and an apparatus for adjusting I/O channels under a virtualization platform.

BACKGROUND
Virtualization abstracts and converts physical computer resources such as servers, networks, memory, and storage, so that the new virtual parts of these resources are not limited by the way the existing resources are set up, or by their geography or physical configuration. A physically existing host HOST runs multiple virtual machines (Virtual Machines, VMs for short). The HOST manages all physical hardware devices and resources, and virtualizes one exclusive device into multiple virtual devices for simultaneous use by multiple user threads. What each user can see is the virtual device; the physical hardware devices are transparent to the users.
In a virtualized environment, a VM cannot directly access hardware devices; the HOST provides the VM with a data path to the hardware devices, that is, an I/O channel. In the present invention, an I/O channel includes the data channel between the front-end device (Front Device) of a VM and the back-end device (Back Device) of the VM, and the data channel between the back-end device of the VM and the local device (Native Device) of the HOST. The front-end device of a VM is the device seen inside the virtual machine, which is actually a device emulated by the HOST for the VM; the back-end device of a VM is the software-emulated device in the HOST operating system that interfaces with the front-end device of the VM; the Native Device of the HOST is the physical device of the HOST.
FIG. 1 describes a simple multi-I/O-channel technique under a virtualization platform in the prior art, taking two virtual machines VM1 and VM2 as an example. There are multiple I/O channels between the front-end device and the back-end device of a VM (two I/O channels in FIG. 1). The data processing module is a bridge between the front-end devices and back-end devices of the VMs, used for data copying, data filtering, or other data processing services, and includes multiple worker threads (two in FIG. 1). The number of worker threads equals the number of I/O channels between the front-end devices and back-end devices of the VMs, and each I/O channel corresponds to one worker thread. Between the back-end device of a VM and the bridge Bridge, and between the bridge Bridge and the local device Native Device, there is a single channel, through which the back-end device of the VM implements data transmission with the Native Device.
The inventor finds that the foregoing prior art has at least the following technical problems: the number of I/O channels between the front-end device and the back-end device of a VM is determined when the VM is created and cannot be changed during the entire life cycle of the VM, so the channel resources occupied by the I/O channels between the front-end device and the back-end device of the VM cannot be changed either. When the I/O throughput between the front-end device and the back-end device of a VM changes, the I/O channel resources cannot be adjusted: when the I/O throughput decreases, idle I/O channel resources cannot be released, causing waste of I/O channel resources; when the I/O throughput increases, I/O channel resources cannot be added, the data transmission capability of the I/O channels cannot be improved, and system performance degrades.

SUMMARY
Embodiments of the present invention provide a method and an apparatus (HOST) for adjusting I/O channels under a virtualization platform, so as to dynamically adjust the allocation of I/O channel resources between the front-end devices and back-end devices of a plurality of VMs, thereby improving system performance.
According to a first aspect, the present invention provides a method for adjusting I/O channels under a virtualization platform, including: collecting, by a host HOST, the average I/O throughput, at the current moment, of a plurality of virtual machines (VMs) running on the HOST; when the average I/O throughput at the current moment is greater than a first threshold, adding, by the HOST, worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment is less than a second threshold, reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and adjusting, by the HOST according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
In a first possible implementation manner, with reference to the first aspect, if the average I/O throughput at the current moment is greater than the first threshold, before the step of adding, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the method further includes: comparing, by the HOST, the increase in CPU utilization with the increase in I/O throughput that would be brought by adding the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, performing the step of adding, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.
In a second possible implementation manner, with reference to the first aspect, if the average I/O throughput at the current moment is less than the second threshold, before the step of reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the method further includes: judging, by the HOST, whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and if the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would not make it impossible to respond to the throughput of the plurality of VMs, performing the step of reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.
In a third possible implementation manner, with reference to the first aspect or the first or second possible implementation manner of the first aspect, the adjusting, by the HOST according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs includes: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, making, by the HOST, each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, making, by the HOST, each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and making each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
In a fourth possible implementation manner, with reference to the first aspect or the first, second, or third possible implementation manner of the first aspect, after the adjusting, by the HOST according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, the method further includes: adjusting, by the HOST, the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
According to a second aspect, the present invention provides an adjusting apparatus (HOST) for I/O channels under a virtualization platform, including: a statistics module, configured to collect the average I/O throughput, at the current moment, of a plurality of virtual machines (VMs) running on the host HOST; a processing module, connected to the statistics module and configured to: when the average I/O throughput at the current moment collected by the statistics module is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment collected by the statistics module is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and a first adjusting module, connected to the processing module and configured to adjust, according to the worker threads for processing the VMs added or reduced by the processing module, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
In a first possible implementation manner, with reference to the second aspect, the adjusting apparatus further includes: a judging module, configured to: when the average I/O throughput at the current moment collected by the statistics module is greater than the first threshold, compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and the processing module is further configured to: if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold.
In a second possible implementation manner, with reference to the second aspect, the adjusting apparatus further includes: a judging module, configured to: when the average I/O throughput at the current moment collected by the statistics module is less than the second threshold, judge whether the reduction in CPU utilization caused by reducing worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and the processing module is further configured to: if the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would not make it impossible to respond to the throughput of the plurality of VMs, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold.
In a third possible implementation manner, with reference to the second aspect or the first or second possible implementation manner of the second aspect, the first adjusting module is specifically configured to: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, make each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, make each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and make each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
In a fourth possible implementation manner, with reference to the second aspect or the first, second, or third possible implementation manner of the second aspect, the adjusting apparatus further includes: a second adjusting module, configured to adjust the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
According to a third aspect, the present invention provides a host HOST, where the HOST includes: a local device Native Device; front-end devices and back-end devices of a plurality of virtual machines (VMs) running on the Native Device; and a data processing module located between the front-end devices and back-end devices of the plurality of VMs, where the data processing module is configured to: collect the average I/O throughput of the plurality of VMs at the current moment; when the average I/O throughput at the current moment is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and adjust, according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
In a first possible implementation manner, with reference to the third aspect, the data processing module is further configured to: compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold.
In a second possible implementation manner, with reference to the third aspect, the data processing module is further configured to: judge whether the reduction in CPU utilization caused by reducing worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and if the reduction in CPU utilization caused by reducing the worker threads would not make it impossible to respond to the throughput of the plurality of VMs, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold.
It can be seen that, in the embodiments of the present invention, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of a plurality of VMs is determined according to the average I/O throughput of the plurality of VMs at the current moment. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.
Further, by adjusting the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the host HOST, the HOST forms multiple data transmission channels between the back-end devices of the plurality of VMs and the Native Device, thereby implementing multiple I/O channels between the front-end devices of the plurality of VMs and the Native Device of the HOST and improving the data transmission capability between the front-end devices of the plurality of VMs and the Native Device of the HOST.

BRIEF DESCRIPTION OF DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is an architectural diagram of a simple multi-I/O-channel technique under a virtualization platform in the prior art; FIG. 2 is a flowchart of a method for adjusting I/O channels under a virtualization platform according to an embodiment of the present invention; FIG. 3 is an architectural diagram in which the I/O working mode between the front-end devices and back-end devices of a plurality of VMs under the virtualization platform provided by the present invention is the shared mode;
FIG. 4 is an architectural diagram in which the I/O working mode between the front-end devices and back-end devices of a plurality of VMs under the virtualization platform provided by the present invention is the mixed mode;
FIG. 5 is a schematic structural diagram of an adjusting apparatus for I/O channels under a virtualization platform according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an adjusting apparatus for I/O channels under a virtualization platform according to another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a host HOST according to an embodiment of the present invention.

DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
To facilitate understanding of the embodiments of the present invention, several elements that will be introduced in the description of the embodiments are first presented here.
Host HOST: serves as the management layer to manage and allocate hardware resources, and presents a virtual hardware platform for virtual machines, providing, for example, virtual processors (such as VCPUs), virtual memory, virtual disks, and virtual network cards. The virtual disk may correspond to a file or a logical block device of the HOST. A virtual machine runs on the virtual hardware platform that the HOST prepares for it, and one or more virtual machines run on the HOST.
Virtual machine VM: one or more virtual computers may be emulated on one physical computer by virtual machine software, and these virtual machines work like real computers. An operating system and application programs can be installed on a virtual machine, and a virtual machine can also access network resources. For an application program running in a virtual machine, the virtual machine works as if it were a real computer.
Data processing module: in the present invention, a data processing module is introduced between the front-end devices and back-end devices of the VMs. The data processing module handles data transmission between the front-end devices and back-end devices of the VMs, and processes the data of the queues in the front-end devices and back-end devices of the VMs through worker threads. The data processing module is generally implemented by software, that is, by the processor reading software code instructions with special functions.
Local device Native Device: the hardware platform on which the virtualized environment runs. The Native Device may include a variety of hardware; for example, the Native Device of a computing node may include a processor (such as a CPU) and memory, and may also include high-speed/low-speed input/output (I/O) devices such as a network card and a storage.
Bridge: a network device or software located between the back-end devices of the VMs and the local device Native Device of the host HOST, implementing network interconnection between the back-end devices of the VMs and the Native Device of the host HOST and forwarding data frames. FIG. 2 describes the flow of a method for adjusting I/O channels under a virtualization platform according to an embodiment of the present invention, which specifically includes:
S201: A host HOST collects the average I/O throughput, at the current moment, of a plurality of virtual machines (VMs) running on the HOST.
Specifically, the HOST may first collect the total I/O throughput, at the current moment, of the plurality of VMs running on the HOST, and then divide it by the number of virtual machines running on the HOST to obtain the average I/O throughput of the plurality of VMs at the current moment.
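Purely as an illustrative sketch (the function and variable names are hypothetical and not part of the disclosure), the statistics step S201 can be expressed as:

```python
def average_io_throughput(vm_throughputs):
    """Return the average I/O throughput, at the current moment, of the
    VMs running on the HOST: total throughput divided by the VM count.

    vm_throughputs: per-VM I/O throughput samples (e.g. in MB/s),
    one entry per virtual machine running on the HOST.
    """
    if not vm_throughputs:
        return 0.0
    total = sum(vm_throughputs)          # total I/O throughput of all VMs
    return total / len(vm_throughputs)   # divide by number of running VMs
```

For example, three VMs with throughputs of 100, 200, and 300 MB/s yield an average of 200 MB/s, which is the value compared against the two thresholds in step S202.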
S202: When the average I/O throughput at the current moment is greater than a first threshold, the HOST adds worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; or, when the average I/O throughput at the current moment is less than a second threshold, the HOST reduces the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.
Here, the worker threads for processing the VMs are added between the front-end devices and back-end devices of the plurality of VMs so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold; the worker threads for processing the VMs are reduced between the front-end devices and back-end devices of the plurality of VMs so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold. The first threshold is greater than the second threshold: the first threshold indicates the upper limit of the average I/O throughput of the plurality of VMs, and the second threshold indicates the lower limit of the average I/O throughput of the plurality of VMs; that is, the first threshold reflects the maximum I/O throughput that a single VM can bear, and the second threshold reflects the minimum I/O throughput that a single VM should bear.
Further, if the average I/O throughput at the current moment is greater than the first threshold, before the step of adding, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the method further includes: comparing, by the HOST, the increase in CPU utilization with the increase in I/O throughput that would be brought by adding the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, performing the step of adding, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs.

Here, the increase in CPU utilization refers to the CPU utilization added by having the additional worker threads for processing the VMs relative to not having them, and may be expressed by the increment and/or the growth rate of the CPU utilization; the increase in I/O throughput refers to the additional I/O throughput processed by having the additional worker threads for processing the VMs relative to not having them, and may be expressed by the increment and/or the growth rate of the I/O throughput. It should be noted that the present invention does not limit how the increase in CPU utilization and the increase in I/O throughput are specifically compared; two measures are given here as examples: if the increment of I/O throughput is greater than the increment of CPU utilization, or the growth rate of I/O throughput is greater than the growth rate of CPU utilization, it is determined to add the worker threads for processing the VMs.
Further, if the average I/O throughput at the current moment is less than the second threshold, before the step of reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, the method further includes: judging, by the HOST, whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and if the reduction in CPU utilization caused by reducing the worker threads would not make it impossible to respond to the throughput of the plurality of VMs, performing the step of reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. That is, if the reduction in CPU utilization would make it impossible to respond to the throughput of the plurality of VMs, the worker threads are not reduced.
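The add/reduce decision of step S202, together with the two pre-checks described above, can be sketched as follows. This is an illustrative interpretation with hypothetical names and growth-rate inputs, not a definitive implementation of the disclosure:

```python
def decide_adjustment(avg_tp, t1, t2,
                      io_growth, cpu_growth,
                      reduction_keeps_responsive):
    """Decide whether to add or reduce worker threads (step S202).

    avg_tp: average I/O throughput of the VMs at the current moment.
    t1, t2: first (upper) and second (lower) thresholds, t1 > t2.
    io_growth, cpu_growth: estimated growth rates of I/O throughput and
        of CPU utilization that adding worker threads would bring.
    reduction_keeps_responsive: True if, after reducing worker threads,
        the remaining CPU can still respond to the VMs' throughput.
    """
    assert t1 > t2, "the first threshold must be greater than the second"
    if avg_tp > t1 and io_growth > cpu_growth:
        return "add"      # increase I/O channel resources
    if avg_tp < t2 and reduction_keeps_responsive:
        return "reduce"   # release idle I/O channel resources
    return "keep"         # leave the worker threads unchanged
```

The "keep" branch covers both the hysteresis band between the two thresholds and the cases where a pre-check fails (adding would cost more CPU than it gains, or reducing would make the VMs' throughput unserviceable).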
Optionally, priorities may also be set for the plurality of VMs, so that a VM with a high priority keeps exclusive use of its worker threads and enjoys dedicated I/O channels; regardless of the overall I/O load of the host HOST, the I/O channel resources exclusively occupied by the high-priority VM are not affected. VMs of the same priority level are handled according to the above method of adding or reducing worker threads.
It should be noted that, in the present invention, the I/O channel resources include the worker threads for processing the VMs and the queues in the front-end devices and back-end devices of the VMs.
S203: The HOST adjusts, according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices and back-end devices of the plurality of VMs and the worker threads for processing the VMs.

The above correspondence includes the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs. The HOST adjusts these correspondences respectively so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
Specifically, the adjusting, by the HOST according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs includes: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, making, by the HOST, each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, making, by the HOST, each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and making each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads. It should be noted that the above two adjustment manners correspond to the shared mode and the mixed mode respectively; for details about the shared mode and the mixed mode, see the descriptions of FIG. 3 and FIG. 4.
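A minimal sketch of the two correspondence-adjustment manners (shared mode and mixed mode) follows. It assumes, for illustration only, that each VM exposes two queues numbered 0 and 1; the naming, the queue layout, and the round-robin assignment of leftover queues to shared threads are assumptions, not specified by the patent:

```python
def map_queues_to_threads(num_threads, num_vms):
    """Map worker threads to per-VM queues (step S203).

    Shared mode (threads < VMs): worker thread t handles queue t in the
    front-end and back-end devices of every VM.
    Mixed mode (threads >= VMs): the first num_vms threads are exclusive
    threads, one per VM, each handling that VM's queue 0; the remaining
    (shared) threads split the queues not taken by exclusive threads.

    Returns {thread_id: [(vm_id, queue_id), ...]}.
    """
    mapping = {t: [] for t in range(num_threads)}
    if num_threads < num_vms:                      # shared mode
        for t in range(num_threads):
            for vm in range(num_vms):
                mapping[t].append((vm, t))         # thread t <-> queue t of each VM
    else:                                          # mixed mode
        for vm in range(num_vms):
            mapping[vm].append((vm, 0))            # one exclusive thread per VM
        shared = list(range(num_vms, num_threads))
        if shared:
            # queues not taken by exclusive threads (here: queue 1 of
            # every VM) are distributed round-robin to shared threads
            for i, vm in enumerate(range(num_vms)):
                mapping[shared[i % len(shared)]].append((vm, 1))
    return mapping
```

With 2 threads and 3 VMs this reproduces the shared mode of FIG. 3 (each thread serves one queue of every VM); with 4 threads and 3 VMs it reproduces the mixed mode of FIG. 4 (three exclusive threads plus one shared thread serving queue 2 of all three VMs, indexed here as queue 1).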
Further, after the HOST adjusts, according to the added or reduced worker threads for processing the VMs, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, the method for adjusting I/O channels under the virtualization platform further includes: adjusting, by the HOST, the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device.
Specifically, in the prior art, the Native Device may have multiple queues, and data from the back-end device of a VM selects a queue when entering the Native Device so that data transmission is implemented through different queues; this technique may be implemented by the hardware driver in the Native Device. In the present invention, because the back-end device of a VM also has multiple queues, the Native Device, when sending data to the back-end device of the VM through the bridge Bridge, may also select among the multiple queues in the back-end device of the VM, so that multiple data transmission channels are formed between the back-end device of the VM and the Native Device. Adjusting the correspondence between the queues is therefore, in effect, determining how the Native Device selects a queue in the back-end device of the VM when sending data to the back-end device of the VM through the bridge Bridge.
As for how to select a queue in the back-end device of a VM, a queue in the back-end device of the VM may be selected according to the queue in the Native Device that carries the data, so as to keep the consistency of the channels through which the queues in the back-end device of the VM and the queues in the Native Device transmit data to each other; or, the Native Device may re-select different queues in the back-end device of the VM according to attributes of the data flow (for example, whether the data comes from the same source). Through the above selection manners, the present invention adjusts the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST.
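The flow-attribute-based selection manner can be sketched as a stable hash from a flow's source identifier to a back-end queue index, so that data from the same source is consistently steered to the same queue. The use of CRC32 here is an illustrative assumption; the patent does not prescribe a specific selection function:

```python
import zlib

def select_backend_queue(flow_src, num_queues):
    """Pick a queue index in a VM's back-end device for a data flow.

    flow_src: an attribute identifying the flow's source (assumed here
        to be a string such as a source address).
    num_queues: number of queues in the VM's back-end device.

    A stable checksum keeps the source -> queue mapping deterministic
    across calls, so one flow always lands on the same queue.
    """
    key = flow_src.encode("utf-8")
    return zlib.crc32(key) % num_queues
```

Because the mapping is deterministic, repeated packets of the same flow never switch channels, while distinct sources are spread across the available queues.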
Because multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the local device Native Device by adjusting the correspondence between the queues, multiple I/O channels are implemented between the front-end devices of the plurality of VMs and the Native Device of the HOST, which can improve the data transmission capability between the plurality of VMs and the Native Device of the HOST.
As can be seen from the above embodiment, the HOST determines, according to the average I/O throughput of the plurality of VMs at the current moment, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.
Considering that the I/O channel resources after the addition or reduction according to the average I/O throughput at the current moment are limited, and in particular that the worker threads among the I/O channel resources are limited, the present invention sets, according to the number of worker threads among the I/O channel resources and the number of VMs running on the host HOST, two I/O working modes between the front-end devices and back-end devices of the plurality of VMs: the shared mode and the mixed mode. The two I/O working modes can be switched to each other; when a certain condition is met, one working mode can be switched to the other.
For the shared mode: when the number of worker threads for processing the VMs is less than the number of VMs running on the HOST, the HOST adjusts the I/O working mode between the front-end devices and back-end devices of the plurality of VMs to the shared mode; that is, the worker threads on the data processing module adopt the shared mode to process the data of the queues in the front-end devices and back-end devices of the plurality of VMs. Specifically, each worker thread on the data processing module corresponds to one queue in the front-end device of each VM running on the HOST and one queue in the back-end device of each VM.
FIG. 3 is a schematic diagram in which the I/O working mode between the front-end devices and back-end devices of the VMs is the shared mode. As shown, the VMs running on the host HOST are VM1, VM2, and VM3, and the worker threads on the data processing module are worker thread 1 and worker thread 2, where worker thread 1 processes queue 1 in the front-end and back-end devices of each of VM1, VM2, and VM3, and worker thread 2 processes queue 2 in the front-end and back-end devices of each of VM1, VM2, and VM3.
For the mixed mode: when the number of worker threads for processing the VMs is greater than or equal to the number of VMs running on the HOST, the HOST adjusts the I/O working mode between the front-end devices and back-end devices of the plurality of VMs to the mixed mode; that is, the worker threads on the data processing module adopt the mixed mode to process the data of the queues in the front-end devices and back-end devices of the VMs. Specifically, the worker threads on the data processing module can be divided into exclusive worker threads and shared worker threads: an exclusive worker thread alone processes the data of one queue in the front-end and back-end devices of a single VM, while a shared worker thread processes, in a shared manner, the data of the queues, in the front-end and back-end devices of at least two VMs, that are not processed by any exclusive worker thread.
FIG. 4 is a schematic diagram in which the I/O working mode between the front-end devices and back-end devices of the VMs is the mixed mode. As shown, the VMs running on the host HOST are VM1, VM2, and VM3, and there are four worker threads on the data processing module: exclusive worker thread 1, exclusive worker thread 2, exclusive worker thread 3, and shared worker thread 1. Exclusive worker thread 1 alone processes the data of queue 1 in the front-end and back-end devices of VM1, exclusive worker thread 2 alone processes the data of queue 1 in the front-end and back-end devices of VM2, exclusive worker thread 3 alone processes the data of queue 1 in the front-end and back-end devices of VM3, and the shared worker thread processes, in a shared manner, the data of queue 2 in the front-end and back-end devices of each of VM1, VM2, and VM3. FIG. 4 illustrates only the case where the shared worker thread processes the data of one queue in the front-end and back-end devices of each of at least two VMs; the shared worker thread may also process the data of multiple queues in other situations, which is not limited by the present invention.
Further, as can be seen from FIG. 3 and FIG. 4, in addition to forming multiple I/O channels between the front-end devices and back-end devices of the plurality of VMs as described above, the host HOST also forms, by adjusting the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, multiple data transmission channels between the back-end devices of the plurality of VMs and the Native Device, thereby implementing multiple I/O channels between the front-end devices of the plurality of VMs and the Native Device of the HOST and improving the data transmission capability between the front-end devices of the plurality of VMs and the Native Device of the HOST.
With respect to the above method for adjusting I/O channels under a virtualization platform, the following embodiments of the present invention provide structures of adjusting apparatuses for I/O channels under a virtualization platform. FIG. 5 describes the structure of an adjusting apparatus 500 for I/O channels under a virtualization platform according to an embodiment of the present invention. The adjusting apparatus 500 specifically includes:
a statistics module 501, configured to collect the average I/O throughput, at the current moment, of the plurality of virtual machines (VMs) running on the host HOST;
a processing module 502, connected to the statistics module 501 and configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold, or, when the average I/O throughput at the current moment collected by the statistics module 501 is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold, where the first threshold is greater than the second threshold; and
a first adjusting module 503, connected to the processing module 502 and configured to adjust, according to the worker threads for processing the VMs added or reduced by the processing module 502, the correspondence between the queues in the front-end devices of the plurality of VMs and the worker threads for processing the VMs, and the correspondence between the queues in the back-end devices of the plurality of VMs and the worker threads for processing the VMs, so that multiple data transmission channels are formed between the front-end devices of the plurality of VMs and the back-end devices of the plurality of VMs.
Optionally, the adjusting apparatus 500 further includes:
a judging module 504, configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is greater than the first threshold, compare the increase in CPU utilization with the increase in I/O throughput that would be brought by adding worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs; and the processing module 502 is further configured to: if the judging module 504 determines that the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are added is less than the first threshold.
Optionally, the adjusting apparatus 500 further includes:
a judging module 504, configured to: when the average I/O throughput at the current moment collected by the statistics module 501 is less than the second threshold, judge whether the reduction in CPU utilization caused by reducing worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs would make it impossible to respond to the throughput of the plurality of VMs; and the processing module 502 is further configured to: if the judging module 504 determines that the reduction in CPU utilization caused by reducing the worker threads would not make it impossible to respond to the throughput of the plurality of VMs, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs, so that the average I/O throughput of the plurality of VMs after the worker threads are reduced is greater than the second threshold.
Further, the first adjusting module 503 is specifically configured to: when the number of worker threads for processing the VMs after the addition or reduction is less than the number of VMs running on the HOST, make each worker thread correspond to one queue in the front-end device of each VM and one queue in the back-end device of each VM; or, when the number of worker threads for processing the VMs after the addition or reduction is greater than or equal to the number of VMs running on the HOST, make each exclusive worker thread correspond to one queue in the front-end and back-end devices of a single VM, and make each shared worker thread correspond to the queues, in the front-end and back-end devices of at least two VMs, that are not made to correspond to any exclusive worker thread, where the worker threads for processing the VMs include the exclusive worker threads and the shared worker threads.
Further, the adjusting apparatus 500 further includes a second adjusting module 505, configured to adjust the correspondence between the queues in the back-end devices of the plurality of VMs and the queues in the local device Native Device of the HOST, so that multiple data transmission channels are formed between the back-end devices of the plurality of VMs and the Native Device. Because the second adjusting module 505 forms multiple data channels between the back-end devices of the plurality of VMs and the local device Native Device of the host HOST, multiple I/O channels are implemented between the front-end devices of the plurality of VMs and the Native Device of the HOST, which improves the data transmission capability between the plurality of VMs and the Native Device of the HOST.
As can be seen from the above embodiment, the adjusting apparatus for I/O channels under the virtualization platform determines, according to the average I/O throughput of the plurality of VMs at the current moment, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the plurality of VMs. When the average I/O throughput of the plurality of VMs at the current moment is greater than the first threshold, worker threads for processing the VMs are added, that is, I/O channel resources are increased, which improves the data transmission capability of the I/O channels; when the average I/O throughput of the plurality of VMs at the current moment is less than the second threshold, worker threads for processing the VMs are reduced, that is, I/O channel resources are released, which avoids waste of I/O channel resources.
FIG. 6 depicts the structure of an apparatus 600 for adjusting I/O channels on a virtualization platform provided by another embodiment of the present invention. The adjustment apparatus 600 includes: at least one processor 601, for example a CPU; at least one network interface 604 or other user interface 603; a memory 605; and at least one communication bus 602. The communication bus 602 is configured to implement connection and communication among these components. The apparatus 600 optionally includes the user interface 603, including a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen). The memory 605 may include a high-speed RAM and may also include a non-volatile memory, for example at least one disk storage. The memory 605 may optionally include at least one storage apparatus located remotely from the processor 601. In some implementations, the memory 605 may further include an operating system 606, containing various programs for implementing various basic services and processing hardware-based tasks.
Specifically, the processor 601 is configured to:
collect statistics on the current average I/O throughput of multiple virtual machines (VMs) running on the host machine HOST; when the current average I/O throughput is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold; or, when the current average I/O throughput is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
adjust, according to the worker threads after the addition or reduction, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, respectively, so as to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
Further, if the current average I/O throughput is greater than the first threshold, before the step of adding the worker threads between the front-end devices and back-end devices of the multiple VMs, the processor 601 is further configured to: compare the increase in CPU utilization and the increase in I/O throughput that would result from adding the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, perform the step of adding the worker threads between the front-end devices and back-end devices of the multiple VMs.
If the current average I/O throughput is less than the second threshold, before the step of reducing the worker threads between the front-end devices and back-end devices of the multiple VMs, the processor 601 is further configured to: determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs would make it impossible to keep up with the throughput of the multiple VMs; and if the reduction in CPU utilization would not make it impossible to keep up with the throughput of the multiple VMs, perform the step of reducing the worker threads between the front-end devices and back-end devices of the multiple VMs.
Further, the processor 601 being configured to adjust, according to the worker threads after the addition or reduction, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, respectively, includes:
when the number of worker threads after the addition or reduction is less than the number of VMs running on the HOST, mapping each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM, respectively; or, when the number of worker threads after the addition or reduction is greater than or equal to the number of VMs running on the HOST, mapping an exclusive worker thread to one queue in the front-end device and back-end device of one VM, and mapping a shared worker thread to queues, in the front-end devices and back-end devices of at least two VMs, that are not mapped to any exclusive worker thread; wherein the worker threads for processing the VMs include exclusive worker threads and shared worker threads.
Further, the processor 601 is further configured to adjust the correspondence between the queues in the back-end devices of the multiple VMs and the queues in the Native Device of the HOST, so as to form multiple data transmission channels between the back-end devices of the multiple VMs and the Native Device.
It can be seen from the foregoing embodiment that the apparatus for adjusting I/O channels on a virtualization platform determines, according to the current average I/O throughput of the multiple VMs, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs. When the current average I/O throughput of the multiple VMs is greater than the first threshold, worker threads are added, that is, I/O channel resources are increased, improving the data transmission capability of the I/O channels; when the current average I/O throughput of the multiple VMs is less than the second threshold, worker threads are reduced, that is, I/O channel resources are decreased, avoiding waste of I/O channel resources.
It should be noted that the method disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 601. The processor 601 may be an integrated circuit chip with the capability to execute instructions and data and to process signals. During implementation, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in the form of software. The foregoing processor may be a general-purpose processor (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 605; the processor reads the information in the memory 605 and completes the steps of the foregoing method in combination with its hardware.
FIG. 7 depicts the structure of a host machine HOST 700 provided by an embodiment of the present invention. The HOST includes a native device (Native Device) 705; front-end devices and back-end devices of multiple virtual machines (VMs) running on the Native Device 705; a data processing module 702 located between the front-end devices and back-end devices of the multiple VMs; and a bridge (Bridge) 704 located between the back-end devices of the multiple VMs and the Native Device 705.
The front-end devices of the multiple VMs include a VM1 front-end device 7011 and a VM2 front-end device 7012; the back-end devices of the multiple VMs include a VM1 back-end device 7031 and a VM2 back-end device 7032. The Bridge 704 is a network device or software located between the back-end devices of the VMs and the Native Device of the host machine HOST; it implements network interconnection between the back-end devices of the VMs and the Native Device of the HOST and forwards data frames. The Native Device 705 is the hardware platform on which the virtualization environment runs; the Native Device may include a variety of hardware. For example, the Native Device of a compute node may include a processor (for example, a CPU) and memory, and may further include high-speed/low-speed input/output (I/O) devices such as a network interface card and storage.
The data processing module 702 is configured to:
collect statistics on the current average I/O throughput of the multiple VMs;
when the current average I/O throughput is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold; or, when the current average I/O throughput is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
adjust, according to the worker threads after the addition or reduction, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, respectively, so as to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
Optionally, the data processing module 702 is further configured to:
compare the increase in CPU utilization and the increase in I/O throughput that would result from adding worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs; and if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold.
Optionally, the data processing module 702 is further configured to:
determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs would make it impossible to keep up with the throughput of the multiple VMs; and if the reduction in CPU utilization would not make it impossible to keep up with the throughput of the multiple VMs, reduce the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold.
It should be noted that the data processing module 702 may perform the method disclosed in the first embodiment, which is not described again here; this should not be construed as limiting the method disclosed in the first embodiment. In addition, the data processing module 702 is generally implemented in software, that is, by a processor reading software code instructions with specific functions. Implementing the data processing module 702 in software is merely a preferred implementation of the present invention; a person skilled in the art may likewise implement the functionality of the data processing module 702 using hardware logic such as a processor (for example, a CPU or a DSP), and the present invention is not limited in this respect.
It can be seen from the foregoing embodiment that the host machine HOST determines, according to the current average I/O throughput of the multiple VMs, whether to add or reduce worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs. When the current average I/O throughput of the multiple VMs is greater than the first threshold, worker threads are added, that is, I/O channel resources are increased, improving the data transmission capability of the I/O channels; when the current average I/O throughput of the multiple VMs is less than the second threshold, worker threads are reduced, that is, I/O channel resources are decreased, avoiding waste of I/O channel resources.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features thereof; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims

1. A method for adjusting I/O channels on a virtualization platform, comprising: collecting, by a host machine HOST, statistics on the current average I/O throughput of multiple virtual machines (VMs) running on the HOST;
when the current average I/O throughput is greater than a first threshold, adding, by the HOST, worker threads for processing the VMs between front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold; or, when the current average I/O throughput is less than a second threshold, reducing, by the HOST, the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
adjusting, by the HOST according to the worker threads after the addition or reduction, the correspondence between queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between queues in the back-end devices of the multiple VMs and the worker threads, respectively, so as to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
2. The method according to claim 1, wherein if the current average I/O throughput is greater than the first threshold, before the step of adding, by the HOST, the worker threads between the front-end devices and back-end devices of the multiple VMs, the method further comprises:
comparing, by the HOST, the increase in CPU utilization and the increase in I/O throughput that would result from adding the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs; and
if the increase in I/O throughput is greater than the increase in CPU utilization, performing the step of adding, by the HOST, the worker threads between the front-end devices and back-end devices of the multiple VMs.
3. The method according to claim 1, wherein if the current average I/O throughput is less than the second threshold, before the step of reducing, by the HOST, the worker threads between the front-end devices and back-end devices of the multiple VMs, the method further comprises:
determining, by the HOST, whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs would make it impossible to keep up with the throughput of the multiple VMs; and if the reduction in CPU utilization caused by reducing the worker threads between the front-end devices and back-end devices of the multiple VMs would not make it impossible to keep up with the throughput of the multiple VMs, performing the step of reducing, by the HOST, the worker threads between the front-end devices and back-end devices of the multiple VMs.
4. The method according to any one of claims 1 to 3, wherein the adjusting, by the HOST according to the worker threads after the addition or reduction, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, respectively, comprises:
when the number of worker threads after the addition or reduction is less than the number of VMs running on the HOST, mapping, by the HOST, each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM, respectively; or,
when the number of worker threads after the addition or reduction is greater than or equal to the number of VMs running on the HOST, mapping, by the HOST, an exclusive worker thread to one queue in the front-end device and back-end device of one VM, and mapping a shared worker thread to queues, in the front-end devices and back-end devices of at least two VMs, that are not mapped to the exclusive worker thread; wherein the worker threads for processing the VMs comprise the exclusive worker thread and the shared worker thread.
5. The method according to any one of claims 1 to 4, wherein after the adjusting, by the HOST according to the worker threads after the addition or reduction, the correspondence between the queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between the queues in the back-end devices of the multiple VMs and the worker threads, respectively, the method further comprises:
adjusting, by the HOST, the correspondence between the queues in the back-end devices of the multiple VMs and queues in a native device (Native Device) of the HOST, so as to form multiple data transmission channels between the back-end devices of the multiple VMs and the Native Device.
6. An apparatus for adjusting I/O channels on a virtualization platform, comprising: a statistics module, configured to collect statistics on the current average I/O throughput of multiple virtual machines (VMs) running on a host machine HOST;
a processing module, connected to the statistics module and configured to: when the current average I/O throughput collected by the statistics module is greater than a first threshold, add worker threads for processing the VMs between front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold; or, when the current average I/O throughput collected by the statistics module is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
a first adjustment module, connected to the processing module and configured to adjust, according to the worker threads added or reduced by the processing module, the correspondence between queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between queues in the back-end devices of the multiple VMs and the worker threads, respectively, so as to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
7. The adjustment apparatus according to claim 6, further comprising: a judging module, configured to: when the current average I/O throughput collected by the statistics module is greater than the first threshold, compare the increase in CPU utilization and the increase in I/O throughput that would result from adding the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs; wherein the processing module is further configured to: if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold.
8. The adjustment apparatus according to claim 6, further comprising: a judging module, configured to: when the current average I/O throughput collected by the statistics module is less than the second threshold, determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs would make it impossible to keep up with the throughput of the multiple VMs;
wherein the processing module is further configured to: if the reduction in CPU utilization caused by reducing the worker threads between the front-end devices and back-end devices of the multiple VMs would not make it impossible to keep up with the throughput of the multiple VMs, reduce the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold.
9. The adjustment apparatus according to any one of claims 6 to 8, wherein the first adjustment module is specifically configured to:
when the number of worker threads after the addition or reduction is less than the number of VMs running on the HOST, map each worker thread to one queue in the front-end device of each VM and one queue in the back-end device of each VM, respectively; or, when the number of worker threads after the addition or reduction is greater than or equal to the number of VMs running on the HOST, map an exclusive worker thread to one queue in the front-end device and back-end device of one VM, and map a shared worker thread to queues, in the front-end devices and back-end devices of at least two VMs, that are not mapped to the exclusive worker thread; wherein the worker threads for processing the VMs comprise the exclusive worker thread and the shared worker thread.
10. The adjustment apparatus according to any one of claims 6 to 9, further comprising:
a second adjustment module, configured to adjust the correspondence between the queues in the back-end devices of the multiple VMs and queues in a native device (Native Device) of the HOST, so as to form multiple data transmission channels between the back-end devices of the multiple VMs and the Native Device.
11. A host machine HOST, comprising: a native device (Native Device); front-end devices and back-end devices of multiple virtual machines (VMs) running on the Native Device; a data processing module located between the front-end devices and back-end devices of the multiple VMs; and a bridge (Bridge) located between the back-end devices of the multiple VMs and the Native Device, wherein:
the data processing module is configured to:
collect statistics on the current average I/O throughput of the multiple VMs;
when the current average I/O throughput is greater than a first threshold, add worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold; or, when the current average I/O throughput is less than a second threshold, reduce the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold; wherein the first threshold is greater than the second threshold; and
adjust, according to the worker threads after the addition or reduction, the correspondence between queues in the front-end devices of the multiple VMs and the worker threads, and the correspondence between queues in the back-end devices of the multiple VMs and the worker threads, respectively, so as to form multiple data transmission channels between the front-end devices and the back-end devices of the multiple VMs.
12. The HOST according to claim 11, wherein the data processing module is further configured to: compare the increase in CPU utilization and the increase in I/O throughput that would result from adding the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs; and
if the increase in I/O throughput is greater than the increase in CPU utilization, add the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the addition is less than the first threshold.
13. The HOST according to claim 11, wherein the data processing module is further configured to: determine whether the reduction in CPU utilization caused by reducing the worker threads for processing the VMs between the front-end devices and back-end devices of the multiple VMs would make it impossible to keep up with the throughput of the multiple VMs; and if the reduction in CPU utilization caused by reducing the worker threads between the front-end devices and back-end devices of the multiple VMs would not make it impossible to keep up with the throughput of the multiple VMs, reduce the worker threads between the front-end devices and back-end devices of the multiple VMs, so that the average I/O throughput of the multiple VMs after the reduction is greater than the second threshold.
PCT/CN2013/080837 2013-01-24 2013-08-05 虚拟化平台下i/o通道的调整方法和调整装置 WO2014114072A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
KR1020137034172A KR101559097B1 (ko) 2013-01-24 2013-08-05 가상 플랫폼 상의 i/o 채널 조정 방법 및 장치
EP13802520.0A EP2772854B1 (en) 2013-01-24 2013-08-05 Regulation method and regulation device for i/o channels in virtualization platform
JP2014557993A JP5923627B2 (ja) 2013-01-24 2013-08-05 仮想プラットフォーム上でi/oチャネルを調整する方法及び装置
RU2013158942/08A RU2573733C1 (ru) 2013-01-24 2013-08-05 Способ и устройство для регулировки канала i/о на виртуальной платформе
AU2013273688A AU2013273688B2 (en) 2013-01-24 2013-08-05 Method and apparatus for adjusting I/O channel on virtual platform
US14/108,804 US8819685B2 (en) 2013-01-24 2013-12-17 Method and apparatus for adjusting I/O channel on virtual platform

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310027312.7A CN103116517B (zh) 2013-01-24 2013-01-24 虚拟化平台下i/o通道的调整方法和调整装置
CN201310027312.7 2013-01-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/108,804 Continuation US8819685B2 (en) 2013-01-24 2013-12-17 Method and apparatus for adjusting I/O channel on virtual platform

Publications (1)

Publication Number Publication Date
WO2014114072A1 true WO2014114072A1 (zh) 2014-07-31

Family

ID=48414901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080837 WO2014114072A1 (zh) 2013-01-24 2013-08-05 虚拟化平台下i/o通道的调整方法和调整装置

Country Status (7)

Country Link
EP (1) EP2772854B1 (zh)
JP (1) JP5923627B2 (zh)
KR (1) KR101559097B1 (zh)
CN (1) CN103116517B (zh)
AU (1) AU2013273688B2 (zh)
RU (1) RU2573733C1 (zh)
WO (1) WO2014114072A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857464A (zh) * 2017-11-30 2019-06-07 财团法人工业技术研究院 用于平台部署与操作移动操作系统的系统及其方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100578350B1 (ko) * 2002-08-29 2006-05-11 엘지전자 주식회사 진공청소기의 집진케이스
US8819685B2 (en) 2013-01-24 2014-08-26 Huawei Technologies Co., Ltd. Method and apparatus for adjusting I/O channel on virtual platform
CN103116517B (zh) * 2013-01-24 2016-09-14 华为技术有限公司 虚拟化平台下i/o通道的调整方法和调整装置
WO2015168946A1 (zh) * 2014-05-09 2015-11-12 华为技术有限公司 快速输入输出报文处理方法、装置及系统
WO2016101282A1 (zh) * 2014-12-27 2016-06-30 华为技术有限公司 一种i/o任务处理的方法、设备和系统
CN109240802B (zh) * 2018-09-21 2022-02-18 北京百度网讯科技有限公司 请求处理方法和装置
KR102212512B1 (ko) * 2019-02-28 2021-02-04 성균관대학교산학협력단 가상화기술에서 뮤텍스 객체를 이용한 소프트웨어 기반 은닉채널 구성 시스템

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020049869A1 (en) * 2000-10-25 2002-04-25 Fujitsu Limited Virtual computer system and method for swapping input/output devices between virtual machines and computer readable storage medium
CN102317917A (zh) * 2011-06-30 2012-01-11 华为技术有限公司 热点域虚拟机cpu调度方法及虚拟机系统
CN102508718A (zh) * 2011-11-22 2012-06-20 杭州华三通信技术有限公司 一种虚拟机负载均衡方法和装置
CN103116517A (zh) * 2013-01-24 2013-05-22 华为技术有限公司 虚拟化平台下i/o通道的调整方法和调整装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7788665B2 (en) * 2006-02-28 2010-08-31 Microsoft Corporation Migrating a virtual machine that owns a resource such as a hardware device
US8443398B2 (en) * 2006-11-01 2013-05-14 Skyfire Labs, Inc. Architecture for delivery of video content responsive to remote interaction
CN101499021A (zh) 2008-01-31 2009-08-05 国际商业机器公司 在多个虚拟机上动态分配资源的方法和装置
US8387059B2 (en) * 2008-07-02 2013-02-26 International Business Machines Corporation Black-box performance control for high-volume throughput-centric systems
US9152464B2 (en) * 2010-09-03 2015-10-06 Ianywhere Solutions, Inc. Adjusting a server multiprogramming level based on collected throughput values
JP2012234425A (ja) * 2011-05-06 2012-11-29 Canon Inc 画像処理装置及び画像処理方法
CN102591702B (zh) * 2011-12-31 2015-04-15 华为技术有限公司 虚拟化处理方法及相关装置和计算机系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020049869A1 (en) * 2000-10-25 2002-04-25 Fujitsu Limited Virtual computer system and method for swapping input/output devices between virtual machines and computer readable storage medium
CN102317917A (zh) * 2011-06-30 2012-01-11 华为技术有限公司 热点域虚拟机cpu调度方法及虚拟机系统
CN102508718A (zh) * 2011-11-22 2012-06-20 杭州华三通信技术有限公司 一种虚拟机负载均衡方法和装置
CN103116517A (zh) * 2013-01-24 2013-05-22 华为技术有限公司 虚拟化平台下i/o通道的调整方法和调整装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109857464A (zh) * 2017-11-30 2019-06-07 财团法人工业技术研究院 用于平台部署与操作移动操作系统的系统及其方法
CN109857464B (zh) * 2017-11-30 2022-02-25 财团法人工业技术研究院 用于平台部署与操作移动操作系统的系统及其方法

Also Published As

Publication number Publication date
AU2013273688B2 (en) 2015-08-06
EP2772854A4 (en) 2014-11-19
AU2013273688A1 (en) 2014-08-07
EP2772854A1 (en) 2014-09-03
KR101559097B1 (ko) 2015-10-08
JP2015513732A (ja) 2015-05-14
JP5923627B2 (ja) 2016-05-24
RU2573733C1 (ru) 2016-01-27
CN103116517B (zh) 2016-09-14
KR20140119624A (ko) 2014-10-10
CN103116517A (zh) 2013-05-22
EP2772854B1 (en) 2018-10-24

Similar Documents

Publication Publication Date Title
WO2014114072A1 (zh) 虚拟化平台下i/o通道的调整方法和调整装置
US10325343B1 (en) Topology aware grouping and provisioning of GPU resources in GPU-as-a-Service platform
US10552222B2 (en) Task scheduling method and apparatus on heterogeneous multi-core reconfigurable computing platform
US9459904B2 (en) NUMA I/O aware network queue assignments
US9710310B2 (en) Dynamically configurable hardware queues for dispatching jobs to a plurality of hardware acceleration engines
JP5689526B2 (ja) マルチキュー・ネットワーク・アダプタの動的再構成によるリソース・アフィニティ
US9092269B2 (en) Offloading virtual machine flows to physical queues
US8819685B2 (en) Method and apparatus for adjusting I/O channel on virtual platform
WO2016078178A1 (zh) 一种虚拟cpu调度方法
US20090055831A1 (en) Allocating Network Adapter Resources Among Logical Partitions
KR102309798B1 (ko) Sr-iov 기반 비휘발성 메모리 컨트롤러 및 그 비휘발성 메모리 컨트롤러에 의해 큐에 리소스를 동적 할당하는 방법
US9389921B2 (en) System and method for flexible device driver resource allocation
US10489208B1 (en) Managing resource bursting
WO2015101091A1 (zh) 一种分布式资源调度方法及装置
WO2022271239A1 (en) Queue scaling based, at least, in part, on processing load
US20190044832A1 (en) Technologies for optimized quality of service acceleration
Ekane et al. FlexVF: Adaptive network device services in a virtualized environment
Silva et al. VM performance isolation to support QoS in cloud
WO2023159652A1 (zh) 一种ai系统、内存访问控制方法及相关设备
WO2024027395A1 (zh) 一种数据处理方法及装置
US20220058062A1 (en) System resource allocation for code execution
US10877552B1 (en) Dynamic power reduction through data transfer request limiting
WO2023173961A1 (zh) 一种内存分配方法及相关产品
Brunet et al. Short paper: Dynamic optimization of communications over high speed networks
TALENCE Short Paper: Dynamic Optimization of Communications over High Speed Networks

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2013802520

Country of ref document: EP

Ref document number: 2013273688

Country of ref document: AU

ENP Entry into the national phase

Ref document number: 20137034172

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2014557993

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2013158942

Country of ref document: RU

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13802520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE